diff --git a/spaces/1gistliPinn/ChatGPT4/Examples/Adobe Photoshop Cs 9 Free Free Download.md b/spaces/1gistliPinn/ChatGPT4/Examples/Adobe Photoshop Cs 9 Free Free Download.md deleted file mode 100644 index 95a05c3f95af80d010f13e994a71ce0fabf5d238..0000000000000000000000000000000000000000 --- a/spaces/1gistliPinn/ChatGPT4/Examples/Adobe Photoshop Cs 9 Free Free Download.md +++ /dev/null @@ -1,9 +0,0 @@ -
Photoshop is considered one of the most powerful image-editing software packages available. It is used by digital photographers, graphic designers, and just about anyone with an interest in editing images. Photoshop can change almost any aspect of a photo, which makes it an impressive piece of software.

Adobe Photoshop CS 9 free download

Download ►►► https://imgfil.com/2uxX4P

Adobe Photoshop is also one of the most popular software programs available. It is used for a variety of tasks, including retouching images, creating graphics, building websites, and much more. The program is easy to use, offers plenty of features for editing almost any type of file or project, and can handle enormous files.

Photoshop is best known as the top image-editing software for the Mac, and it is available for OS X: a demo version of Photoshop CS6 can be downloaded from Adobe's website for Mac OS X. The latest versions of Photoshop Elements and Photoshop CS6 are on sale for $39.99 and $179.99 respectively. If you are not the most technology-savvy person, fear not: the software is fairly simple to use, and it can be installed on Mac OS X 10.6 and up or Windows XP and up.

The latest editions of Photoshop CS6 (Adobe Photoshop CS6, Adobe Photoshop CS6 Extended, and Adobe Photoshop CS6 Essentials) are on sale for $119.99, $399.99, and $219.99 respectively. They can be downloaded directly from the Adobe website, or you can pay through the Adobe Acrobat Connect site. If you have Adobe Acrobat, you can get the software running on your computer, and you also get access to Acrobat Reader, which lets you read, save, print, and annotate PDF files. Adobe Photoshop CS6 lets you work with images and text on separate layers or files, so you can combine, edit, or reposition any number of elements, and crop, adjust, or rotate the contents of each layer. It also lets you share a finished image or component with others. You can read the help files or simply use the extensive menus to find your way around. The whole package is free of adware, malware, and spyware.
\ No newline at end of file diff --git a/spaces/1gistliPinn/ChatGPT4/Examples/Free !!HOT!! Download 007 Facebook Hack V1.0 With Full Cracked.md b/spaces/1gistliPinn/ChatGPT4/Examples/Free !!HOT!! Download 007 Facebook Hack V1.0 With Full Cracked.md deleted file mode 100644 index b62726508f6ab59e163066b50f3d7d23284f5010..0000000000000000000000000000000000000000 --- a/spaces/1gistliPinn/ChatGPT4/Examples/Free !!HOT!! Download 007 Facebook Hack V1.0 With Full Cracked.md +++ /dev/null @@ -1,108 +0,0 @@ -
-

Free Download 007 Facebook Hack v1.0 with Full Cracked

-

If you are looking for a way to hack into any Facebook account, you have come to the right place. In this article, we will show you how to use 007 Facebook Hack v1.0 with Full Cracked, a powerful and easy-to-use tool that can help you spy on anyone's Facebook activities.

-

What is 007 Facebook Hack v1.0 with Full Cracked?

-

007 Facebook Hack v1.0 with Full Cracked is a software that allows you to hack any Facebook account by simply entering the email address or username of the target. You don't need to know the password or any other details of the account. The software will automatically retrieve the login information and display it on your screen.

-

free download 007 facebook hack v1.0 with full cracked


Download Zip ⚙⚙⚙ https://imgfil.com/2uy0zG



-

With 007 Facebook Hack v1.0 with Full Cracked, you can access the private messages, photos, videos, friends list, wall posts, comments, likes, groups, events, and more of any Facebook user. You can also change the password, profile picture, status, and other settings of the hacked account.

-

Why use 007 Facebook Hack v1.0 with Full Cracked?

-

There are many reasons why you might want to use 007 Facebook Hack v1.0 with Full Cracked. For example, you might want to:

- -

How to use 007 Facebook Hack v1.0 with Full Cracked?

-

Using 007 Facebook Hack v1.0 with Full Cracked is very simple and straightforward. Just follow these steps:

-
    -
  1. Download 007 Facebook Hack v1.0 with Full Cracked from the link below.
  2. -
  3. Extract the zip file and run the setup.exe file to install the software on your computer.
  4. -
  5. Open the software and enter the email address or username of the Facebook account you want to hack.
  6. -
  7. Click on the "Hack" button and wait for a few seconds.
  8. -
  9. The software will display the password and other details of the hacked account on your screen.
  10. -
  11. Enjoy!
  12. -
-

Where to download 007 Facebook Hack v1.0 with Full Cracked?

-

You can download 007 Facebook Hack v1.0 with Full Cracked for free from the link below. The software is safe and virus-free. It works on Windows XP, Vista, 7, 8, 10 and Mac OS X. It is compatible with all browsers and devices that support Facebook.

-

Download 007 Facebook Hack v1.0 with Full Cracked Here

-

Conclusion

-

007 Facebook Hack v1.0 with Full Cracked is a powerful and easy-to-use tool that can help you hack any Facebook account in minutes. You can use it for various purposes such as monitoring, recovering, protecting, or having fun with your Facebook accounts. You can download it for free from the link above and enjoy hacking!

-

Is 007 Facebook Hack v1.0 with Full Cracked legal?

-

Before you download and use 007 Facebook Hack v1.0 with Full Cracked, you might be wondering if it is legal or not. The answer is: it depends. Hacking someone's Facebook account without their consent is illegal and unethical in most countries. You could face legal consequences if you are caught or reported by the victim or Facebook. Therefore, we do not recommend or endorse using this tool for malicious purposes.

-

However, there are some situations where using 007 Facebook Hack v1.0 with Full Cracked might be legal or acceptable. For example, if you are hacking your own account that you lost access to, or if you have the permission of the account owner to hack their account for educational or testing purposes. In these cases, you are not violating anyone's privacy or rights, and you are using the tool responsibly and ethically.

-

-

What are the advantages of 007 Facebook Hack v1.0 with Full Cracked?

-

007 Facebook Hack v1.0 with Full Cracked has many advantages over other hacking tools available on the internet. Some of them are:

- -

What are the disadvantages of 007 Facebook Hack v1.0 with Full Cracked?

-

Despite its many advantages, 007 Facebook Hack v1.0 with Full Cracked also has some disadvantages that you should be aware of before using it. Some of them are:

- -

How to download 007 Facebook Hack v1.0 with Full Cracked?

-

Downloading 007 Facebook Hack v1.0 with Full Cracked is very easy and fast. You don't need to register or fill any surveys to get this tool. You just need to follow these simple steps:

-
    -
  1. Click on the download link below to go to the download page.
  2. -
  3. Choose one of the available download options and click on the download button.
  4. -
  5. Wait for the download to complete and save the file on your computer or device.
  6. -
  7. Extract the zip file and run the setup.exe file to install the software on your computer or device.
  8. -
  9. Enjoy!
  10. -
-

Note: Some antivirus programs might detect 007 Facebook Hack v1.0 with Full Cracked as a virus or malware. This is a false positive and you can safely ignore it. The software is clean and harmless.

-

How to update 007 Facebook Hack v1.0 with Full Cracked?

-

007 Facebook Hack v1.0 with Full Cracked is constantly updated by its developers to ensure its functionality and compatibility with the latest Facebook updates and security measures. You don't need to manually update this tool as it will automatically check for updates and download them whenever they are available.

-

However, if you want to manually check for updates or download the latest version of 007 Facebook Hack v1.0 with Full Cracked, you can do so by following these steps:

-
    -
  1. Open the software and click on the "About" button on the top right corner.
  2. -
  3. Click on the "Check for Updates" button and wait for a few seconds.
  4. -
  5. If there is an update available, click on the "Download Update" button and wait for the download to complete.
  6. -
  7. Run the update.exe file and follow the instructions to install the update on your computer or device.
  8. -
  9. Restart the software and enjoy!
  10. -
-

How to uninstall 007 Facebook Hack v1.0 with Full Cracked?

-

If you want to uninstall 007 Facebook Hack v1.0 with Full Cracked from your computer or device, you can do so by following these steps:

-
    -
  1. Go to the Start menu and click on Control Panel.
  2. -
  3. Click on Programs and Features or Add or Remove Programs depending on your Windows version.
  4. -
  5. Find 007 Facebook Hack v1.0 with Full Cracked in the list of installed programs and click on it.
  6. -
  7. Click on Uninstall or Remove and follow the instructions to uninstall the software from your computer or device.
  8. -
  9. Delete any leftover files or folders related to 007 Facebook Hack v1.0 with Full Cracked from your computer or device.
  10. -
-

What are the alternatives to 007 Facebook Hack v1.0 with Full Cracked?

-

If you are looking for other ways to hack Facebook accounts, you might want to consider some of the alternatives to 007 Facebook Hack v1.0 with Full Cracked. Some of them are:

- -

However, these methods are not as easy or reliable as 007 Facebook Hack v1.0 with Full Cracked. They require more time, effort, skill, and resources to execute. They also have more risks and limitations than 007 Facebook Hack v1.0 with Full Cracked.

-

What are the testimonials of 007 Facebook Hack v1.0 with Full Cracked?

-

Many users have tried and tested 007 Facebook Hack v1.0 with Full Cracked and have shared their positive feedback and reviews about this tool. Here are some of the testimonials from real users who have used 007 Facebook Hack v1.0 with Full Cracked:

-
-

"I was able to hack my girlfriend's Facebook account and found out that she was cheating on me with my best friend. Thanks to 007 Facebook Hack v1.0 with Full Cracked, I was able to confront them and end the relationship." - John, USA

-
-
-

"I forgot my Facebook password and I couldn't access my email or phone number to reset it. I was desperate to get back into my account because I had important messages and photos there. Luckily, I found 007 Facebook Hack v1.0 with Full Cracked and it helped me recover my account in minutes." - Lisa, UK

-
-
-

"I wanted to prank my friend by changing his profile picture and status to something funny. I used 007 Facebook Hack v1.0 with Full Cracked to hack his account and it worked like a charm. He was so confused and angry when he saw his account. It was hilarious." - Kevin, Canada

-
-

Conclusion

-

007 Facebook Hack v1.0 with Full Cracked is a powerful and easy-to-use tool that can help you hack any Facebook account in minutes. You can use it for various purposes such as monitoring, recovering, protecting, or having fun with your Facebook accounts. You can download it for free from the link below and enjoy hacking!

-

However, you should also be aware of the legal and ethical implications of using this tool. Hacking someone's Facebook account without their consent is illegal and unethical in most cases. You could face legal consequences if you are caught or reported by the victim or Facebook. Therefore, we do not recommend or endorse using this tool for malicious purposes.

-

You should also be careful of the risks and dangers of using this tool. Even if you are hacking your own account or someone else's account with their permission, you could still expose yourself or them to potential threats from hackers, scammers, stalkers, or other malicious actors who might try to access or misuse the hacked account.

-

Finally, you should also know that this tool is not guaranteed or foolproof. The tool might not work on some accounts that have strong security measures or verification methods in place. The tool might also fail to hack the account if the target changes their password or email address during the hacking process.

-

Therefore, you should use this tool responsibly and ethically, and at your own risk. We hope you found this article helpful and informative. If you have any questions or comments, please feel free to leave them below. Happy hacking!

\ No newline at end of file diff --git a/spaces/1pelhydcardo/ChatGPT-prompt-generator/assets/Bed Wars Mod APK How to Get Unlimited Money and Gcubes in the Best Block Game for Android.md b/spaces/1pelhydcardo/ChatGPT-prompt-generator/assets/Bed Wars Mod APK How to Get Unlimited Money and Gcubes in the Best Block Game for Android.md deleted file mode 100644 index acba8b933c45a4ff879fed470cf3d220ec15db31..0000000000000000000000000000000000000000 --- a/spaces/1pelhydcardo/ChatGPT-prompt-generator/assets/Bed Wars Mod APK How to Get Unlimited Money and Gcubes in the Best Block Game for Android.md +++ /dev/null @@ -1,131 +0,0 @@ - -

How to Download Bed Wars Mod APK

-

Do you love playing multiplayer games with your friends? Do you want to experience an exciting and addictive game that features four levels, each with unique objectives and strategies? If you answered yes, then you should try Bed Wars. Bed Wars is a mobile game that lets you team up with other players and protect your bed from being destroyed by your enemies. You can also collect resources, build bridges, upgrade weapons, and attack other beds. Sounds fun, right?

-

But what if we tell you that you can make this game even more fun by downloading Bed Wars Mod APK? This is a modified version of the game that gives you unlimited money and gcubes, which are the in-game currencies. With these resources, you can buy anything you want in the game without worrying about running out. You can also unlock all the skins, items, maps, modes, and more. This way, you can enjoy Bed Wars to the fullest.

-

how to download bed wars mod apk


Download Zip ——— https://urlin.us/2uSU8h



-

So how do you download Bed Wars Mod APK on your Android device? Don't worry, we got you covered. In this article, we will show you how to download and install this amazing modded game in just a few simple steps. We will also give you some tips on how to play Bed Wars Mod APK and have a blast with your friends. Let's get started!

-

What is Bed Wars?

-

Bed Wars is a popular mobile game developed by Blockman GO Studio. It is inspired by the Minecraft mini-game of the same name. The game has four levels: Solo, Duo, Trio, and Squad.

In each level, you will be assigned to a team with a color. Your team will have a bed that you need to protect from being destroyed by other teams. If your bed is destroyed, you will not be able to respawn and you will be eliminated from the game. The last team standing wins the game.

-

To protect your bed, you need to collect resources from the islands. There are three types of resources: iron, gold, and diamonds. Iron and gold can be used to buy items from the shop, such as blocks, weapons, armor, tools, and potions. Diamonds can be used to upgrade your team's abilities, such as sharpness, protection, haste, and heal pool.

-

You can also build bridges to connect your island to other islands. This way, you can access more resources, attack other beds, or defend your own bed. But be careful, as other teams can also use your bridges to invade your island. You need to be strategic and cooperative with your teammates to win the game.

-

Why Download Bed Wars Mod APK?

-

Bed Wars is a fun and addictive game that you can play for hours with your friends. However, it can also be frustrating and challenging if you don't have enough money and gcubes to buy the items and upgrades you need. You might also get bored of playing the same maps and modes over and over again.

-

That's why downloading Bed Wars Mod APK is a great idea. This is a modified version of the game that gives you unlimited money and gcubes, which are the in-game currencies. With these resources, you can buy anything you want in the game without worrying about running out. You can also unlock all the skins, items, maps, modes, and more. This way, you can enjoy Bed Wars to the fullest.

-

Some of the features of Bed Wars Mod APK are:

- -

As you can see, Bed Wars Mod APK is a must-have for any fan of the game. It will make your gaming experience more fun and exciting. You will be able to customize your character, equip yourself with the best weapons and armor, explore different maps and modes, and dominate the game with your friends.

-


-

How to Download Bed Wars Mod APK on Android?

-

Now that you know why you should download Bed Wars Mod APK, you might be wondering how to do it. Don't worry, it's very easy and simple. All you need is an Android device with at least 4 GB of RAM and 100 MB of free storage space. Then, follow these steps:

-

Step 1: Allow Unknown Apps on Android

-

The first thing you need to do is to allow unknown apps on your Android device. This means that you will be able to install apps that are not from the Google Play Store. To do this, go to your device settings and look for the security or privacy option. Then, find the unknown sources or install unknown apps option and enable it.

-

This will allow you to install Bed Wars Mod APK on your device without any problems. However, make sure that you only download apps from trusted sources and websites. Otherwise, you might end up installing malware or viruses on your device.

-

Step 2: Install an Android File Manager

-

The next thing you need to do is to install an Android file manager app on your device. This is an app that will help you find and manage the files on your device. You will need this app to locate and install the Bed Wars Mod APK file that you will download later.

-

There are many file manager apps that you can choose from, such as ES File Explorer, Astro File Manager, or Solid Explorer. You can download any of them from the Google Play Store for free. Once you have installed a file manager app on your device, open it and grant it the necessary permissions.

-

Step 3: Download the APK Installer From Your Android

-

The next step is to download the Bed Wars Mod APK file from your Android device. To do this, open your web browser and go to this link: . This is a reputable website where you can download the latest version of Bed Wars Mod APK for free.

-

Once you are on the website, scroll down until you see the download button. Tap on it and wait for the download to start. The file size is about 98 MB, so it might take a few minutes depending on your internet speed.

-

Step 4: Transfer the APK Installer via USB (Optional)

-

If you prefer, you can also download the Bed Wars Mod APK file from your computer and transfer it to your Android device via USB cable. This might be faster and more convenient for some users. To do this, follow these steps:

- -

Step 5: Install the APK File on Your Device

-

The final step is to install the Bed Wars Mod APK file on your device. To do this, follow these steps:

- -

How to Play Bed Wars Mod APK?

-

Now that you have installed Bed Wars Mod APK on your device, you might be wondering how to play it. Don't worry, it's very easy and simple. All you need to do is follow these steps:

- -

Conclusion

-

Bed Wars is a fun and addictive game that you can play with your friends. However, it can also be frustrating and challenging if you don't have enough money and gcubes to buy the items and upgrades you need. That's why downloading Bed Wars Mod APK is a great idea. This is a modified version of the game that gives you unlimited money and gcubes, which are the in-game currencies. With these resources, you can buy anything you want in the game without worrying about running out. You can also unlock all the skins, items, maps, modes, and more. This way, you can enjoy Bed Wars to the fullest.

-

In this article, we showed you how to download and install Bed Wars Mod APK on your Android device in just a few simple steps. We also gave you some tips on how to play Bed Wars Mod APK and have a blast with your friends. We hope that you found this article helpful and informative. If you did, please share it with your friends who might also be interested in playing Bed Wars Mod APK. Thank you for reading!

-

Frequently Asked Questions

-

Q: Is Bed Wars Mod APK safe to download?

-

A: Yes, Bed Wars Mod APK is safe to download as long as you download it from a trusted source and website. However, make sure that you scan the APK file with an antivirus app before installing it on your device. This way, you can avoid any potential malware or viruses that might harm your device.

-

Q: Do I need to root my device to use Bed Wars Mod APK?

-

A: No, you don't need to root your device to use Bed Wars Mod APK. This modded game works fine on any Android device without requiring any root access or permissions. Just follow the steps above and enjoy Bed Wars Mod APK without any hassle.

-

Q: Can I play Bed Wars Mod APK online with other players?

-

A: Yes, you can play Bed Wars Mod APK online with other players who are also using the modded version of the game. However, you might not be able to play with players who are using the original version of the game from the Google Play Store. This is because the modded game has different features and settings that might not be compatible with the original game. Therefore, we recommend that you play Bed Wars Mod APK with your friends who are also using the same modded game.

-

Q: How can I update Bed Wars Mod APK?

-

A: To update Bed Wars Mod APK, you need to download the latest version of the APK file from the same website where you downloaded it before. Then, you need to uninstall the previous version of the game from your device and install the new version. This way, you can enjoy the latest features and improvements of Bed Wars Mod APK.

-

Q: What are some tips and tricks for playing Bed Wars Mod APK?

-

A: Here are some tips and tricks for playing Bed Wars Mod APK:

- -

-

That's it! You have successfully downloaded and installed Bed Wars Mod APK on your Android device. You have also learned how to play Bed Wars Mod APK and some tips and tricks for playing it. We hope that you enjoyed this article and found it helpful and informative. If you did, please share it with your friends who might also be interested in playing Bed Wars Mod APK. Thank you for reading!

\ No newline at end of file diff --git a/spaces/1pelhydcardo/ChatGPT-prompt-generator/assets/Cribbage King The Ultimate Cribbage Game for your iPhone..md b/spaces/1pelhydcardo/ChatGPT-prompt-generator/assets/Cribbage King The Ultimate Cribbage Game for your iPhone..md deleted file mode 100644 index a2822ad74cfcd6123a83c65867004123ba885bd1..0000000000000000000000000000000000000000 --- a/spaces/1pelhydcardo/ChatGPT-prompt-generator/assets/Cribbage King The Ultimate Cribbage Game for your iPhone..md +++ /dev/null @@ -1,118 +0,0 @@ - -

Cribbage Game Free Download for iPhone: How to Play and Enjoy this Classic Card Game

-

If you are looking for a fun and engaging card game that can challenge your mind and improve your skills, you should try cribbage. Cribbage is a classic card game that has been played for centuries by people of all ages and backgrounds. It is easy to learn, but hard to master, and it offers endless possibilities for strategy and variation. In this article, we will show you how to download and play cribbage games on your iPhone, as well as how to improve your skills and strategy in this fascinating game.

-

What is Cribbage and Why Should You Play It?

-

The History and Rules of Cribbage

-

Cribbage is a card game that originated in England in the 17th century. It was invented by Sir John Suckling, a poet and gambler who modified an older game called Noddy. The game is played with a standard 52-card deck and a special board with holes and pegs that are used to keep score. The objective of the game is to be the first player to reach 121 points by making combinations of cards that add up to 15, pairs, runs, flushes, or nobs (the jack of the same suit as the starter card).

-

cribbage game free download for iphone


Download File > https://urlin.us/2uSRUl



-

The game is played by two or three players, or by four players in two teams. Each player is dealt six cards (five cards in a three-player game) and must discard two cards face down to form the crib, which belongs to the dealer. The non-dealer cuts the deck and reveals the top card, which is called the starter or the cut. The players then take turns playing one card each, starting with the non-dealer, and announcing the running total of the cards' values. The cards are worth their face value, except for face cards which are worth 10, and aces which are worth 1. The player who plays a card that makes the total exactly 15 scores two points, called "fifteen two". The player who plays a card that makes the total 31 scores two points, called "thirty-one for two". If a player cannot play a card without going over 31, they say "go" and the other player continues until they reach 31 or cannot play either. The player who played the last card before a go or 31 scores one point, called "one for last".
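To make the running-total arithmetic concrete, here is a tiny Python sketch of how points are awarded as each card is played (an illustrative example only; it models just the 15 and 31 bonuses and leaves out the pairs, runs, and "go" points that can also be scored during play):

```python
def play_points(running_total, rank):
    """Points for playing one card during the play (pegging) phase.

    Only the 15 and 31 bonuses are modeled here; pairs, runs, and "go"
    points during play are left out for brevity.
    """
    new_total = running_total + min(rank, 10)  # face cards count 10, aces 1
    bonus = 2 if new_total in (15, 31) else 0
    return new_total, bonus

print(play_points(10, 5))   # (15, 2) -> "fifteen two"
print(play_points(21, 13))  # (31, 2) -> a king makes "thirty-one for two"
```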

-

After all the cards have been played, the players count their hands in turn, starting with the non-dealer. The hand consists of four cards plus the starter card. The players score points for any combinations of cards that add up to 15 (two points each), pairs (two points), three of a kind (six points), four of a kind (twelve points), runs of three or more consecutive cards (one point per card), flushes (four points if all four cards in the hand share a suit, or five points if the starter matches as well), and nobs (one point for holding the jack of the same suit as the starter). The dealer then counts their hand, followed by the crib. The crib can score points for 15s, pairs, runs, and nobs, but a crib flush requires all five cards, including the starter, to share a suit.
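To see how these combinations add up, here is a minimal Python sketch of the hand count described above (an illustrative example, not code from any of the apps discussed below; cards are modeled as (rank, suit) tuples, and the crib's stricter five-card flush rule is only noted in a comment):

```python
from collections import Counter
from itertools import combinations

def card_value(rank):
    """Aces count 1; jacks, queens, and kings count 10."""
    return min(rank, 10)

def score_hand(hand, starter):
    """Count a four-card cribbage hand plus the starter card.

    Cards are (rank, suit) tuples with rank 1..13 (1 = ace, 11 = jack)
    and suit one of 'C', 'D', 'H', 'S'.
    """
    cards = list(hand) + [starter]
    points = 0

    # Fifteens: every combination of cards whose values sum to 15 scores 2.
    for size in range(2, 6):
        for combo in combinations(cards, size):
            if sum(card_value(rank) for rank, _ in combo) == 15:
                points += 2

    # Pairs: every pair of equal ranks scores 2 (trips = 6, quads = 12).
    for (rank1, _), (rank2, _) in combinations(cards, 2):
        if rank1 == rank2:
            points += 2

    # Runs: only the longest run length counts, multiplied by duplicates.
    counts = Counter(rank for rank, _ in cards)
    for length in (5, 4, 3):
        run_points = 0
        for low in range(1, 15 - length):
            window = range(low, low + length)
            if all(counts[rank] for rank in window):
                multiplier = 1
                for rank in window:
                    multiplier *= counts[rank]
                run_points += length * multiplier
        if run_points:  # a shorter run inside a longer one is not recounted
            points += run_points
            break

    # Flush: four hand cards of one suit score 4, five with the starter 5.
    # (In the crib, only the five-card flush counts.)
    suits = {suit for _, suit in hand}
    if len(suits) == 1:
        points += 5 if starter[1] in suits else 4

    # Nobs: a jack in hand matching the starter's suit scores 1.
    if any(rank == 11 and suit == starter[1] for rank, suit in hand):
        points += 1

    return points

# Example: 7D 8S 8C 9H in hand with the 6D cut:
# three fifteens (6) + one pair (2) + a double run of four (8) = 16.
print(score_hand([(7, 'D'), (8, 'S'), (8, 'C'), (9, 'H')], (6, 'D')))  # 16
```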

-

The game continues until one player reaches 121 points or more; the first player to do so wins the game.

One of the best cribbage apps for the iPhone is Ultimate Cribbage. Here are some of its features:

  • It has five game modes: classic cribbage, crash cribbage, cross cribbage, lowball cribbage, and back up 10 cribbage. Each mode has its own rules and challenges that will test your skills and strategy.

  • It has a realistic and immersive gameplay experience. You can play with realistic cards, boards, and pegs designed with high-quality graphics and animations, and hear realistic sound effects and voice-overs that add to the atmosphere of the game.
  • It has a smart and adaptive AI that can adjust to your skill level and style. You can play against opponents with different personalities, strengths, and weaknesses, and customize the AI settings to make the game easier or harder.
  • It has a leaderboard and achievements feature that tracks your progress and performance. You can see your rank, score, wins, losses, skunks, double skunks, and more, and unlock various achievements and badges that reward your accomplishments.
  • It has a multiplayer feature that lets you play with other players online or offline. You can play with friends or family via Bluetooth, Wi-Fi, or Game Center, or with random players from around the world via Game Center or Facebook.

    Ultimate Cribbage is a paid app that you can download from the App Store for $2.99. It is compatible with iPhone, iPad, and iPod touch devices running iOS 9.0 or later.

    -


    -

    How to Download and Play Ultimate Cribbage

    -

    To download and play Ultimate Cribbage on your iPhone, follow these simple steps:

    -
      -
  1. Open the App Store on your iPhone and search for "Ultimate Cribbage".
  2. Tap on the app icon and then tap on "Buy" to purchase and install the app on your device.
  3. Once the app is installed, tap on "Open" to launch the app.
  4. Select the game mode you want to play: classic cribbage, crash cribbage, cross cribbage, lowball cribbage, or back up 10 cribbage.
  5. Select the difficulty level you want to play: easy, medium, hard, or custom.
  6. Tap on "Play" to start the game.
  7. Enjoy playing cribbage with Ultimate Cribbage!
    -

    How to Improve Your Skills and Strategy in Cribbage

    -

    Tips and Tricks for Discarding and Pegging

    -

    One of the most important aspects of cribbage is discarding and pegging. Discarding is the process of choosing which two cards to put in the crib at the beginning of each deal. Pegging is the process of playing cards during the play phase of each deal. Here are some tips and tricks for discarding and pegging:

    - -

    How to Use the Discard Analyzer Feature in Cribbage Apps

    -

If you want to improve your discarding skills in cribbage, you can use the discard analyzer feature found in some cribbage apps. This tool helps you decide which cards to discard to the crib based on the expected value of each possible combination. The expected value is the average number of points that you can expect to score from your hand and the crib after the starter card is revealed; the higher the expected value, the better the combination. To use the discard analyzer, follow these simple steps:

      -
  1. After you are dealt your cards, tap on the discard analyzer button on the screen. This will open a window that shows you all the possible combinations of cards that you can discard to the crib, along with their expected values.
  2. Compare the expected values of each combination and choose the one that has the highest expected value. This combination will give you the best chance of scoring more points from your hand and the crib.
  3. Tap on the cards that you want to discard to the crib and confirm your choice. The app will then show you your remaining hand and the crib.
  4. Wait for the starter card to be revealed and continue playing as usual.
    -

    The discard analyzer feature is a useful tool that can help you improve your discarding skills in cribbage, but it is not a substitute for your own judgment and intuition. You should also consider other factors, such as your opponent's skill level, your position on the board, and your personal preference, when deciding which cards to discard to the crib.
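To make the idea of expected value concrete, here is a minimal sketch of how such an analyzer could rank discards, reusing the score_hand function from the rules section above (a deliberate simplification: real analyzers also estimate the crib's value and weight it by who owns the crib, which this sketch ignores):

```python
from itertools import combinations

# Assumes score_hand() from the scoring sketch earlier in this article.

def best_discards(hand):
    """Rank each way of discarding two of six cards by expected hand score."""
    full_deck = [(rank, suit) for rank in range(1, 14) for suit in 'CDHS']
    unseen = [card for card in full_deck if card not in hand]  # 46 starters
    results = []
    for discard in combinations(hand, 2):
        keep = [card for card in hand if card not in discard]
        # Expected value: average score of the kept cards over all starters.
        ev = sum(score_hand(keep, starter) for starter in unseen) / len(unseen)
        results.append((ev, discard))
    return sorted(results, reverse=True)

# Example deal: 2C 3D 4H 5S JC KD -- print the three best discards.
hand = [(2, 'C'), (3, 'D'), (4, 'H'), (5, 'S'), (11, 'C'), (13, 'D')]
for ev, discard in best_discards(hand)[:3]:
    print(f"discard {discard}: expected hand score {ev:.2f}")
```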

    -

    How to Practice and Learn from Other Players Online

    -

    Another way to improve your skills and strategy in cribbage is to practice and learn from other players online. Playing online can expose you to different styles and strategies of cribbage, as well as give you feedback and tips on how to play better. Here are some ways to practice and learn from other players online:

    - -

    Practicing and learning from other players online can help you improve your skills and strategy in cribbage, but it is not a substitute for your own experience and practice. You should also play offline with real cards and boards, as well as read books and articles about cribbage.

    -

    Conclusion

    -

    Cribbage is a classic card game that can provide you with hours of fun and entertainment, as well as improve your mental skills and abilities. It is a game that you can play anytime, anywhere, and with anyone. In this article, we have shown you how to download and play cribbage games on your iPhone, as well as how to improve your skills and strategy in this fascinating game. We hope that you have enjoyed reading this article and that you have learned something new about cribbage. Now go ahead and download one of the cribbage apps we have recommended and start playing this amazing game!

    -

    Frequently Asked Questions

    -

    Here are some frequently asked questions about cribbage:

    -
      -
  1. What is the best hand in cribbage?

  The best hand in cribbage is worth 29 points: three fives and a jack in hand, with the fourth five (of the same suit as the jack) turned up as the starter. It scores 16 points for eight fifteens (the four ways of combining three fives, plus the jack counted with each of the four fives), 12 points for six pairs of fives, and one point for nobs (the jack of the same suit as the starter). The hand is extremely rare, since it requires being dealt 5-5-5-J and cutting exactly the remaining five of the jack's suit; the scoring sketch earlier in this article computes the same 29-point total.

  2. What is a muggins in cribbage?

  A muggins in cribbage is a rule that allows a player to claim any points that their opponent has missed or miscalculated during the scoring phase of each deal. For example, if a player counts their hand as 10 points, but their opponent notices that they actually have 12 points, the opponent can say "muggins" and take the extra two points for themselves. The muggins rule is optional and can be agreed upon or declined by the players before the game starts.

  3. What is the difference between a skunk and a double skunk in cribbage?

  A skunk in cribbage is when a player wins the game by 31 or more points over their opponent. A double skunk is when a player wins by 61 or more points. Both are considered humiliating defeats for the losing player, and they usually result in extra penalties or rewards for the winner; for example, some players agree to double or quadruple the stakes of the game if a skunk or a double skunk occurs.

  4. How many cards are in a cribbage deck?

  A cribbage deck is a standard 52-card deck, with 13 cards of each suit (clubs, diamonds, hearts, and spades) and four cards of each rank (ace, 2 through 10, jack, queen, and king). However, some variations of cribbage use different decks, such as a 48-card deck (without the 10s) or a 32-card deck (without the 2s, 3s, 4s, and 5s).

  5. How do you shuffle and cut the cards in cribbage?

  To shuffle and cut the cards in cribbage, follow these simple steps:

    1. The dealer shuffles the cards thoroughly and deals six cards to each player (five cards in a three-player game), one at a time, starting with the non-dealer.
    2. After both players have discarded to the crib, the non-dealer cuts the remaining deck by lifting a portion of cards off the top.
    3. The dealer takes the top card of the bottom portion and places it face up on top of the reunited deck. This card is called the starter or the cut.

  6. What are some common terms and phrases used in cribbage?

  Some common terms and phrases used in cribbage are:

      -

      \ No newline at end of file diff --git a/spaces/1pelhydcardo/ChatGPT-prompt-generator/assets/Download 3D Chess Game for PC and Play in Stunning Scenes and Graphics.md b/spaces/1pelhydcardo/ChatGPT-prompt-generator/assets/Download 3D Chess Game for PC and Play in Stunning Scenes and Graphics.md deleted file mode 100644 index 6a00dbf8a018957b9978a01cd08c96a4d557d2a9..0000000000000000000000000000000000000000 --- a/spaces/1pelhydcardo/ChatGPT-prompt-generator/assets/Download 3D Chess Game for PC and Play in Stunning Scenes and Graphics.md +++ /dev/null @@ -1,153 +0,0 @@ -
      -

      Download 3D Chess Game for PC: A Guide to the Best Options

      -

      If you are a fan of chess and want to enjoy a more immersive and realistic experience, you might want to try playing 3D chess on your PC. 3D chess is a variation of the classic board game that uses three-dimensional graphics and animations to simulate a real chessboard. You can play against the computer, online opponents, or even friends in local multiplayer mode. In this article, we will show you how to download and install 3D chess game for PC, and review some of the best options available on the market.

      -

      Introduction

      -

      What is 3D chess and why play it on PC?

      -

3D chess is a type of chess game that uses three-dimensional models and effects to create a more realistic and engaging gameplay experience. Unlike traditional chess games that use flat images or icons, 3D chess games allow you to see the pieces from different angles, zoom in and out, rotate the board, and enjoy various visual effects. Some 3D chess games also have different themes, backgrounds, sounds, and music to enhance the atmosphere.

      -

      download 3d chess game for pc


      Download ->>->>->> https://urlin.us/2uSZuw



      -

      Playing 3D chess on PC has several advantages over playing it on other devices. First of all, you can enjoy better graphics quality and performance on a larger screen. Second, you can use your mouse and keyboard to control the game more easily and precisely. Third, you can access a wider range of options and features, such as online multiplayer, puzzles, rankings, achievements, etc. Fourth, you can save money by downloading free or cheap games from online platforms.

      -

      How to download and install 3D chess game for PC

      -

      There are many ways to download and install 3D chess game for PC, but we will focus on three of the most popular and reliable ones: Chess! on Steam, 3D Chess Game on Microsoft Store, and 3D Chess on Steam. We will compare their features, pros and cons, and how to get them in the following sections.

      -

      Option 1: Chess! on Steam

      -

      Features, pros and cons, and how to get it

      -

      Chess! is an upcoming 3D chess game that is expected to be released in Q2 2023. It is developed by Exeter Game Studios and published by familyplay. It is built with Unreal Engine 5 and integrated with Lichess, one of the largest online chess platforms in the world. Here are some of its features:

      - -

      The pros of Chess! are:

      - -

      The cons of Chess! are:

      - -

      To get Chess!, you need to have a Steam account and a PC that meets the minimum system requirements. You can pre-order the game on Steam for $9.99 and get access to it as soon as it is released. You can also follow the game's development updates on its official website or social media accounts.

      -

      How to install 3D Chess Game on Windows PC or Mac[^1^]
      -Chess! an immersive 3D chess game with Lichess integration[^2^]
      -Get 3D Chess Game from Microsoft Store for free[^3^]
      -3D Chess a unique chess trip with instant duels on Steam[^4^]
      -Best 3D chess games for PC in 2023
      -Download 3D Chess Game APK for Android devices
      -Play 3D Chess online with friends or strangers
      -Learn chess with 3D Chess Game puzzles and challenges
      -Compare 3D Chess Game with other chess apps and software
      -3D Chess Game reviews and ratings from users and experts
      -How to uninstall 3D Chess Game from your PC or Mac
      -3D Chess Game tips and tricks to improve your skills
      -How to customize your board and pieces in 3D Chess Game
      -How to play 3D Chess Game offline or without internet connection
      -How to solve common issues and errors in 3D Chess Game
      -How to update 3D Chess Game to the latest version
      -How to use the free flying camera in 3D Chess Game
      -How to play against advanced AI in 3D Chess Game
      -How to participate in ranked matchmaking in 3D Chess Game
      -How to track your online ELO in 3D Chess Game
      -How to sign up and login to Lichess account in 3D Chess Game
      -How to donate to Lichess charity organization in 3D Chess Game
      -How to enjoy lifelike textures and realistic lighting in 3D Chess Game
      -How to switch between different scenes and locations in 3D Chess Game
      -How to play 3D Chess Game on HoloLens or other VR devices
      -How to download and play 3D Chess Game on Linux or Ubuntu
      -How to stream or record your gameplay of 3D Chess Game
      -How to join or create a chess club in 3D Chess Game
      -How to chat or communicate with other players in 3D Chess Game
      -How to report or block abusive or cheating players in 3D Chess Game
      -How to access the vast library of offline puzzles in 3D Chess Game
      -How to share your achievements and scores of 3D Chess Game on social media
      -How to find and play with your friends in 3D Chess Game
      -How to change the language or sound settings in 3D Chess Game
      -How to enable or disable the relaxing music in 3D Chess Game
      -How to use wildcard characters or anagrams in 3D Chess Game
      -How to checkmate your opponent with only a few moves in 3D Chess Game
      -How to learn from the best with expertly curated chess challenges in 3D Chess Game
      -How to play different variants of chess such as blitz, bullet, rapid, etc. in 3D Chess Game
      -How to watch live games or tournaments of professional chess players in 3D Chess Game
      -How to use the dictionary feature in 3D Chess Game for definitions and synonyms of chess terms
      -How to play the piano or other musical instruments in 3D Chess Game for fun or relaxation
      -How to use the Phoenix Force feature in 3D Chess Game for a fiery and explosive gameplay
      -How to climb up and overcome increasing challenges in Upward mode of 3D Chess Game
      -How to use the speech function in 3D Chess Game for correct pronunciation of chess moves and names
      -How to see your word history or make your own list of favorite words in Dictionary mode of 3D Chess Game
      -How to get the word of the day with interesting and entertaining words in Dictionary mode of 3D Chess Game
      -How to use the solar physics feature in 3D Chess Game for learning about the Sun and its layers
      -How to witness the power of Unreal Engine 5 in transforming the classic game of chess into a breathtaking visual spectacle

      -

      Option 2: 3D Chess Game on Microsoft Store

      -

      Features, pros and cons, and how to get it

      -

      3D Chess Game is a free 3D chess game that is available on Microsoft Store. It is developed by A Trillion Games Ltd and has over 10 million downloads. It is designed for Windows 10 devices, including PCs, tablets, and phones. Here are some of its features:

      - -

      The pros of 3D Chess Game are:

      - -

      The cons of 3D Chess Game are:

      - -

      To get 3D Chess Game, you need to have a Microsoft account and a Windows 10 device that meets the minimum system requirements. You can download the game from Microsoft Store for free and start playing it right away. You can also rate and review the game on the store page or contact the developer for feedback or support.

      -

      Option 3: 3D Chess on Steam

      -

      Features, pros and cons, and how to get it

      -

      3D Chess is another 3D chess game that is available on Steam. It is developed by Bumblebee Games Studio Ltd. It was released in 2016 and has over 1000 reviews. It is designed for Windows PCs only. Here are some of its features:

      - -

      The pros of 3D Chess are:

      - -

      The cons of 3D Chess are:

      - -

      To get 3D Chess, you need to have a Steam account and a Windows PC that meets the minimum system requirements. You can buy the game on Steam for $4.99 and download it to your PC. You can also check out the game's trailer, screenshots, and reviews on its Steam page or official website.

      -

      Comparison table of the three options

      -

      To help you decide which 3D chess game for PC is best for you, we have created a comparison table that summarizes the main features, pros and cons, and prices of the three options we have reviewed. You can see the table below:

| Feature | Chess! | 3D Chess Game | 3D Chess |
| --- | --- | --- | --- |
| Graphics quality | Excellent | Mediocre | Good |
| Online platform | Lichess | None | Steam |
| Game modes | AI, online, puzzles | AI, online, local | AI, online, local |
| Customization options | Scenes, sounds | Board, pieces, background | Board, pieces |
| Statistics and achievements | Yes | Yes | Yes |
| Price | $9.99 (pre-order) | Free | $4.99 |
| Pros | Stunning graphics; Lichess integration; wide range of difficulty levels and puzzles; relaxing scenes and sounds | Free; compatible with Windows 10 devices; simple and intuitive interface; multiple game modes and customizable options | Detailed graphics and realistic shadows; cinematic camera; single-player and multiplayer modes; Steam features |
| Cons | Not yet released; might require high-end PC; might not be compatible with older systems or devices | Mediocre graphics; limited online features; occasional bugs or errors | Not free; only compatible with Windows PCs; some negative reviews |

      Conclusion

      -

      Summary of the main points

      -

      In this article, we have shown you how to download and install 3D chess game for PC, and reviewed some of the best options available on the market. We have compared their features, pros and cons, and prices in a comparison table. We have also explained what 3D chess is and why playing it on PC has several advantages over playing it on other devices.

      -

      Recommendation and call to action

      -

      Based on our analysis, we recommend Chess! as the best option for downloading 3D chess game for PC. It offers stunning graphics, Lichess integration, wide range of difficulty levels and puzzles, relaxing scenes and sounds, and more. It is also reasonably priced at $9.99 for pre-ordering. However, if you are looking for a free or simpler option, you can also try 3D Chess Game or 3D Chess on Microsoft Store or Steam respectively.

      -

      If you are interested in playing 3D chess on your PC, you can follow the links below to get your preferred option: - [Chess! on Steam] - [3D Chess Game on Microsoft Store] - [3D Chess on Steam] We hope you enjoyed this article and found it helpful. If you have any questions or feedback, please feel free to leave a comment below. Thank you for reading!

      -

      FAQs

      -


      Here are some frequently asked questions about downloading 3D chess game for PC:

      -
        -
  1. What are the benefits of playing 3D chess on PC?

  Playing 3D chess on PC has several benefits, such as better graphics quality and performance, easier and more precise control, wider range of options and features, and saving money by downloading free or cheap games.

  2. What are the main differences between 3D chess and traditional chess?

  3D chess is a variation of the classic board game that uses three-dimensional graphics and animations to simulate a real chessboard. It allows you to see the pieces from different angles, zoom in and out, rotate the board, and enjoy various visual effects. Some 3D chess games also have different themes, backgrounds, sounds, and music to enhance the atmosphere.

  3. What are the minimum system requirements for playing 3D chess on PC?

  The minimum system requirements for playing 3D chess on PC vary depending on the game you choose. However, a general guideline is that you need a Windows PC with at least 4 GB of RAM, 2 GB of disk space, a dual-core processor, and a graphics card that supports DirectX 11 or higher.

  4. How can I improve my skills in 3D chess?

  You can improve your skills in 3D chess by practicing regularly, playing against different opponents, solving puzzles, learning from tutorials or guides, watching videos or streams of other players, and joining online communities or forums.

  5. Where can I find more information or support about 3D chess games?

  You can find more information or support about 3D chess games by visiting their official websites or social media accounts, reading their reviews or ratings on online platforms, contacting their developers or publishers, or asking other players or experts.

      \ No newline at end of file diff --git a/spaces/1phancelerku/anime-remove-background/9xbuddy Music Download A Review of the Features Benefits and Limitations.md b/spaces/1phancelerku/anime-remove-background/9xbuddy Music Download A Review of the Features Benefits and Limitations.md deleted file mode 100644 index 7fdfea5b6626bc7cd23fd348a2171495d5c2418d..0000000000000000000000000000000000000000 --- a/spaces/1phancelerku/anime-remove-background/9xbuddy Music Download A Review of the Features Benefits and Limitations.md +++ /dev/null @@ -1,112 +0,0 @@ -
      -
      -

      How to Download Music from YouTube with 9xbuddy

      -

      Do you love listening to music on YouTube but wish you could save it offline? Do you want to enjoy your favorite songs without ads or interruptions? Do you want to convert YouTube videos into MP3 files easily and quickly?

      -

      If you answered yes to any of these questions, then you need to try 9xbuddy. It is a powerful online tool that lets you download music from YouTube in a matter of seconds. In this article, we will show you how to use 9xbuddy to download music from YouTube, as well as some tips and tricks for making the most of it. We will also compare it with some alternatives that you can try if you want more options.

      -

      9xbuddy music download


      DOWNLOADhttps://jinyurl.com/2uNNUt



      What is 9xbuddy?

      -

      9xbuddy is a free online service that allows you to download any video or audio from any website, including YouTube, Facebook, Instagram, Twitter, Vimeo, Dailymotion, SoundCloud, and more. You can use it to download music from YouTube in MP3 format, as well as other formats like MP4, WEBM, M4A, and more. You can also choose the quality of the download, from low to high. 9xbuddy is fast, easy, and reliable. You don't need to install any software or register an account. You just need to copy and paste the URL of the video or audio you want to download and click on the download button. 9xbuddy will do the rest for you.

      -

      Why Use 9xbuddy to Download Music from YouTube?

      -

      There are many reasons why you might want to use 9xbuddy to download music from YouTube. Here are some of them:

      -
        -
  • You can save your favorite songs offline and listen to them anytime, anywhere, without internet connection or data charges.
  • You can avoid annoying ads or interruptions that might ruin your listening experience.
  • You can create your own playlists and mixtapes with the songs you download.
  • You can transfer the songs to other devices or platforms, such as your phone, tablet, computer, MP3 player, car stereo, etc.
  • You can edit or remix the songs with other tools or software.
  • You can share the songs with your friends or family via email, social media, Bluetooth, etc.
      -

      As you can see, using 9xbuddy to download music from YouTube can give you a lot of benefits and convenience. It can also save you time and money. So why not give it a try?

      -

      How to Use 9xbuddy to Download Music from YouTube?

      -

      Using 9xbuddy to download music from YouTube is very simple and straightforward. You just need to follow these four steps:

      -


      -

      Step 1: Copy the YouTube Video URL

      -

      The first thing you need to do is find the YouTube video that contains the music you want to download. You can use the YouTube app or website to search for it. Once you find it, copy its URL, which is the web address that appears in the address bar of your browser or app; a typical watch URL looks like https://www.youtube.com/watch?v=kJQP7kiw5Fk. To copy it, either right-click on it and select "Copy", or highlight it and press Ctrl+C on your keyboard (Command+C on Mac).
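
      If you handle a lot of links, you can also pull the video ID out of a watch URL programmatically. Below is a minimal Python sketch that uses only the standard library; the `youtube_video_id` helper is our own name for illustration, not part of any official API:

```python
from typing import Optional
from urllib.parse import urlparse, parse_qs

def youtube_video_id(url: str) -> Optional[str]:
    """Return the video ID from a YouTube watch or share URL, or None."""
    parsed = urlparse(url)
    if parsed.hostname in ("www.youtube.com", "youtube.com", "m.youtube.com"):
        # Standard watch URLs carry the ID in the "v" query parameter.
        return parse_qs(parsed.query).get("v", [None])[0]
    if parsed.hostname == "youtu.be":  # short share links put the ID in the path
        return parsed.path.lstrip("/") or None
    return None

print(youtube_video_id("https://www.youtube.com/watch?v=kJQP7kiw5Fk"))  # kJQP7kiw5Fk
```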

      -

      Step 2: Paste the URL into 9xbuddy

      -

      The next thing you need to do is to go to the 9xbuddy website: https://9xbuddy.org/. You will see a search box where you can paste the URL of the YouTube video. To paste it, you can either right-click on the box and select "Paste" or click on the box and press Ctrl+V on your keyboard (or Command+V on Mac). Then, click on the "Download" button next to the box.
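
      Incidentally, you can skip the paste step by putting the video address straight into 9xbuddy's own URL. The `process?url=` pattern in this Python sketch is an assumption based on how the site links its result pages, so verify it against the live site before relying on it:

```python
import webbrowser
from urllib.parse import quote

def open_in_9xbuddy(video_url: str) -> None:
    # Assumed URL pattern -- check it against the live site before scripting around it.
    process_url = "https://9xbuddy.org/process?url=" + quote(video_url, safe="")
    webbrowser.open(process_url)  # opens the result page in your default browser

open_in_9xbuddy("https://www.youtube.com/watch?v=kJQP7kiw5Fk")
```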

      -

      Step 3: Choose the MP3 Format and Quality

      -

      After you click the "Download" button, 9xbuddy analyzes the URL and shows you a list of available formats and qualities: MP4 (video), WEBM (video), M4A (audio), MP3 (audio), and so on. To download music from YouTube, choose the MP3 format, then pick a bitrate from low (64 kbps) to high (320 kbps). A higher bitrate means better sound but a larger file. Click the format and quality you want to select them.
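
      A quick rule of thumb for the size trade-off: file size in kilobytes is roughly bitrate (kbps) × duration (seconds) ÷ 8. A 4-minute song at 320 kbps therefore comes to about 320 × 240 ÷ 8 = 9,600 KB, or roughly 9.6 MB, while the same song at 128 kbps is only about 3.8 MB.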

      -

      Step 4: Download the MP3 File

      -

      The final step is to download the MP3 file to your device or cloud storage. You will see a green "Download Now" button next to the format and quality you chose. Click on it and a new tab will open with a countdown timer. Wait for a few seconds until the timer reaches zero and then click on the "Download" button that appears. The MP3 file will start downloading automatically. You can check the progress of the download in your browser or app. Once the download is complete, you can find the MP3 file in your default download folder or location. You can also rename or move it as you wish.
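
      If you ever script this last step, the transfer itself is ordinary HTTP. Here is a minimal Python sketch using the `requests` library; the MP3 URL is a placeholder, since 9xbuddy generates a fresh link for each conversion:

```python
import requests

def download_mp3(file_url: str, out_path: str) -> None:
    """Stream a direct MP3 link to disk without loading it all into memory."""
    with requests.get(file_url, stream=True, timeout=60) as resp:
        resp.raise_for_status()  # fail loudly on 4xx/5xx instead of saving an error page
        with open(out_path, "wb") as f:
            for chunk in resp.iter_content(chunk_size=1 << 16):  # 64 KB chunks
                f.write(chunk)

# The link below is purely illustrative; use the one 9xbuddy hands you.
download_mp3("https://example.com/song.mp3", "song.mp3")
```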

      -

      Tips and Tricks for Using 9xbuddy

      -

      To make your experience with 9xbuddy even better, here are some tips and tricks that you can use:

      -

      Tip 1: Use the Bookmarklet or Extension

      -

      If you want a faster way to download music from YouTube, you can use the bookmarklet or browser extension that 9xbuddy offers. The bookmarklet is a small snippet of code that you drag onto your browser's bookmarks bar; the extension is a small add-on that you install in your browser. Both let you download music from YouTube with a single click, without copying and pasting the URL or visiting the 9xbuddy website first. To set either one up, go to this page: https://9xbuddy.org/tools and follow the instructions there.

      -

      Tip 2: Use the Batch Download Feature

      -

      If you want to download multiple music files at once, you can use the batch download feature that 9xbuddy offers. This feature allows you to enter multiple URLs in one search box and download them all in one go. To use the batch download feature, you need to go to this page: https://9xbuddy.org/batch and follow the instructions there.
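
      If you prefer to stay in Python, a simple loop can queue several videos at once. This sketch reuses the same assumed `process?url=` pattern from Step 2 (again, verify it first) and opens one browser tab per URL:

```python
import webbrowser
from urllib.parse import quote

urls = [
    "https://www.youtube.com/watch?v=kJQP7kiw5Fk",
    "https://www.youtube.com/watch?v=dQw4w9WgXcQ",
]
for url in urls:
    # Assumed result-page pattern -- confirm against the live site before relying on it.
    webbrowser.open("https://9xbuddy.org/process?url=" + quote(url, safe=""))
```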

      -

      Tip 3: Use the Playlist Download Feature

      -

      If you want to download an entire playlist from YouTube, you can use the playlist download feature that 9xbuddy offers. This feature allows you to enter the URL of a YouTube playlist and download all the videos or audios in it in one go. To use the playlist download feature, you need to go to this page: https://9xbuddy.org/playlist and follow the instructions there.

      -

      Alternatives to 9xbuddy

      -

      Although 9xbuddy is a great tool for downloading music from YouTube, it is not the only one. There are some other websites or tools that can also do the same job. Here are some of them:

      -

      Alternative 1: YTMP3

      -

      YTMP3 is a simple and fast online service that allows you to convert and download YouTube videos into MP3 or MP4 files. You can use it to download music from YouTube in high quality (up to 320 kbps) and without any limitations. You don't need to install any software or register an account. You just need to copy and paste the URL of the YouTube video and click on the convert button. YTMP3 will do the rest for you. You can access YTMP3 here: https://ytmp3.cc/.

      -

      Alternative 2: Snappea

      -

      Snappea is a versatile and powerful online tool that downloads videos and audio from various websites, including YouTube, Facebook, Instagram, TikTok, Dailymotion, and more. You can use it to download music from YouTube in various formats (MP3, MP4, M4A, etc.) and qualities (from 144p to 1080p for video). Like YTMP3, it requires no installation or registration: just paste the URL of the video or audio and click the download button. You can access Snappea here: https://www.snappea.com/.

      -

      Alternative 3: MP3FY

      -

      MP3FY is a fast and easy online service that converts any video or audio from any website into an MP3 file, at up to 320 kbps and without restrictions. Again, no software or account is needed: paste the URL, click the convert button, and MP3FY does the rest. You can access MP3FY here: https://mp3fy.com/.

      -

      Conclusion

      -

      In conclusion, downloading music from YouTube with 9xbuddy is a simple and convenient way to enjoy your favorite songs offline. You just need to follow four easy steps: copy the URL of the YouTube video, paste it into 9xbuddy, choose the MP3 format and quality, and download the file. You can also use some tips and tricks to enhance your experience with 9xbuddy, such as using the bookmarklet or extension, using the batch download feature, or using the playlist download feature. If you want more options, you can also try some alternatives to 9xbuddy, such as YTMP3, Snappea, or MP3FY.

      -

      We hope this article has helped you learn how to download music from YouTube with 9xbuddy. Now you can enjoy your favorite songs anytime, anywhere, without any hassle. If you have any questions or feedback, please feel free to leave a comment below. We would love to hear from you.

      -

      FAQs

      -

      Here are some frequently asked questions and answers about 9xbuddy and downloading music from YouTube:

      -

      Q: Is 9xbuddy safe and legal?

      -

      A: 9xbuddy itself is safe to use: it does not host or store any content on its servers and only acts as a mediator between you and the source website. The legality of a download, however, depends on the content and on your jurisdiction. Downloading for personal, non-commercial use is generally tolerated, but you should always respect the intellectual property rights of the original creators, and you should not download or redistribute content that is protected by copyright or other laws.

      -

      Q: How long does it take to download music from YouTube with 9xbuddy?

      -

      A: The time it takes to download music from YouTube with 9xbuddy depends on several factors, such as the length and quality of the video, the speed of your internet connection, and the traffic on the website. Generally, it takes a few seconds to a few minutes to download a music file from YouTube with 9xbuddy.
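
      As a back-of-the-envelope estimate, divide the file size by your connection speed: the 9.6 MB example from Step 3 is 76.8 megabits, so a 20 Mbps connection moves it in about 76.8 ÷ 20 ≈ 4 seconds, plus whatever time the server needs for the conversion itself.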

      -

      Q: How many music files can I download from YouTube with 9xbuddy?

      -

      A: There is no limit to how many music files you can download from YouTube with 9xbuddy. You can download as many as you want, as long as you have enough space on your device or cloud storage. However, you should be mindful of the bandwidth and data usage that downloading music files can consume.

      -

      Q: Can I download music from other websites besides YouTube with 9xbuddy?

      -

      A: Yes, you can download music from other websites besides YouTube with 9xbuddy. 9xbuddy supports over 1000 websites, including Facebook, Instagram, Twitter, Vimeo, Dailymotion, SoundCloud, and more. You can use the same steps as downloading music from YouTube with 9xbuddy.

      -

      Q: Can I download music from YouTube with 9xbuddy on my mobile device?

      -

      A: Yes, you can download music from YouTube with 9xbuddy on your mobile device. 9xbuddy is compatible with all devices and platforms, including Android, iOS, Windows, Mac, Linux, etc. You can use any browser or app that supports web browsing to access 9xbuddy and download music from YouTube.

      -
      -
      \ No newline at end of file diff --git a/spaces/232labs/VToonify/vtoonify/model/dualstylegan.py b/spaces/232labs/VToonify/vtoonify/model/dualstylegan.py deleted file mode 100644 index 60d9850ad049a2751781871d6ae0c2779ecc863f..0000000000000000000000000000000000000000 --- a/spaces/232labs/VToonify/vtoonify/model/dualstylegan.py +++ /dev/null @@ -1,203 +0,0 @@ -import random -import torch -from torch import nn -from model.stylegan.model import ConvLayer, PixelNorm, EqualLinear, Generator - -class AdaptiveInstanceNorm(nn.Module): - def __init__(self, fin, style_dim=512): - super().__init__() - - self.norm = nn.InstanceNorm2d(fin, affine=False) - self.style = nn.Linear(style_dim, fin * 2) - - self.style.bias.data[:fin] = 1 - self.style.bias.data[fin:] = 0 - - def forward(self, input, style): - style = self.style(style).unsqueeze(2).unsqueeze(3) - gamma, beta = style.chunk(2, 1) - out = self.norm(input) - out = gamma * out + beta - return out - -# modulative residual blocks (ModRes) -class AdaResBlock(nn.Module): - def __init__(self, fin, style_dim=512, dilation=1): # modified - super().__init__() - - self.conv = ConvLayer(fin, fin, 3, dilation=dilation) # modified - self.conv2 = ConvLayer(fin, fin, 3, dilation=dilation) # modified - self.norm = AdaptiveInstanceNorm(fin, style_dim) - self.norm2 = AdaptiveInstanceNorm(fin, style_dim) - - # model initialization - # the convolution filters are set to values close to 0 to produce negligible residual features - self.conv[0].weight.data *= 0.01 - self.conv2[0].weight.data *= 0.01 - - def forward(self, x, s, w=1): - skip = x - if w == 0: - return skip - out = self.conv(self.norm(x, s)) - out = self.conv2(self.norm2(out, s)) - out = out * w + skip - return out - -class DualStyleGAN(nn.Module): - def __init__(self, size, style_dim, n_mlp, channel_multiplier=2, twoRes=True, res_index=6): - super().__init__() - - layers = [PixelNorm()] - for i in range(n_mlp-6): - layers.append(EqualLinear(512, 512, lr_mul=0.01, activation="fused_lrelu")) - # color transform blocks T_c - self.style = nn.Sequential(*layers) - # StyleGAN2 - self.generator = Generator(size, style_dim, n_mlp, channel_multiplier) - # The extrinsic style path - self.res = nn.ModuleList() - self.res_index = res_index//2 * 2 - self.res.append(AdaResBlock(self.generator.channels[2 ** 2])) # for conv1 - for i in range(3, self.generator.log_size + 1): - out_channel = self.generator.channels[2 ** i] - if i < 3 + self.res_index//2: - # ModRes - self.res.append(AdaResBlock(out_channel)) - self.res.append(AdaResBlock(out_channel)) - else: - # structure transform block T_s - self.res.append(EqualLinear(512, 512)) - # FC layer is initialized with identity matrices, meaning no changes to the input latent code - self.res[-1].weight.data = torch.eye(512) * 512.0**0.5 + torch.randn(512, 512) * 0.01 - self.res.append(EqualLinear(512, 512)) - self.res[-1].weight.data = torch.eye(512) * 512.0**0.5 + torch.randn(512, 512) * 0.01 - self.res.append(EqualLinear(512, 512)) # for to_rgb7 - self.res[-1].weight.data = torch.eye(512) * 512.0**0.5 + torch.randn(512, 512) * 0.01 - self.size = self.generator.size - self.style_dim = self.generator.style_dim - self.log_size = self.generator.log_size - self.num_layers = self.generator.num_layers - self.n_latent = self.generator.n_latent - self.channels = self.generator.channels - - def forward( - self, - styles, # intrinsic style code - exstyles, # extrinsic style code - return_latents=False, - return_feat=False, - inject_index=None, - truncation=1, - 
truncation_latent=None, - input_is_latent=False, - noise=None, - randomize_noise=True, - z_plus_latent=False, # intrinsic style code is z+ or z - use_res=True, # whether to use the extrinsic style path - fuse_index=18, # layers > fuse_index do not use the extrinsic style path - interp_weights=[1]*18, # weight vector for style combination of two paths - ): - - if not input_is_latent: - if not z_plus_latent: - styles = [self.generator.style(s) for s in styles] - else: - styles = [self.generator.style(s.reshape(s.shape[0]*s.shape[1], s.shape[2])).reshape(s.shape) for s in styles] - - if noise is None: - if randomize_noise: - noise = [None] * self.generator.num_layers - else: - noise = [ - getattr(self.generator.noises, f"noise_{i}") for i in range(self.generator.num_layers) - ] - - if truncation < 1: - style_t = [] - - for style in styles: - style_t.append( - truncation_latent + truncation * (style - truncation_latent) - ) - - styles = style_t - - if len(styles) < 2: - inject_index = self.generator.n_latent - - if styles[0].ndim < 3: - latent = styles[0].unsqueeze(1).repeat(1, inject_index, 1) - - else: - latent = styles[0] - - else: - if inject_index is None: - inject_index = random.randint(1, self.generator.n_latent - 1) - - if styles[0].ndim < 3: - latent = styles[0].unsqueeze(1).repeat(1, inject_index, 1) - latent2 = styles[1].unsqueeze(1).repeat(1, self.generator.n_latent - inject_index, 1) - - latent = torch.cat([latent, latent2], 1) - else: - latent = torch.cat([styles[0][:,0:inject_index], styles[1][:,inject_index:]], 1) - - if use_res: - if exstyles.ndim < 3: - resstyles = self.style(exstyles).unsqueeze(1).repeat(1, self.generator.n_latent, 1) - adastyles = exstyles.unsqueeze(1).repeat(1, self.generator.n_latent, 1) - else: - nB, nL, nD = exstyles.shape - resstyles = self.style(exstyles.reshape(nB*nL, nD)).reshape(nB, nL, nD) - adastyles = exstyles - - out = self.generator.input(latent) - out = self.generator.conv1(out, latent[:, 0], noise=noise[0]) - if use_res and fuse_index > 0: - out = self.res[0](out, resstyles[:, 0], interp_weights[0]) - - skip = self.generator.to_rgb1(out, latent[:, 1]) - i = 1 - for conv1, conv2, noise1, noise2, to_rgb in zip( - self.generator.convs[::2], self.generator.convs[1::2], noise[1::2], noise[2::2], self.generator.to_rgbs): - if use_res and fuse_index >= i and i > self.res_index: - out = conv1(out, interp_weights[i] * self.res[i](adastyles[:, i]) + - (1-interp_weights[i]) * latent[:, i], noise=noise1) - else: - out = conv1(out, latent[:, i], noise=noise1) - if use_res and fuse_index >= i and i <= self.res_index: - out = self.res[i](out, resstyles[:, i], interp_weights[i]) - if use_res and fuse_index >= (i+1) and i > self.res_index: - out = conv2(out, interp_weights[i+1] * self.res[i+1](adastyles[:, i+1]) + - (1-interp_weights[i+1]) * latent[:, i+1], noise=noise2) - else: - out = conv2(out, latent[:, i + 1], noise=noise2) - if use_res and fuse_index >= (i+1) and i <= self.res_index: - out = self.res[i+1](out, resstyles[:, i+1], interp_weights[i+1]) - if use_res and fuse_index >= (i+2) and i >= self.res_index-1: - skip = to_rgb(out, interp_weights[i+2] * self.res[i+2](adastyles[:, i+2]) + - (1-interp_weights[i+2]) * latent[:, i + 2], skip) - else: - skip = to_rgb(out, latent[:, i + 2], skip) - i += 2 - if i > self.res_index and return_feat: - return out, skip - - image = skip - - if return_latents: - return image, latent - - else: - return image, None - - def make_noise(self): - return self.generator.make_noise() - - def mean_latent(self, n_latent): - 
return self.generator.mean_latent(n_latent) - - def get_latent(self, input): - return self.generator.style(input) \ No newline at end of file diff --git a/spaces/232labs/VToonify/vtoonify/model/raft/core/__init__.py b/spaces/232labs/VToonify/vtoonify/model/raft/core/__init__.py deleted file mode 100644 index e69de29bb2d1d6434b8b29ae775ad8c2e48c5391..0000000000000000000000000000000000000000 diff --git a/spaces/ANDRYHA/FakeNewsClassifier/app.py b/spaces/ANDRYHA/FakeNewsClassifier/app.py deleted file mode 100644 index 9d993f78b38fba1fa0c1c44aae78746972aa4e65..0000000000000000000000000000000000000000 --- a/spaces/ANDRYHA/FakeNewsClassifier/app.py +++ /dev/null @@ -1,71 +0,0 @@ -from transformers import FSMTForConditionalGeneration, FSMTTokenizer -from transformers import AutoModelForSequenceClassification -from transformers import AutoTokenizer -from langdetect import detect -from newspaper import Article -from PIL import Image -import streamlit as st -import requests -import torch - -st.markdown("## Prediction of Fakeness by Given URL") -background = Image.open('logo.jpg') -st.image(background) - -st.markdown(f"### Article URL") -text = st.text_area("Insert some url here", - value="https://en.globes.co.il/en/article-yandex-looks-to-expand-activities-in-israel-1001406519") - -@st.cache(allow_output_mutation=True) -def get_models_and_tokenizers(): - model_name = 'distilbert-base-uncased-finetuned-sst-2-english' - model = AutoModelForSequenceClassification.from_pretrained(model_name, num_labels=2) - model.eval() - tokenizer = AutoTokenizer.from_pretrained(model_name) - model.load_state_dict(torch.load('./model.pth', map_location='cpu')) - - model_name_translator = "facebook/wmt19-ru-en" - tokenizer_translator = FSMTTokenizer.from_pretrained(model_name_translator) - model_translator = FSMTForConditionalGeneration.from_pretrained(model_name_translator) - model_translator.eval() - return model, tokenizer, model_translator, tokenizer_translator - -model, tokenizer, model_translator, tokenizer_translator = get_models_and_tokenizers() - -article = Article(text) -article.download() -article.parse() -concated_text = article.title + '. 
' + article.text -lang = detect(concated_text) - -st.markdown(f"### Language detection") - -if lang == 'ru': - st.markdown(f"The language of this article is {lang.upper()} so we translated it!") - with st.spinner('Waiting for translation'): - input_ids = tokenizer_translator.encode(concated_text, - return_tensors="pt", max_length=512, truncation=True) - outputs = model_translator.generate(input_ids) - decoded = tokenizer_translator.decode(outputs[0], skip_special_tokens=True) - st.markdown("### Translated Text") - st.markdown(f"{decoded[:777]}") - concated_text = decoded -else: - st.markdown(f"The language of this article for sure: {lang.upper()}!") - - st.markdown("### Extracted Text") - st.markdown(f"{concated_text[:777]}") - -tokens_info = tokenizer(concated_text, truncation=True, return_tensors="pt") -with torch.no_grad(): - raw_predictions = model(**tokens_info) -softmaxed = int(torch.nn.functional.softmax(raw_predictions.logits[0], dim=0)[1] * 100) -st.markdown("### Fakeness Prediction") -st.progress(softmaxed) -st.markdown(f"This is fake by **{softmaxed}%**!") -if (softmaxed > 70): - st.error('We would not trust this text!') -elif (softmaxed > 40): - st.warning('We are not sure about this text!') -else: - st.success('We would trust this text!') \ No newline at end of file diff --git a/spaces/Abhilashvj/planogram-compliance/models/common.py b/spaces/Abhilashvj/planogram-compliance/models/common.py deleted file mode 100644 index 5b9ca2d051f8f2c9317dcfda3f989d52d232719a..0000000000000000000000000000000000000000 --- a/spaces/Abhilashvj/planogram-compliance/models/common.py +++ /dev/null @@ -1,1268 +0,0 @@ -# YOLOv5 🚀 by Ultralytics, GPL-3.0 license -""" -Common modules -""" - -import ast -import contextlib -import json -import math -import platform -import warnings -import zipfile -from collections import OrderedDict, namedtuple -from copy import copy -from pathlib import Path -from urllib.parse import urlparse - -import cv2 -import numpy as np -import pandas as pd -import requests -import torch -import torch.nn as nn -from IPython.display import display -from PIL import Image -from torch.cuda import amp - -from utils import TryExcept -from utils.dataloaders import exif_transpose, letterbox -from utils.general import ( - LOGGER, - ROOT, - Profile, - check_requirements, - check_suffix, - check_version, - colorstr, - increment_path, - is_notebook, - make_divisible, - non_max_suppression, - scale_boxes, - xywh2xyxy, - xyxy2xywh, - yaml_load, -) -from utils.plots import Annotator, colors, save_one_box -from utils.torch_utils import copy_attr, smart_inference_mode - - -def autopad(k, p=None, d=1): # kernel, padding, dilation - # Pad to 'same' shape outputs - if d > 1: - k = ( - d * (k - 1) + 1 - if isinstance(k, int) - else [d * (x - 1) + 1 for x in k] - ) # actual kernel-size - if p is None: - p = k // 2 if isinstance(k, int) else [x // 2 for x in k] # auto-pad - return p - - -class Conv(nn.Module): - # Standard convolution with args(ch_in, ch_out, kernel, stride, padding, groups, dilation, activation) - default_act = nn.SiLU() # default activation - - def __init__(self, c1, c2, k=1, s=1, p=None, g=1, d=1, act=True): - super().__init__() - self.conv = nn.Conv2d( - c1, c2, k, s, autopad(k, p, d), groups=g, dilation=d, bias=False - ) - self.bn = nn.BatchNorm2d(c2) - self.act = ( - self.default_act - if act is True - else act - if isinstance(act, nn.Module) - else nn.Identity() - ) - - def forward(self, x): - return self.act(self.bn(self.conv(x))) - - def forward_fuse(self, x): - return 
self.act(self.conv(x)) - - -class DWConv(Conv): - # Depth-wise convolution - def __init__( - self, c1, c2, k=1, s=1, d=1, act=True - ): # ch_in, ch_out, kernel, stride, dilation, activation - super().__init__(c1, c2, k, s, g=math.gcd(c1, c2), d=d, act=act) - - -class DWConvTranspose2d(nn.ConvTranspose2d): - # Depth-wise transpose convolution - def __init__( - self, c1, c2, k=1, s=1, p1=0, p2=0 - ): # ch_in, ch_out, kernel, stride, padding, padding_out - super().__init__(c1, c2, k, s, p1, p2, groups=math.gcd(c1, c2)) - - -class TransformerLayer(nn.Module): - # Transformer layer https://arxiv.org/abs/2010.11929 (LayerNorm layers removed for better performance) - def __init__(self, c, num_heads): - super().__init__() - self.q = nn.Linear(c, c, bias=False) - self.k = nn.Linear(c, c, bias=False) - self.v = nn.Linear(c, c, bias=False) - self.ma = nn.MultiheadAttention(embed_dim=c, num_heads=num_heads) - self.fc1 = nn.Linear(c, c, bias=False) - self.fc2 = nn.Linear(c, c, bias=False) - - def forward(self, x): - x = self.ma(self.q(x), self.k(x), self.v(x))[0] + x - x = self.fc2(self.fc1(x)) + x - return x - - -class TransformerBlock(nn.Module): - # Vision Transformer https://arxiv.org/abs/2010.11929 - def __init__(self, c1, c2, num_heads, num_layers): - super().__init__() - self.conv = None - if c1 != c2: - self.conv = Conv(c1, c2) - self.linear = nn.Linear(c2, c2) # learnable position embedding - self.tr = nn.Sequential( - *(TransformerLayer(c2, num_heads) for _ in range(num_layers)) - ) - self.c2 = c2 - - def forward(self, x): - if self.conv is not None: - x = self.conv(x) - b, _, w, h = x.shape - p = x.flatten(2).permute(2, 0, 1) - return ( - self.tr(p + self.linear(p)) - .permute(1, 2, 0) - .reshape(b, self.c2, w, h) - ) - - -class Bottleneck(nn.Module): - # Standard bottleneck - def __init__( - self, c1, c2, shortcut=True, g=1, e=0.5 - ): # ch_in, ch_out, shortcut, groups, expansion - super().__init__() - c_ = int(c2 * e) # hidden channels - self.cv1 = Conv(c1, c_, 1, 1) - self.cv2 = Conv(c_, c2, 3, 1, g=g) - self.add = shortcut and c1 == c2 - - def forward(self, x): - return x + self.cv2(self.cv1(x)) if self.add else self.cv2(self.cv1(x)) - - -class BottleneckCSP(nn.Module): - # CSP Bottleneck https://github.com/WongKinYiu/CrossStagePartialNetworks - def __init__( - self, c1, c2, n=1, shortcut=True, g=1, e=0.5 - ): # ch_in, ch_out, number, shortcut, groups, expansion - super().__init__() - c_ = int(c2 * e) # hidden channels - self.cv1 = Conv(c1, c_, 1, 1) - self.cv2 = nn.Conv2d(c1, c_, 1, 1, bias=False) - self.cv3 = nn.Conv2d(c_, c_, 1, 1, bias=False) - self.cv4 = Conv(2 * c_, c2, 1, 1) - self.bn = nn.BatchNorm2d(2 * c_) # applied to cat(cv2, cv3) - self.act = nn.SiLU() - self.m = nn.Sequential( - *(Bottleneck(c_, c_, shortcut, g, e=1.0) for _ in range(n)) - ) - - def forward(self, x): - y1 = self.cv3(self.m(self.cv1(x))) - y2 = self.cv2(x) - return self.cv4(self.act(self.bn(torch.cat((y1, y2), 1)))) - - -class CrossConv(nn.Module): - # Cross Convolution Downsample - def __init__(self, c1, c2, k=3, s=1, g=1, e=1.0, shortcut=False): - # ch_in, ch_out, kernel, stride, groups, expansion, shortcut - super().__init__() - c_ = int(c2 * e) # hidden channels - self.cv1 = Conv(c1, c_, (1, k), (1, s)) - self.cv2 = Conv(c_, c2, (k, 1), (s, 1), g=g) - self.add = shortcut and c1 == c2 - - def forward(self, x): - return x + self.cv2(self.cv1(x)) if self.add else self.cv2(self.cv1(x)) - - -class C3(nn.Module): - # CSP Bottleneck with 3 convolutions - def __init__( - self, c1, c2, n=1, shortcut=True, g=1, 
e=0.5 - ): # ch_in, ch_out, number, shortcut, groups, expansion - super().__init__() - c_ = int(c2 * e) # hidden channels - self.cv1 = Conv(c1, c_, 1, 1) - self.cv2 = Conv(c1, c_, 1, 1) - self.cv3 = Conv(2 * c_, c2, 1) # optional act=FReLU(c2) - self.m = nn.Sequential( - *(Bottleneck(c_, c_, shortcut, g, e=1.0) for _ in range(n)) - ) - - def forward(self, x): - return self.cv3(torch.cat((self.m(self.cv1(x)), self.cv2(x)), 1)) - - -class C3x(C3): - # C3 module with cross-convolutions - def __init__(self, c1, c2, n=1, shortcut=True, g=1, e=0.5): - super().__init__(c1, c2, n, shortcut, g, e) - c_ = int(c2 * e) - self.m = nn.Sequential( - *(CrossConv(c_, c_, 3, 1, g, 1.0, shortcut) for _ in range(n)) - ) - - -class C3TR(C3): - # C3 module with TransformerBlock() - def __init__(self, c1, c2, n=1, shortcut=True, g=1, e=0.5): - super().__init__(c1, c2, n, shortcut, g, e) - c_ = int(c2 * e) - self.m = TransformerBlock(c_, c_, 4, n) - - -class C3SPP(C3): - # C3 module with SPP() - def __init__(self, c1, c2, k=(5, 9, 13), n=1, shortcut=True, g=1, e=0.5): - super().__init__(c1, c2, n, shortcut, g, e) - c_ = int(c2 * e) - self.m = SPP(c_, c_, k) - - -class C3Ghost(C3): - # C3 module with GhostBottleneck() - def __init__(self, c1, c2, n=1, shortcut=True, g=1, e=0.5): - super().__init__(c1, c2, n, shortcut, g, e) - c_ = int(c2 * e) # hidden channels - self.m = nn.Sequential(*(GhostBottleneck(c_, c_) for _ in range(n))) - - -class SPP(nn.Module): - # Spatial Pyramid Pooling (SPP) layer https://arxiv.org/abs/1406.4729 - def __init__(self, c1, c2, k=(5, 9, 13)): - super().__init__() - c_ = c1 // 2 # hidden channels - self.cv1 = Conv(c1, c_, 1, 1) - self.cv2 = Conv(c_ * (len(k) + 1), c2, 1, 1) - self.m = nn.ModuleList( - [nn.MaxPool2d(kernel_size=x, stride=1, padding=x // 2) for x in k] - ) - - def forward(self, x): - x = self.cv1(x) - with warnings.catch_warnings(): - warnings.simplefilter( - "ignore" - ) # suppress torch 1.9.0 max_pool2d() warning - return self.cv2(torch.cat([x] + [m(x) for m in self.m], 1)) - - -class SPPF(nn.Module): - # Spatial Pyramid Pooling - Fast (SPPF) layer for YOLOv5 by Glenn Jocher - def __init__(self, c1, c2, k=5): # equivalent to SPP(k=(5, 9, 13)) - super().__init__() - c_ = c1 // 2 # hidden channels - self.cv1 = Conv(c1, c_, 1, 1) - self.cv2 = Conv(c_ * 4, c2, 1, 1) - self.m = nn.MaxPool2d(kernel_size=k, stride=1, padding=k // 2) - - def forward(self, x): - x = self.cv1(x) - with warnings.catch_warnings(): - warnings.simplefilter( - "ignore" - ) # suppress torch 1.9.0 max_pool2d() warning - y1 = self.m(x) - y2 = self.m(y1) - return self.cv2(torch.cat((x, y1, y2, self.m(y2)), 1)) - - -class Focus(nn.Module): - # Focus wh information into c-space - def __init__( - self, c1, c2, k=1, s=1, p=None, g=1, act=True - ): # ch_in, ch_out, kernel, stride, padding, groups - super().__init__() - self.conv = Conv(c1 * 4, c2, k, s, p, g, act=act) - # self.contract = Contract(gain=2) - - def forward(self, x): # x(b,c,w,h) -> y(b,4c,w/2,h/2) - return self.conv( - torch.cat( - ( - x[..., ::2, ::2], - x[..., 1::2, ::2], - x[..., ::2, 1::2], - x[..., 1::2, 1::2], - ), - 1, - ) - ) - # return self.conv(self.contract(x)) - - -class GhostConv(nn.Module): - # Ghost Convolution https://github.com/huawei-noah/ghostnet - def __init__( - self, c1, c2, k=1, s=1, g=1, act=True - ): # ch_in, ch_out, kernel, stride, groups - super().__init__() - c_ = c2 // 2 # hidden channels - self.cv1 = Conv(c1, c_, k, s, None, g, act=act) - self.cv2 = Conv(c_, c_, 5, 1, None, c_, act=act) - - def forward(self, x): - y = 
self.cv1(x) - return torch.cat((y, self.cv2(y)), 1) - - -class GhostBottleneck(nn.Module): - # Ghost Bottleneck https://github.com/huawei-noah/ghostnet - def __init__(self, c1, c2, k=3, s=1): # ch_in, ch_out, kernel, stride - super().__init__() - c_ = c2 // 2 - self.conv = nn.Sequential( - GhostConv(c1, c_, 1, 1), # pw - DWConv(c_, c_, k, s, act=False) if s == 2 else nn.Identity(), # dw - GhostConv(c_, c2, 1, 1, act=False), - ) # pw-linear - self.shortcut = ( - nn.Sequential( - DWConv(c1, c1, k, s, act=False), Conv(c1, c2, 1, 1, act=False) - ) - if s == 2 - else nn.Identity() - ) - - def forward(self, x): - return self.conv(x) + self.shortcut(x) - - -class Contract(nn.Module): - # Contract width-height into channels, i.e. x(1,64,80,80) to x(1,256,40,40) - def __init__(self, gain=2): - super().__init__() - self.gain = gain - - def forward(self, x): - ( - b, - c, - h, - w, - ) = ( - x.size() - ) # assert (h / s == 0) and (W / s == 0), 'Indivisible gain' - s = self.gain - x = x.view(b, c, h // s, s, w // s, s) # x(1,64,40,2,40,2) - x = x.permute(0, 3, 5, 1, 2, 4).contiguous() # x(1,2,2,64,40,40) - return x.view(b, c * s * s, h // s, w // s) # x(1,256,40,40) - - -class Expand(nn.Module): - # Expand channels into width-height, i.e. x(1,64,80,80) to x(1,16,160,160) - def __init__(self, gain=2): - super().__init__() - self.gain = gain - - def forward(self, x): - b, c, h, w = x.size() # assert C / s ** 2 == 0, 'Indivisible gain' - s = self.gain - x = x.view(b, s, s, c // s**2, h, w) # x(1,2,2,16,80,80) - x = x.permute(0, 3, 4, 1, 5, 2).contiguous() # x(1,16,80,2,80,2) - return x.view(b, c // s**2, h * s, w * s) # x(1,16,160,160) - - -class Concat(nn.Module): - # Concatenate a list of tensors along dimension - def __init__(self, dimension=1): - super().__init__() - self.d = dimension - - def forward(self, x): - return torch.cat(x, self.d) - - -class DetectMultiBackend(nn.Module): - # YOLOv5 MultiBackend class for python inference on various backends - def __init__( - self, - weights="yolov5s.pt", - device=torch.device("cpu"), - dnn=False, - data=None, - fp16=False, - fuse=True, - ): - # Usage: - # PyTorch: weights = *.pt - # TorchScript: *.torchscript - # ONNX Runtime: *.onnx - # ONNX OpenCV DNN: *.onnx --dnn - # OpenVINO: *_openvino_model - # CoreML: *.mlmodel - # TensorRT: *.engine - # TensorFlow SavedModel: *_saved_model - # TensorFlow GraphDef: *.pb - # TensorFlow Lite: *.tflite - # TensorFlow Edge TPU: *_edgetpu.tflite - # PaddlePaddle: *_paddle_model - from models.experimental import ( # scoped to avoid circular import - attempt_download, - attempt_load, - ) - - super().__init__() - w = str(weights[0] if isinstance(weights, list) else weights) - ( - pt, - jit, - onnx, - xml, - engine, - coreml, - saved_model, - pb, - tflite, - edgetpu, - tfjs, - paddle, - triton, - ) = self._model_type(w) - fp16 &= pt or jit or onnx or engine # FP16 - nhwc = ( - coreml or saved_model or pb or tflite or edgetpu - ) # BHWC formats (vs torch BCWH) - stride = 32 # default stride - cuda = torch.cuda.is_available() and device.type != "cpu" # use CUDA - if not (pt or triton): - w = attempt_download(w) # download if not local - - if pt: # PyTorch - model = attempt_load( - weights if isinstance(weights, list) else w, - device=device, - inplace=True, - fuse=fuse, - ) - stride = max(int(model.stride.max()), 32) # model stride - names = ( - model.module.names if hasattr(model, "module") else model.names - ) # get class names - model.half() if fp16 else model.float() - self.model = ( - model # explicitly assign for to(), 
cpu(), cuda(), half() - ) - elif jit: # TorchScript - LOGGER.info(f"Loading {w} for TorchScript inference...") - extra_files = {"config.txt": ""} # model metadata - model = torch.jit.load( - w, _extra_files=extra_files, map_location=device - ) - model.half() if fp16 else model.float() - if extra_files["config.txt"]: # load metadata dict - d = json.loads( - extra_files["config.txt"], - object_hook=lambda d: { - int(k) if k.isdigit() else k: v for k, v in d.items() - }, - ) - stride, names = int(d["stride"]), d["names"] - elif dnn: # ONNX OpenCV DNN - LOGGER.info(f"Loading {w} for ONNX OpenCV DNN inference...") - check_requirements("opencv-python>=4.5.4") - net = cv2.dnn.readNetFromONNX(w) - elif onnx: # ONNX Runtime - LOGGER.info(f"Loading {w} for ONNX Runtime inference...") - check_requirements( - ("onnx", "onnxruntime-gpu" if cuda else "onnxruntime") - ) - import onnxruntime - - providers = ( - ["CUDAExecutionProvider", "CPUExecutionProvider"] - if cuda - else ["CPUExecutionProvider"] - ) - session = onnxruntime.InferenceSession(w, providers=providers) - output_names = [x.name for x in session.get_outputs()] - meta = session.get_modelmeta().custom_metadata_map # metadata - if "stride" in meta: - stride, names = int(meta["stride"]), eval(meta["names"]) - elif xml: # OpenVINO - LOGGER.info(f"Loading {w} for OpenVINO inference...") - check_requirements( - "openvino" - ) # requires openvino-dev: https://pypi.org/project/openvino-dev/ - from openvino.runtime import Core, Layout, get_batch - - ie = Core() - if not Path(w).is_file(): # if not *.xml - w = next( - Path(w).glob("*.xml") - ) # get *.xml file from *_openvino_model dir - network = ie.read_model( - model=w, weights=Path(w).with_suffix(".bin") - ) - if network.get_parameters()[0].get_layout().empty: - network.get_parameters()[0].set_layout(Layout("NCHW")) - batch_dim = get_batch(network) - if batch_dim.is_static: - batch_size = batch_dim.get_length() - executable_network = ie.compile_model( - network, device_name="CPU" - ) # device_name="MYRIAD" for Intel NCS2 - stride, names = self._load_metadata( - Path(w).with_suffix(".yaml") - ) # load metadata - elif engine: # TensorRT - LOGGER.info(f"Loading {w} for TensorRT inference...") - import tensorrt as trt # https://developer.nvidia.com/nvidia-tensorrt-download - - check_version( - trt.__version__, "7.0.0", hard=True - ) # require tensorrt>=7.0.0 - if device.type == "cpu": - device = torch.device("cuda:0") - Binding = namedtuple( - "Binding", ("name", "dtype", "shape", "data", "ptr") - ) - logger = trt.Logger(trt.Logger.INFO) - with open(w, "rb") as f, trt.Runtime(logger) as runtime: - model = runtime.deserialize_cuda_engine(f.read()) - context = model.create_execution_context() - bindings = OrderedDict() - output_names = [] - fp16 = False # default updated below - dynamic = False - for i in range(model.num_bindings): - name = model.get_binding_name(i) - dtype = trt.nptype(model.get_binding_dtype(i)) - if model.binding_is_input(i): - if -1 in tuple(model.get_binding_shape(i)): # dynamic - dynamic = True - context.set_binding_shape( - i, tuple(model.get_profile_shape(0, i)[2]) - ) - if dtype == np.float16: - fp16 = True - else: # output - output_names.append(name) - shape = tuple(context.get_binding_shape(i)) - im = torch.from_numpy(np.empty(shape, dtype=dtype)).to(device) - bindings[name] = Binding( - name, dtype, shape, im, int(im.data_ptr()) - ) - binding_addrs = OrderedDict( - (n, d.ptr) for n, d in bindings.items() - ) - batch_size = bindings["images"].shape[ - 0 - ] # if dynamic, this 
is instead max batch size - elif coreml: # CoreML - LOGGER.info(f"Loading {w} for CoreML inference...") - import coremltools as ct - - model = ct.models.MLModel(w) - elif saved_model: # TF SavedModel - LOGGER.info(f"Loading {w} for TensorFlow SavedModel inference...") - import tensorflow as tf - - keras = False # assume TF1 saved_model - model = ( - tf.keras.models.load_model(w) - if keras - else tf.saved_model.load(w) - ) - elif ( - pb - ): # GraphDef https://www.tensorflow.org/guide/migrate#a_graphpb_or_graphpbtxt - LOGGER.info(f"Loading {w} for TensorFlow GraphDef inference...") - import tensorflow as tf - - def wrap_frozen_graph(gd, inputs, outputs): - x = tf.compat.v1.wrap_function( - lambda: tf.compat.v1.import_graph_def(gd, name=""), [] - ) # wrapped - ge = x.graph.as_graph_element - return x.prune( - tf.nest.map_structure(ge, inputs), - tf.nest.map_structure(ge, outputs), - ) - - def gd_outputs(gd): - name_list, input_list = [], [] - for ( - node - ) in gd.node: # tensorflow.core.framework.node_def_pb2.NodeDef - name_list.append(node.name) - input_list.extend(node.input) - return sorted( - f"{x}:0" - for x in list(set(name_list) - set(input_list)) - if not x.startswith("NoOp") - ) - - gd = tf.Graph().as_graph_def() # TF GraphDef - with open(w, "rb") as f: - gd.ParseFromString(f.read()) - frozen_func = wrap_frozen_graph( - gd, inputs="x:0", outputs=gd_outputs(gd) - ) - elif ( - tflite or edgetpu - ): # https://www.tensorflow.org/lite/guide/python#install_tensorflow_lite_for_python - try: # https://coral.ai/docs/edgetpu/tflite-python/#update-existing-tf-lite-code-for-the-edge-tpu - from tflite_runtime.interpreter import Interpreter, load_delegate - except ImportError: - import tensorflow as tf - - Interpreter, load_delegate = ( - tf.lite.Interpreter, - tf.lite.experimental.load_delegate, - ) - if ( - edgetpu - ): # TF Edge TPU https://coral.ai/software/#edgetpu-runtime - LOGGER.info( - f"Loading {w} for TensorFlow Lite Edge TPU inference..." 
- ) - delegate = { - "Linux": "libedgetpu.so.1", - "Darwin": "libedgetpu.1.dylib", - "Windows": "edgetpu.dll", - }[platform.system()] - interpreter = Interpreter( - model_path=w, - experimental_delegates=[load_delegate(delegate)], - ) - else: # TFLite - LOGGER.info(f"Loading {w} for TensorFlow Lite inference...") - interpreter = Interpreter(model_path=w) # load TFLite model - interpreter.allocate_tensors() # allocate - input_details = interpreter.get_input_details() # inputs - output_details = interpreter.get_output_details() # outputs - # load metadata - with contextlib.suppress(zipfile.BadZipFile): - with zipfile.ZipFile(w, "r") as model: - meta_file = model.namelist()[0] - meta = ast.literal_eval( - model.read(meta_file).decode("utf-8") - ) - stride, names = int(meta["stride"]), meta["names"] - elif tfjs: # TF.js - raise NotImplementedError( - "ERROR: YOLOv5 TF.js inference is not supported" - ) - elif paddle: # PaddlePaddle - LOGGER.info(f"Loading {w} for PaddlePaddle inference...") - check_requirements("paddlepaddle-gpu" if cuda else "paddlepaddle") - import paddle.inference as pdi - - if not Path(w).is_file(): # if not *.pdmodel - w = next( - Path(w).rglob("*.pdmodel") - ) # get *.pdmodel file from *_paddle_model dir - weights = Path(w).with_suffix(".pdiparams") - config = pdi.Config(str(w), str(weights)) - if cuda: - config.enable_use_gpu( - memory_pool_init_size_mb=2048, device_id=0 - ) - predictor = pdi.create_predictor(config) - input_handle = predictor.get_input_handle( - predictor.get_input_names()[0] - ) - output_names = predictor.get_output_names() - elif triton: # NVIDIA Triton Inference Server - LOGGER.info(f"Using {w} as Triton Inference Server...") - check_requirements("tritonclient[all]") - from utils.triton import TritonRemoteModel - - model = TritonRemoteModel(url=w) - nhwc = model.runtime.startswith("tensorflow") - else: - raise NotImplementedError(f"ERROR: {w} is not a supported format") - - # class names - if "names" not in locals(): - names = ( - yaml_load(data)["names"] - if data - else {i: f"class{i}" for i in range(999)} - ) - if names[0] == "n01440764" and len(names) == 1000: # ImageNet - names = yaml_load(ROOT / "data/ImageNet.yaml")[ - "names" - ] # human-readable names - - self.__dict__.update(locals()) # assign all variables to self - - def forward(self, im, augment=False, visualize=False): - # YOLOv5 MultiBackend inference - b, ch, h, w = im.shape # batch, channel, height, width - if self.fp16 and im.dtype != torch.float16: - im = im.half() # to FP16 - if self.nhwc: - im = im.permute( - 0, 2, 3, 1 - ) # torch BCHW to numpy BHWC shape(1,320,192,3) - - if self.pt: # PyTorch - y = ( - self.model(im, augment=augment, visualize=visualize) - if augment or visualize - else self.model(im) - ) - elif self.jit: # TorchScript - y = self.model(im) - elif self.dnn: # ONNX OpenCV DNN - im = im.cpu().numpy() # torch to numpy - self.net.setInput(im) - y = self.net.forward() - elif self.onnx: # ONNX Runtime - im = im.cpu().numpy() # torch to numpy - y = self.session.run( - self.output_names, {self.session.get_inputs()[0].name: im} - ) - elif self.xml: # OpenVINO - im = im.cpu().numpy() # FP32 - y = list(self.executable_network([im]).values()) - elif self.engine: # TensorRT - if self.dynamic and im.shape != self.bindings["images"].shape: - i = self.model.get_binding_index("images") - self.context.set_binding_shape( - i, im.shape - ) # reshape if dynamic - self.bindings["images"] = self.bindings["images"]._replace( - shape=im.shape - ) - for name in self.output_names: - i = 
self.model.get_binding_index(name) - self.bindings[name].data.resize_( - tuple(self.context.get_binding_shape(i)) - ) - s = self.bindings["images"].shape - assert ( - im.shape == s - ), f"input size {im.shape} {'>' if self.dynamic else 'not equal to'} max model size {s}" - self.binding_addrs["images"] = int(im.data_ptr()) - self.context.execute_v2(list(self.binding_addrs.values())) - y = [self.bindings[x].data for x in sorted(self.output_names)] - elif self.coreml: # CoreML - im = im.cpu().numpy() - im = Image.fromarray((im[0] * 255).astype("uint8")) - # im = im.resize((192, 320), Image.ANTIALIAS) - y = self.model.predict( - {"image": im} - ) # coordinates are xywh normalized - if "confidence" in y: - box = xywh2xyxy( - y["coordinates"] * [[w, h, w, h]] - ) # xyxy pixels - conf, cls = y["confidence"].max(1), y["confidence"].argmax( - 1 - ).astype(np.float) - y = np.concatenate( - (box, conf.reshape(-1, 1), cls.reshape(-1, 1)), 1 - ) - else: - y = list( - reversed(y.values()) - ) # reversed for segmentation models (pred, proto) - elif self.paddle: # PaddlePaddle - im = im.cpu().numpy().astype(np.float32) - self.input_handle.copy_from_cpu(im) - self.predictor.run() - y = [ - self.predictor.get_output_handle(x).copy_to_cpu() - for x in self.output_names - ] - elif self.triton: # NVIDIA Triton Inference Server - y = self.model(im) - else: # TensorFlow (SavedModel, GraphDef, Lite, Edge TPU) - im = im.cpu().numpy() - if self.saved_model: # SavedModel - y = ( - self.model(im, training=False) - if self.keras - else self.model(im) - ) - elif self.pb: # GraphDef - y = self.frozen_func(x=self.tf.constant(im)) - else: # Lite or Edge TPU - input = self.input_details[0] - int8 = ( - input["dtype"] == np.uint8 - ) # is TFLite quantized uint8 model - if int8: - scale, zero_point = input["quantization"] - im = (im / scale + zero_point).astype(np.uint8) # de-scale - self.interpreter.set_tensor(input["index"], im) - self.interpreter.invoke() - y = [] - for output in self.output_details: - x = self.interpreter.get_tensor(output["index"]) - if int8: - scale, zero_point = output["quantization"] - x = ( - x.astype(np.float32) - zero_point - ) * scale # re-scale - y.append(x) - y = [x if isinstance(x, np.ndarray) else x.numpy() for x in y] - y[0][..., :4] *= [w, h, w, h] # xywh normalized to pixels - - if isinstance(y, (list, tuple)): - return ( - self.from_numpy(y[0]) - if len(y) == 1 - else [self.from_numpy(x) for x in y] - ) - else: - return self.from_numpy(y) - - def from_numpy(self, x): - return ( - torch.from_numpy(x).to(self.device) - if isinstance(x, np.ndarray) - else x - ) - - def warmup(self, imgsz=(1, 3, 640, 640)): - # Warmup model by running inference once - warmup_types = ( - self.pt, - self.jit, - self.onnx, - self.engine, - self.saved_model, - self.pb, - self.triton, - ) - if any(warmup_types) and (self.device.type != "cpu" or self.triton): - im = torch.empty( - *imgsz, - dtype=torch.half if self.fp16 else torch.float, - device=self.device, - ) # input - for _ in range(2 if self.jit else 1): # - self.forward(im) # warmup - - @staticmethod - def _model_type(p="path/to/model.pt"): - # Return model type from model path, i.e. 
path='path/to/model.onnx' -> type=onnx - # types = [pt, jit, onnx, xml, engine, coreml, saved_model, pb, tflite, edgetpu, tfjs, paddle] - from export import export_formats - from utils.downloads import is_url - - sf = list(export_formats().Suffix) # export suffixes - if not is_url(p, check=False): - check_suffix(p, sf) # checks - url = urlparse(p) # if url may be Triton inference server - types = [s in Path(p).name for s in sf] - types[8] &= not types[9] # tflite &= not edgetpu - triton = not any(types) and all( - [any(s in url.scheme for s in ["http", "grpc"]), url.netloc] - ) - return types + [triton] - - @staticmethod - def _load_metadata(f=Path("path/to/meta.yaml")): - # Load metadata from meta.yaml if it exists - if f.exists(): - d = yaml_load(f) - return d["stride"], d["names"] # assign stride, names - return None, None - - -class AutoShape(nn.Module): - # YOLOv5 input-robust model wrapper for passing cv2/np/PIL/torch inputs. Includes preprocessing, inference and NMS - conf = 0.25 # NMS confidence threshold - iou = 0.45 # NMS IoU threshold - agnostic = False # NMS class-agnostic - multi_label = False # NMS multiple labels per box - classes = None # (optional list) filter by class, i.e. = [0, 15, 16] for COCO persons, cats and dogs - max_det = 1000 # maximum number of detections per image - amp = False # Automatic Mixed Precision (AMP) inference - - def __init__(self, model, verbose=True): - super().__init__() - if verbose: - LOGGER.info("Adding AutoShape... ") - copy_attr( - self, - model, - include=("yaml", "nc", "hyp", "names", "stride", "abc"), - exclude=(), - ) # copy attributes - self.dmb = isinstance( - model, DetectMultiBackend - ) # DetectMultiBackend() instance - self.pt = not self.dmb or model.pt # PyTorch model - self.model = model.eval() - if self.pt: - m = ( - self.model.model.model[-1] - if self.dmb - else self.model.model[-1] - ) # Detect() - m.inplace = ( - False # Detect.inplace=False for safe multithread inference - ) - m.export = True # do not output loss values - - def _apply(self, fn): - # Apply to(), cpu(), cuda(), half() to model tensors that are not parameters or registered buffers - self = super()._apply(fn) - if self.pt: - m = ( - self.model.model.model[-1] - if self.dmb - else self.model.model[-1] - ) # Detect() - m.stride = fn(m.stride) - m.grid = list(map(fn, m.grid)) - if isinstance(m.anchor_grid, list): - m.anchor_grid = list(map(fn, m.anchor_grid)) - return self - - @smart_inference_mode() - def forward(self, ims, size=640, augment=False, profile=False): - # Inference from various sources. For size(height=640, width=1280), RGB images example inputs are: - # file: ims = 'data/images/zidane.jpg' # str or PosixPath - # URI: = 'https://ultralytics.com/images/zidane.jpg' - # OpenCV: = cv2.imread('image.jpg')[:,:,::-1] # HWC BGR to RGB x(640,1280,3) - # PIL: = Image.open('image.jpg') or ImageGrab.grab() # HWC x(640,1280,3) - # numpy: = np.zeros((640,1280,3)) # HWC - # torch: = torch.zeros(16,3,320,640) # BCHW (scaled to size=640, 0-1 values) - # multiple: = [Image.open('image1.jpg'), Image.open('image2.jpg'), ...] 
# list of images - - dt = (Profile(), Profile(), Profile()) - with dt[0]: - if isinstance(size, int): # expand - size = (size, size) - p = ( - next(self.model.parameters()) - if self.pt - else torch.empty(1, device=self.model.device) - ) # param - autocast = self.amp and ( - p.device.type != "cpu" - ) # Automatic Mixed Precision (AMP) inference - if isinstance(ims, torch.Tensor): # torch - with amp.autocast(autocast): - return self.model( - ims.to(p.device).type_as(p), augment=augment - ) # inference - - # Pre-process - n, ims = ( - (len(ims), list(ims)) - if isinstance(ims, (list, tuple)) - else (1, [ims]) - ) # number, list of images - shape0, shape1, files = ( - [], - [], - [], - ) # image and inference shapes, filenames - for i, im in enumerate(ims): - f = f"image{i}" # filename - if isinstance(im, (str, Path)): # filename or uri - im, f = ( - Image.open( - requests.get(im, stream=True).raw - if str(im).startswith("http") - else im - ), - im, - ) - im = np.asarray(exif_transpose(im)) - elif isinstance(im, Image.Image): # PIL Image - im, f = ( - np.asarray(exif_transpose(im)), - getattr(im, "filename", f) or f, - ) - files.append(Path(f).with_suffix(".jpg").name) - if im.shape[0] < 5: # image in CHW - im = im.transpose( - (1, 2, 0) - ) # reverse dataloader .transpose(2, 0, 1) - im = ( - im[..., :3] - if im.ndim == 3 - else cv2.cvtColor(im, cv2.COLOR_GRAY2BGR) - ) # enforce 3ch input - s = im.shape[:2] # HWC - shape0.append(s) # image shape - g = max(size) / max(s) # gain - shape1.append([int(y * g) for y in s]) - ims[i] = ( - im if im.data.contiguous else np.ascontiguousarray(im) - ) # update - shape1 = [ - make_divisible(x, self.stride) for x in np.array(shape1).max(0) - ] # inf shape - x = [letterbox(im, shape1, auto=False)[0] for im in ims] # pad - x = np.ascontiguousarray( - np.array(x).transpose((0, 3, 1, 2)) - ) # stack and BHWC to BCHW - x = ( - torch.from_numpy(x).to(p.device).type_as(p) / 255 - ) # uint8 to fp16/32 - - with amp.autocast(autocast): - # Inference - with dt[1]: - y = self.model(x, augment=augment) # forward - - # Post-process - with dt[2]: - y = non_max_suppression( - y if self.dmb else y[0], - self.conf, - self.iou, - self.classes, - self.agnostic, - self.multi_label, - max_det=self.max_det, - ) # NMS - for i in range(n): - scale_boxes(shape1, y[i][:, :4], shape0[i]) - - return Detections(ims, y, files, dt, self.names, x.shape) - - -class Detections: - # YOLOv5 detections class for inference results - def __init__( - self, ims, pred, files, times=(0, 0, 0), names=None, shape=None - ): - super().__init__() - d = pred[0].device # device - gn = [ - torch.tensor( - [*(im.shape[i] for i in [1, 0, 1, 0]), 1, 1], device=d - ) - for im in ims - ] # normalizations - self.ims = ims # list of images as numpy arrays - self.pred = pred # list of tensors pred[0] = (xyxy, conf, cls) - self.names = names # class names - self.files = files # image filenames - self.times = times # profiling times - self.xyxy = pred # xyxy pixels - self.xywh = [xyxy2xywh(x) for x in pred] # xywh pixels - self.xyxyn = [x / g for x, g in zip(self.xyxy, gn)] # xyxy normalized - self.xywhn = [x / g for x, g in zip(self.xywh, gn)] # xywh normalized - self.n = len(self.pred) # number of images (batch size) - self.t = tuple(x.t / self.n * 1e3 for x in times) # timestamps (ms) - self.s = tuple(shape) # inference BCHW shape - - def _run( - self, - pprint=False, - show=False, - save=False, - crop=False, - render=False, - labels=True, - save_dir=Path(""), - ): - s, crops = "", [] - for i, (im, pred) in 
enumerate(zip(self.ims, self.pred)): - s += f"\nimage {i + 1}/{len(self.pred)}: {im.shape[0]}x{im.shape[1]} " # string - if pred.shape[0]: - for c in pred[:, -1].unique(): - n = (pred[:, -1] == c).sum() # detections per class - s += f"{n} {self.names[int(c)]}{'s' * (n > 1)}, " # add to string - s = s.rstrip(", ") - if show or save or render or crop: - annotator = Annotator(im, example=str(self.names)) - for *box, conf, cls in reversed( - pred - ): # xyxy, confidence, class - label = f"{self.names[int(cls)]} {conf:.2f}" - if crop: - file = ( - save_dir - / "crops" - / self.names[int(cls)] - / self.files[i] - if save - else None - ) - crops.append( - { - "box": box, - "conf": conf, - "cls": cls, - "label": label, - "im": save_one_box( - box, im, file=file, save=save - ), - } - ) - else: # all others - annotator.box_label( - box, label if labels else "", color=colors(cls) - ) - im = annotator.im - else: - s += "(no detections)" - - im = ( - Image.fromarray(im.astype(np.uint8)) - if isinstance(im, np.ndarray) - else im - ) # from np - if show: - display(im) if is_notebook() else im.show(self.files[i]) - if save: - f = self.files[i] - im.save(save_dir / f) # save - if i == self.n - 1: - LOGGER.info( - f"Saved {self.n} image{'s' * (self.n > 1)} to {colorstr('bold', save_dir)}" - ) - if render: - self.ims[i] = np.asarray(im) - if pprint: - s = s.lstrip("\n") - return ( - f"{s}\nSpeed: %.1fms pre-process, %.1fms inference, %.1fms NMS per image at shape {self.s}" - % self.t - ) - if crop: - if save: - LOGGER.info(f"Saved results to {save_dir}\n") - return crops - - @TryExcept("Showing images is not supported in this environment") - def show(self, labels=True): - self._run(show=True, labels=labels) # show results - - def save(self, labels=True, save_dir="runs/detect/exp", exist_ok=False): - save_dir = increment_path( - save_dir, exist_ok, mkdir=True - ) # increment save_dir - self._run(save=True, labels=labels, save_dir=save_dir) # save results - - def crop(self, save=True, save_dir="runs/detect/exp", exist_ok=False): - save_dir = ( - increment_path(save_dir, exist_ok, mkdir=True) if save else None - ) - return self._run( - crop=True, save=save, save_dir=save_dir - ) # crop results - - def render(self, labels=True): - self._run(render=True, labels=labels) # render results - return self.ims - - def pandas(self): - # return detections as pandas DataFrames, i.e. print(results.pandas().xyxy[0]) - new = copy(self) # return copy - ca = ( - "xmin", - "ymin", - "xmax", - "ymax", - "confidence", - "class", - "name", - ) # xyxy columns - cb = ( - "xcenter", - "ycenter", - "width", - "height", - "confidence", - "class", - "name", - ) # xywh columns - for k, c in zip(["xyxy", "xyxyn", "xywh", "xywhn"], [ca, ca, cb, cb]): - a = [ - [ - x[:5] + [int(x[5]), self.names[int(x[5])]] - for x in x.tolist() - ] - for x in getattr(self, k) - ] # update - setattr(new, k, [pd.DataFrame(x, columns=c) for x in a]) - return new - - def tolist(self): - # return a list of Detections objects, i.e. 
'for result in results.tolist():' - r = range(self.n) # iterable - x = [ - Detections( - [self.ims[i]], - [self.pred[i]], - [self.files[i]], - self.times, - self.names, - self.s, - ) - for i in r - ] - # for d in x: - # for k in ['ims', 'pred', 'xyxy', 'xyxyn', 'xywh', 'xywhn']: - # setattr(d, k, getattr(d, k)[0]) # pop out of list - return x - - def print(self): - LOGGER.info(self.__str__()) - - def __len__(self): # override len(results) - return self.n - - def __str__(self): # override print(results) - return self._run(pprint=True) # print results - - def __repr__(self): - return f"YOLOv5 {self.__class__} instance\n" + self.__str__() - - -class Proto(nn.Module): - # YOLOv5 mask Proto module for segmentation models - def __init__( - self, c1, c_=256, c2=32 - ): # ch_in, number of protos, number of masks - super().__init__() - self.cv1 = Conv(c1, c_, k=3) - self.upsample = nn.Upsample(scale_factor=2, mode="nearest") - self.cv2 = Conv(c_, c_, k=3) - self.cv3 = Conv(c_, c2) - - def forward(self, x): - return self.cv3(self.cv2(self.upsample(self.cv1(x)))) - - -class Classify(nn.Module): - # YOLOv5 classification head, i.e. x(b,c1,20,20) to x(b,c2) - def __init__( - self, c1, c2, k=1, s=1, p=None, g=1 - ): # ch_in, ch_out, kernel, stride, padding, groups - super().__init__() - c_ = 1280 # efficientnet_b0 size - self.conv = Conv(c1, c_, k, s, autopad(k, p), g) - self.pool = nn.AdaptiveAvgPool2d(1) # to x(b,c_,1,1) - self.drop = nn.Dropout(p=0.0, inplace=True) - self.linear = nn.Linear(c_, c2) # to x(b,c2) - - def forward(self, x): - if isinstance(x, list): - x = torch.cat(x, 1) - return self.linear(self.drop(self.pool(self.conv(x)).flatten(1))) diff --git a/spaces/Abhilashvj/planogram-compliance/utils/flask_rest_api/restapi.py b/spaces/Abhilashvj/planogram-compliance/utils/flask_rest_api/restapi.py deleted file mode 100644 index 1674bda0d96db810736e3ded29c867a94d6db9e9..0000000000000000000000000000000000000000 --- a/spaces/Abhilashvj/planogram-compliance/utils/flask_rest_api/restapi.py +++ /dev/null @@ -1,61 +0,0 @@ -# YOLOv5 🚀 by Ultralytics, GPL-3.0 license -""" -Run a Flask REST API exposing one or more YOLOv5s models -""" - -import argparse -import io - -import torch -from flask import Flask, request -from PIL import Image - -app = Flask(__name__) -models = {} - -DETECTION_URL = "/v1/object-detection/" - - -@app.route(DETECTION_URL, methods=["POST"]) -def predict(model): - if request.method != "POST": - return - - if request.files.get("image"): - # Method 1 - # with request.files["image"] as f: - # im = Image.open(io.BytesIO(f.read())) - - # Method 2 - im_file = request.files["image"] - im_bytes = im_file.read() - im = Image.open(io.BytesIO(im_bytes)) - - if model in models: - results = models[model]( - im, size=640 - ) # reduce size=320 for faster inference - return results.pandas().xyxy[0].to_json(orient="records") - - -if __name__ == "__main__": - parser = argparse.ArgumentParser( - description="Flask API exposing YOLOv5 model" - ) - parser.add_argument("--port", default=5000, type=int, help="port number") - parser.add_argument( - "--model", - nargs="+", - default=["yolov5s"], - help="model(s) to run, i.e. 
--model yolov5n yolov5s", - ) - opt = parser.parse_args() - - for m in opt.model: - models[m] = torch.hub.load( - "ultralytics/yolov5", m, force_reload=True, skip_validation=True - ) - - app.run( - host="0.0.0.0", port=opt.port - ) # debug=True causes Restarting with stat diff --git a/spaces/Androidonnxfork/CivitAi-to-Diffusers/diffusers/docs/source/en/api/schedulers/overview.md b/spaces/Androidonnxfork/CivitAi-to-Diffusers/diffusers/docs/source/en/api/schedulers/overview.md deleted file mode 100644 index a8f4dcd4d0b06023ff3c4526416cc7947f271e15..0000000000000000000000000000000000000000 --- a/spaces/Androidonnxfork/CivitAi-to-Diffusers/diffusers/docs/source/en/api/schedulers/overview.md +++ /dev/null @@ -1,92 +0,0 @@ - - -# Schedulers - -Diffusers contains multiple pre-built schedule functions for the diffusion process. - -## What is a scheduler? - -The schedule functions, denoted *Schedulers* in the library take in the output of a trained model, a sample which the diffusion process is iterating on, and a timestep to return a denoised sample. That's why schedulers may also be called *Samplers* in other diffusion models implementations. - -- Schedulers define the methodology for iteratively adding noise to an image or for updating a sample based on model outputs. - - adding noise in different manners represent the algorithmic processes to train a diffusion model by adding noise to images. - - for inference, the scheduler defines how to update a sample based on an output from a pretrained model. -- Schedulers are often defined by a *noise schedule* and an *update rule* to solve the differential equation solution. - -### Discrete versus continuous schedulers - -All schedulers take in a timestep to predict the updated version of the sample being diffused. -The timesteps dictate where in the diffusion process the step is, where data is generated by iterating forward in time and inference is executed by propagating backwards through timesteps. -Different algorithms use timesteps that can be discrete (accepting `int` inputs), such as the [`DDPMScheduler`] or [`PNDMScheduler`], or continuous (accepting `float` inputs), such as the score-based schedulers [`ScoreSdeVeScheduler`] or [`ScoreSdeVpScheduler`]. - -## Designing Re-usable schedulers - -The core design principle between the schedule functions is to be model, system, and framework independent. -This allows for rapid experimentation and cleaner abstractions in the code, where the model prediction is separated from the sample update. -To this end, the design of schedulers is such that: - -- Schedulers can be used interchangeably between diffusion models in inference to find the preferred trade-off between speed and generation quality. -- Schedulers are currently by default in PyTorch, but are designed to be framework independent (partial Jax support currently exists). 
-- Many diffusion pipelines, such as [`StableDiffusionPipeline`] and [`DiTPipeline`], can use any of the [`KarrasDiffusionSchedulers`].
-
-## Schedulers Summary
-
-The following table summarizes all officially supported schedulers and their corresponding papers:
-
-| Scheduler | Paper |
-|---|---|
-| [ddim](./ddim) | [**Denoising Diffusion Implicit Models**](https://arxiv.org/abs/2010.02502) |
-| [ddim_inverse](./ddim_inverse) | [**Denoising Diffusion Implicit Models**](https://arxiv.org/abs/2010.02502) |
-| [ddpm](./ddpm) | [**Denoising Diffusion Probabilistic Models**](https://arxiv.org/abs/2006.11239) |
-| [deis](./deis) | [**DEISMultistepScheduler**](https://arxiv.org/abs/2204.13902) |
-| [singlestep_dpm_solver](./singlestep_dpm_solver) | [**Singlestep DPM-Solver**](https://arxiv.org/abs/2206.00927) |
-| [multistep_dpm_solver](./multistep_dpm_solver) | [**Multistep DPM-Solver**](https://arxiv.org/abs/2206.00927) |
-| [heun](./heun) | [**Heun scheduler inspired by the Karras et al. paper**](https://arxiv.org/abs/2206.00364) |
-| [dpm_discrete](./dpm_discrete) | [**DPM Discrete Scheduler inspired by the Karras et al. paper**](https://arxiv.org/abs/2206.00364) |
-| [dpm_discrete_ancestral](./dpm_discrete_ancestral) | [**DPM Discrete Scheduler with ancestral sampling inspired by the Karras et al. paper**](https://arxiv.org/abs/2206.00364) |
-| [stochastic_karras_ve](./stochastic_karras_ve) | [**Variance exploding, stochastic sampling from Karras et al.**](https://arxiv.org/abs/2206.00364) |
-| [lms_discrete](./lms_discrete) | [**Linear multistep scheduler for discrete beta schedules**](https://arxiv.org/abs/2206.00364) |
-| [pndm](./pndm) | [**Pseudo numerical methods for diffusion models (PNDM)**](https://github.com/crowsonkb/k-diffusion/blob/481677d114f6ea445aa009cf5bd7a9cdee909e47/k_diffusion/sampling.py#L181) |
-| [score_sde_ve](./score_sde_ve) | [**variance exploding stochastic differential equation (VE-SDE) scheduler**](https://arxiv.org/abs/2011.13456) |
-| [ipndm](./ipndm) | [**improved pseudo numerical methods for diffusion models (iPNDM)**](https://github.com/crowsonkb/v-diffusion-pytorch/blob/987f8985e38208345c1959b0ea767a625831cc9b/diffusion/sampling.py#L296) |
-| [score_sde_vp](./score_sde_vp) | [**Variance preserving stochastic differential equation (VP-SDE) scheduler**](https://arxiv.org/abs/2011.13456) |
-| [euler](./euler) | [**Euler scheduler**](https://arxiv.org/abs/2206.00364) |
-| [euler_ancestral](./euler_ancestral) | [**Euler Ancestral scheduler**](https://github.com/crowsonkb/k-diffusion/blob/481677d114f6ea445aa009cf5bd7a9cdee909e47/k_diffusion/sampling.py#L72) |
-| [vq_diffusion](./vq_diffusion) | [**VQDiffusionScheduler**](https://arxiv.org/abs/2111.14822) |
-| [unipc](./unipc) | [**UniPCMultistepScheduler**](https://arxiv.org/abs/2302.04867) |
-| [repaint](./repaint) | [**RePaint scheduler**](https://arxiv.org/abs/2201.09865) |
-
-## API
-
-The core API for any new scheduler must follow a limited structure.
-- Schedulers should provide one or more `def step(...)` functions that should be called to update the generated sample iteratively.
-- Schedulers should provide a `set_timesteps(...)` method that configures the parameters of a schedule function for a specific inference task.
-- Schedulers should be framework-specific.
-
-The base class [`SchedulerMixin`] implements low level utilities used by multiple schedulers.
-
-### SchedulerMixin
-[[autodoc]] SchedulerMixin
-
-### SchedulerOutput
-The class [`SchedulerOutput`] contains the outputs from any scheduler's `step(...)` call.
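-
-As a rough illustration of how `set_timesteps(...)`, `step(...)`, and the resulting [`SchedulerOutput`] fit together, a minimal denoising loop might look like the sketch below. The checkpoint name and the `UNet2DModel` are placeholder assumptions; any trained noise-prediction model follows the same pattern:
-
-```python
-import torch
-from diffusers import DDPMScheduler, UNet2DModel
-
-# Placeholder checkpoint; any DDPM-style repo with "unet" and "scheduler" subfolders works.
-model = UNet2DModel.from_pretrained("google/ddpm-cat-256", subfolder="unet")
-scheduler = DDPMScheduler.from_pretrained("google/ddpm-cat-256", subfolder="scheduler")
-
-scheduler.set_timesteps(50)  # configure the schedule for 50 inference steps
-sample = torch.randn(1, 3, model.config.sample_size, model.config.sample_size)
-
-for t in scheduler.timesteps:
-    with torch.no_grad():
-        noise_pred = model(sample, t).sample  # model prediction of the noise
-    output = scheduler.step(noise_pred, t, sample)  # returns a SchedulerOutput
-    sample = output.prev_sample  # denoised sample fed into the next step
-```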
- -[[autodoc]] schedulers.scheduling_utils.SchedulerOutput - -### KarrasDiffusionSchedulers - -`KarrasDiffusionSchedulers` encompasses the main generalization of schedulers in Diffusers. The schedulers in this class are distinguished, at a high level, by their noise sampling strategy; the type of network and scaling; and finally the training strategy or how the loss is weighed. - -The different schedulers, depending on the type of ODE solver, fall into the above taxonomy and provide a good abstraction for the design of the main schedulers implemented in Diffusers. The schedulers in this class are given below: - -[[autodoc]] schedulers.scheduling_utils.KarrasDiffusionSchedulers diff --git a/spaces/Andy1621/uniformer_image_detection/configs/_base_/models/faster_rcnn_r50_caffe_c4.py b/spaces/Andy1621/uniformer_image_detection/configs/_base_/models/faster_rcnn_r50_caffe_c4.py deleted file mode 100644 index 6e18f71b31b9fb85a6ca7a6b05ff4d2313951750..0000000000000000000000000000000000000000 --- a/spaces/Andy1621/uniformer_image_detection/configs/_base_/models/faster_rcnn_r50_caffe_c4.py +++ /dev/null @@ -1,112 +0,0 @@ -# model settings -norm_cfg = dict(type='BN', requires_grad=False) -model = dict( - type='FasterRCNN', - pretrained='open-mmlab://detectron2/resnet50_caffe', - backbone=dict( - type='ResNet', - depth=50, - num_stages=3, - strides=(1, 2, 2), - dilations=(1, 1, 1), - out_indices=(2, ), - frozen_stages=1, - norm_cfg=norm_cfg, - norm_eval=True, - style='caffe'), - rpn_head=dict( - type='RPNHead', - in_channels=1024, - feat_channels=1024, - anchor_generator=dict( - type='AnchorGenerator', - scales=[2, 4, 8, 16, 32], - ratios=[0.5, 1.0, 2.0], - strides=[16]), - bbox_coder=dict( - type='DeltaXYWHBBoxCoder', - target_means=[.0, .0, .0, .0], - target_stds=[1.0, 1.0, 1.0, 1.0]), - loss_cls=dict( - type='CrossEntropyLoss', use_sigmoid=True, loss_weight=1.0), - loss_bbox=dict(type='L1Loss', loss_weight=1.0)), - roi_head=dict( - type='StandardRoIHead', - shared_head=dict( - type='ResLayer', - depth=50, - stage=3, - stride=2, - dilation=1, - style='caffe', - norm_cfg=norm_cfg, - norm_eval=True), - bbox_roi_extractor=dict( - type='SingleRoIExtractor', - roi_layer=dict(type='RoIAlign', output_size=14, sampling_ratio=0), - out_channels=1024, - featmap_strides=[16]), - bbox_head=dict( - type='BBoxHead', - with_avg_pool=True, - roi_feat_size=7, - in_channels=2048, - num_classes=80, - bbox_coder=dict( - type='DeltaXYWHBBoxCoder', - target_means=[0., 0., 0., 0.], - target_stds=[0.1, 0.1, 0.2, 0.2]), - reg_class_agnostic=False, - loss_cls=dict( - type='CrossEntropyLoss', use_sigmoid=False, loss_weight=1.0), - loss_bbox=dict(type='L1Loss', loss_weight=1.0))), - # model training and testing settings - train_cfg=dict( - rpn=dict( - assigner=dict( - type='MaxIoUAssigner', - pos_iou_thr=0.7, - neg_iou_thr=0.3, - min_pos_iou=0.3, - match_low_quality=True, - ignore_iof_thr=-1), - sampler=dict( - type='RandomSampler', - num=256, - pos_fraction=0.5, - neg_pos_ub=-1, - add_gt_as_proposals=False), - allowed_border=0, - pos_weight=-1, - debug=False), - rpn_proposal=dict( - nms_pre=12000, - max_per_img=2000, - nms=dict(type='nms', iou_threshold=0.7), - min_bbox_size=0), - rcnn=dict( - assigner=dict( - type='MaxIoUAssigner', - pos_iou_thr=0.5, - neg_iou_thr=0.5, - min_pos_iou=0.5, - match_low_quality=False, - ignore_iof_thr=-1), - sampler=dict( - type='RandomSampler', - num=512, - pos_fraction=0.25, - neg_pos_ub=-1, - add_gt_as_proposals=True), - pos_weight=-1, - debug=False)), - test_cfg=dict( - rpn=dict( - 
nms_pre=6000,
-            max_per_img=1000,
-            nms=dict(type='nms', iou_threshold=0.7),
-            min_bbox_size=0),
-        rcnn=dict(
-            score_thr=0.05,
-            nms=dict(type='nms', iou_threshold=0.5),
-            max_per_img=100))) diff --git a/spaces/AnishKumbhar/ChatBot/text-generation-webui-main/extensions/superboogav2/data_processor.py b/spaces/AnishKumbhar/ChatBot/text-generation-webui-main/extensions/superboogav2/data_processor.py
deleted file mode 100644
index f019f427fe43ae6169be835679a6d07e938a2753..0000000000000000000000000000000000000000
--- a/spaces/AnishKumbhar/ChatBot/text-generation-webui-main/extensions/superboogav2/data_processor.py
+++ /dev/null
@@ -1,209 +0,0 @@
-"""
-This module is responsible for processing the corpus and feeding it into chromaDB. It will receive a corpus of text.
-It will then split it into chunks of specified length. For each of those chunks, it will append surrounding context.
-It will only include full words.
-"""
-
-import re
-import bisect
-
-import extensions.superboogav2.parameters as parameters
-
-from .data_preprocessor import TextPreprocessorBuilder, TextSummarizer
-from .chromadb import ChromaCollector
-
-def preprocess_text_no_summary(text) -> str:
-    builder = TextPreprocessorBuilder(text)
-    if parameters.should_to_lower():
-        builder.to_lower()
-
-    if parameters.should_remove_punctuation():
-        builder.remove_punctuation()
-
-    if parameters.should_remove_specific_pos():
-        builder.remove_specific_pos()
-
-    if parameters.should_remove_stopwords():
-        builder.remove_stopwords()
-
-    if parameters.should_lemmatize():
-        builder.lemmatize()
-
-    if parameters.should_merge_spaces():
-        builder.merge_spaces()
-
-    if parameters.should_strip():
-        builder.strip()
-
-    if parameters.get_num_conversion_strategy():
-        if parameters.get_num_conversion_strategy() == parameters.NUM_TO_WORD_METHOD:
-            builder.num_to_word(parameters.get_min_num_length())
-        elif parameters.get_num_conversion_strategy() == parameters.NUM_TO_CHAR_METHOD:
-            builder.num_to_char(parameters.get_min_num_length())
-        elif parameters.get_num_conversion_strategy() == parameters.NUM_TO_CHAR_LONG_METHOD:
-            builder.num_to_char_long(parameters.get_min_num_length())
-
-    return builder.build()
-
-
-def preprocess_text(text) -> list[str]:
-    important_sentences = TextSummarizer.process_long_text(text, parameters.get_min_num_sentences())
-    return [preprocess_text_no_summary(sent) for sent in important_sentences]
-
-
-def _create_chunks_with_context(corpus, chunk_len, context_left, context_right):
-    """
-    This function takes a corpus of text and splits it into chunks of a specified length,
-    then adds a specified amount of context to each chunk. The context is added by first
-    going backwards from the start of the chunk and then going forwards from the end of the
-    chunk, ensuring that the context includes only whole words and that the total context length
-    does not exceed the specified limit. This function uses binary search for efficiency.
-
-    Returns:
-        chunks (list of str): The chunks of text.
-        chunks_with_context (list of str): The chunks of text with added context.
-        chunk_with_context_start_indices (list of int): The starting indices of each chunk with context in the corpus.
- """ - words = re.split('(\\s+)', corpus) - word_start_indices = [0] - current_index = 0 - - for word in words: - current_index += len(word) - word_start_indices.append(current_index) - - chunks, chunk_lengths, chunk_start_indices, chunk_with_context_start_indices = [], [], [], [] - current_length = 0 - current_index = 0 - chunk = [] - - for word in words: - if current_length + len(word) > chunk_len: - chunks.append(''.join(chunk)) - chunk_lengths.append(current_length) - chunk_start_indices.append(current_index - current_length) - chunk = [word] - current_length = len(word) - else: - chunk.append(word) - current_length += len(word) - current_index += len(word) - - if chunk: - chunks.append(''.join(chunk)) - chunk_lengths.append(current_length) - chunk_start_indices.append(current_index - current_length) - - chunks_with_context = [] - for start_index, chunk_length in zip(chunk_start_indices, chunk_lengths): - context_start_index = bisect.bisect_right(word_start_indices, start_index - context_left) - context_end_index = bisect.bisect_left(word_start_indices, start_index + chunk_length + context_right) - - # Combine all the words in the context range (before, chunk, and after) - chunk_with_context = ''.join(words[context_start_index:context_end_index]) - chunks_with_context.append(chunk_with_context) - - # Determine the start index of the chunk with context - chunk_with_context_start_index = word_start_indices[context_start_index] - chunk_with_context_start_indices.append(chunk_with_context_start_index) - - return chunks, chunks_with_context, chunk_with_context_start_indices - - -def _clear_chunks(data_chunks, data_chunks_with_context, data_chunk_starting_indices): - distinct_data_chunks = [] - distinct_data_chunks_with_context = [] - distinct_data_chunk_starting_indices = [] - - seen_chunks = dict() - - for chunk, context, index in zip(data_chunks, data_chunks_with_context, data_chunk_starting_indices): - # Skip the chunk if it does not contain any alphanumeric characters - if not any(char.isalnum() for char in chunk): - continue - - seen_chunk_start = seen_chunks.get(chunk) - if seen_chunk_start: - # If we've already seen this exact chunk, and the context around it it very close to the seen chunk, then skip it. 
-            if abs(seen_chunk_start - index) < parameters.get_delta_start():
-                continue
-
-        distinct_data_chunks.append(chunk)
-        distinct_data_chunks_with_context.append(context)
-        distinct_data_chunk_starting_indices.append(index)
-
-        seen_chunks[chunk] = index
-
-    return distinct_data_chunks, distinct_data_chunks_with_context, distinct_data_chunk_starting_indices
-
-
-def process_and_add_to_collector(corpus: str, collector: ChromaCollector, clear_collector_before_adding: bool, metadata: dict):
-    # Defining variables
-    chunk_lens = [int(x.strip()) for x in parameters.get_chunk_len().split(',')]
-    context_len = [int(x.strip()) for x in parameters.get_context_len().split(',')]
-    if len(context_len) >= 3:
-        raise ValueError(f"Context len has too many values: {len(context_len)}")
-    if len(context_len) == 2:
-        context_left = context_len[0]
-        context_right = context_len[1]
-    else:
-        context_left = context_right = context_len[0]
-
-    data_chunks = []
-    data_chunks_with_context = []
-    data_chunk_starting_indices = []
-
-    # Handling chunk_regex
-    if parameters.get_chunk_regex():
-        if parameters.get_chunk_separator():
-            cumulative_length = 0  # This variable will store the length of the processed corpus
-            sections = corpus.split(parameters.get_chunk_separator())
-            for section in sections:
-                special_chunks = list(re.finditer(parameters.get_chunk_regex(), section))
-                for match in special_chunks:
-                    chunk = match.group(0)
-                    start_index = match.start()
-                    end_index = start_index + len(chunk)
-                    context = section[max(0, start_index - context_left):min(len(section), end_index + context_right)]
-                    data_chunks.append(chunk)
-                    data_chunks_with_context.append(context)
-                    data_chunk_starting_indices.append(cumulative_length + max(0, start_index - context_left))
-                cumulative_length += len(section) + len(parameters.get_chunk_separator())  # Update the length of the processed corpus
-        else:
-            special_chunks = list(re.finditer(parameters.get_chunk_regex(), corpus))
-            for match in special_chunks:
-                chunk = match.group(0)
-                start_index = match.start()
-                end_index = start_index + len(chunk)
-                context = corpus[max(0, start_index - context_left):min(len(corpus), end_index + context_right)]
-                data_chunks.append(chunk)
-                data_chunks_with_context.append(context)
-                data_chunk_starting_indices.append(max(0, start_index - context_left))
-
-    for chunk_len in chunk_lens:
-        # Breaking the data into chunks and adding those to the db
-        if parameters.get_chunk_separator():
-            cumulative_length = 0  # This variable will store the length of the processed corpus
-            sections = corpus.split(parameters.get_chunk_separator())
-            for section in sections:
-                chunks, chunks_with_context, context_start_indices = _create_chunks_with_context(section, chunk_len, context_left, context_right)
-                context_start_indices = [cumulative_length + i for i in context_start_indices]  # Add the length of the processed corpus to each start index
-                data_chunks.extend(chunks)
-                data_chunks_with_context.extend(chunks_with_context)
-                data_chunk_starting_indices.extend(context_start_indices)
-                cumulative_length += len(section) + len(parameters.get_chunk_separator())  # Update the length of the processed corpus
-        else:
-            chunks, chunks_with_context, context_start_indices = _create_chunks_with_context(corpus, chunk_len, context_left, context_right)
-            data_chunks.extend(chunks)
-            data_chunks_with_context.extend(chunks_with_context)
-            data_chunk_starting_indices.extend(context_start_indices)
-
-    data_chunks = [preprocess_text_no_summary(chunk) for chunk in data_chunks]
-
-    data_chunks,
data_chunks_with_context, data_chunk_starting_indices = _clear_chunks( - data_chunks, data_chunks_with_context, data_chunk_starting_indices - ) - - if clear_collector_before_adding: - collector.clear() - collector.add(data_chunks, data_chunks_with_context, data_chunk_starting_indices, [metadata]*len(data_chunks) if metadata is not None else None) \ No newline at end of file diff --git a/spaces/Archan/ArXivAudio/README.md b/spaces/Archan/ArXivAudio/README.md deleted file mode 100644 index d2eacb6a8f9a3d48d7fe7f0e03def3833855704f..0000000000000000000000000000000000000000 --- a/spaces/Archan/ArXivAudio/README.md +++ /dev/null @@ -1,13 +0,0 @@ ---- -title: ArXiv Audio -emoji: 🖨️ -colorFrom: cyan -colorTo: red -sdk: streamlit -sdk_version: 1.10.0 -app_file: app.py -pinned: false -license: apache-2.0 ---- - -Check out the configuration reference at https://huggingface.co/docs/hub/spaces-config-reference diff --git a/spaces/Ataturk-Chatbot/HuggingFaceChat/venv/lib/python3.11/site-packages/pip/_internal/utils/filesystem.py b/spaces/Ataturk-Chatbot/HuggingFaceChat/venv/lib/python3.11/site-packages/pip/_internal/utils/filesystem.py deleted file mode 100644 index 83c2df75b963e5866b63aaf0f4446a8ca61aebce..0000000000000000000000000000000000000000 --- a/spaces/Ataturk-Chatbot/HuggingFaceChat/venv/lib/python3.11/site-packages/pip/_internal/utils/filesystem.py +++ /dev/null @@ -1,153 +0,0 @@ -import fnmatch -import os -import os.path -import random -import sys -from contextlib import contextmanager -from tempfile import NamedTemporaryFile -from typing import Any, BinaryIO, Generator, List, Union, cast - -from pip._vendor.tenacity import retry, stop_after_delay, wait_fixed - -from pip._internal.utils.compat import get_path_uid -from pip._internal.utils.misc import format_size - - -def check_path_owner(path: str) -> bool: - # If we don't have a way to check the effective uid of this process, then - # we'll just assume that we own the directory. - if sys.platform == "win32" or not hasattr(os, "geteuid"): - return True - - assert os.path.isabs(path) - - previous = None - while path != previous: - if os.path.lexists(path): - # Check if path is writable by current user. - if os.geteuid() == 0: - # Special handling for root user in order to handle properly - # cases where users use sudo without -H flag. - try: - path_uid = get_path_uid(path) - except OSError: - return False - return path_uid == 0 - else: - return os.access(path, os.W_OK) - else: - previous, path = path, os.path.dirname(path) - return False # assume we don't own the path - - -@contextmanager -def adjacent_tmp_file(path: str, **kwargs: Any) -> Generator[BinaryIO, None, None]: - """Return a file-like object pointing to a tmp file next to path. - - The file is created securely and is ensured to be written to disk - after the context reaches its end. - - kwargs will be passed to tempfile.NamedTemporaryFile to control - the way the temporary file will be opened. 
- """ - with NamedTemporaryFile( - delete=False, - dir=os.path.dirname(path), - prefix=os.path.basename(path), - suffix=".tmp", - **kwargs, - ) as f: - result = cast(BinaryIO, f) - try: - yield result - finally: - result.flush() - os.fsync(result.fileno()) - - -# Tenacity raises RetryError by default, explicitly raise the original exception -_replace_retry = retry(reraise=True, stop=stop_after_delay(1), wait=wait_fixed(0.25)) - -replace = _replace_retry(os.replace) - - -# test_writable_dir and _test_writable_dir_win are copied from Flit, -# with the author's agreement to also place them under pip's license. -def test_writable_dir(path: str) -> bool: - """Check if a directory is writable. - - Uses os.access() on POSIX, tries creating files on Windows. - """ - # If the directory doesn't exist, find the closest parent that does. - while not os.path.isdir(path): - parent = os.path.dirname(path) - if parent == path: - break # Should never get here, but infinite loops are bad - path = parent - - if os.name == "posix": - return os.access(path, os.W_OK) - - return _test_writable_dir_win(path) - - -def _test_writable_dir_win(path: str) -> bool: - # os.access doesn't work on Windows: http://bugs.python.org/issue2528 - # and we can't use tempfile: http://bugs.python.org/issue22107 - basename = "accesstest_deleteme_fishfingers_custard_" - alphabet = "abcdefghijklmnopqrstuvwxyz0123456789" - for _ in range(10): - name = basename + "".join(random.choice(alphabet) for _ in range(6)) - file = os.path.join(path, name) - try: - fd = os.open(file, os.O_RDWR | os.O_CREAT | os.O_EXCL) - except FileExistsError: - pass - except PermissionError: - # This could be because there's a directory with the same name. - # But it's highly unlikely there's a directory called that, - # so we'll assume it's because the parent dir is not writable. - # This could as well be because the parent dir is not readable, - # due to non-privileged user access. - return False - else: - os.close(fd) - os.unlink(file) - return True - - # This should never be reached - raise OSError("Unexpected condition testing for writable directory") - - -def find_files(path: str, pattern: str) -> List[str]: - """Returns a list of absolute paths of files beneath path, recursively, - with filenames which match the UNIX-style shell glob pattern.""" - result: List[str] = [] - for root, _, files in os.walk(path): - matches = fnmatch.filter(files, pattern) - result.extend(os.path.join(root, f) for f in matches) - return result - - -def file_size(path: str) -> Union[int, float]: - # If it's a symlink, return 0. 
- if os.path.islink(path): - return 0 - return os.path.getsize(path) - - -def format_file_size(path: str) -> str: - return format_size(file_size(path)) - - -def directory_size(path: str) -> Union[int, float]: - size = 0.0 - for root, _dirs, files in os.walk(path): - for filename in files: - file_path = os.path.join(root, filename) - size += file_size(file_path) - return size - - -def format_directory_size(path: str) -> str: - return format_size(directory_size(path)) diff --git a/spaces/Ataturk-Chatbot/HuggingFaceChat/venv/lib/python3.11/site-packages/pip/_vendor/pygments/formatters/terminal.py b/spaces/Ataturk-Chatbot/HuggingFaceChat/venv/lib/python3.11/site-packages/pip/_vendor/pygments/formatters/terminal.py deleted file mode 100644 index e0bda16a236bfcf2c17068f2ff0cb8551830244a..0000000000000000000000000000000000000000 --- a/spaces/Ataturk-Chatbot/HuggingFaceChat/venv/lib/python3.11/site-packages/pip/_vendor/pygments/formatters/terminal.py +++ /dev/null @@ -1,127 +0,0 @@ -""" - pygments.formatters.terminal - ~~~~~~~~~~~~~~~~~~~~~~~~~~~~ - - Formatter for terminal output with ANSI sequences. - - :copyright: Copyright 2006-2022 by the Pygments team, see AUTHORS. - :license: BSD, see LICENSE for details. -""" - -from pip._vendor.pygments.formatter import Formatter -from pip._vendor.pygments.token import Keyword, Name, Comment, String, Error, \ - Number, Operator, Generic, Token, Whitespace -from pip._vendor.pygments.console import ansiformat -from pip._vendor.pygments.util import get_choice_opt - - -__all__ = ['TerminalFormatter'] - - -#: Map token types to a tuple of color values for light and dark -#: backgrounds. -TERMINAL_COLORS = { - Token: ('', ''), - - Whitespace: ('gray', 'brightblack'), - Comment: ('gray', 'brightblack'), - Comment.Preproc: ('cyan', 'brightcyan'), - Keyword: ('blue', 'brightblue'), - Keyword.Type: ('cyan', 'brightcyan'), - Operator.Word: ('magenta', 'brightmagenta'), - Name.Builtin: ('cyan', 'brightcyan'), - Name.Function: ('green', 'brightgreen'), - Name.Namespace: ('_cyan_', '_brightcyan_'), - Name.Class: ('_green_', '_brightgreen_'), - Name.Exception: ('cyan', 'brightcyan'), - Name.Decorator: ('brightblack', 'gray'), - Name.Variable: ('red', 'brightred'), - Name.Constant: ('red', 'brightred'), - Name.Attribute: ('cyan', 'brightcyan'), - Name.Tag: ('brightblue', 'brightblue'), - String: ('yellow', 'yellow'), - Number: ('blue', 'brightblue'), - - Generic.Deleted: ('brightred', 'brightred'), - Generic.Inserted: ('green', 'brightgreen'), - Generic.Heading: ('**', '**'), - Generic.Subheading: ('*magenta*', '*brightmagenta*'), - Generic.Prompt: ('**', '**'), - Generic.Error: ('brightred', 'brightred'), - - Error: ('_brightred_', '_brightred_'), -} - - -class TerminalFormatter(Formatter): - r""" - Format tokens with ANSI color sequences, for output in a text console. - Color sequences are terminated at newlines, so that paging the output - works correctly. - - The `get_style_defs()` method doesn't do anything special since there is - no support for common styles. - - Options accepted: - - `bg` - Set to ``"light"`` or ``"dark"`` depending on the terminal's background - (default: ``"light"``). - - `colorscheme` - A dictionary mapping token types to (lightbg, darkbg) color names or - ``None`` (default: ``None`` = use builtin colorscheme). - - `linenos` - Set to ``True`` to have line numbers on the terminal output as well - (default: ``False`` = no line numbers). 
- """ - name = 'Terminal' - aliases = ['terminal', 'console'] - filenames = [] - - def __init__(self, **options): - Formatter.__init__(self, **options) - self.darkbg = get_choice_opt(options, 'bg', - ['light', 'dark'], 'light') == 'dark' - self.colorscheme = options.get('colorscheme', None) or TERMINAL_COLORS - self.linenos = options.get('linenos', False) - self._lineno = 0 - - def format(self, tokensource, outfile): - return Formatter.format(self, tokensource, outfile) - - def _write_lineno(self, outfile): - self._lineno += 1 - outfile.write("%s%04d: " % (self._lineno != 1 and '\n' or '', self._lineno)) - - def _get_color(self, ttype): - # self.colorscheme is a dict containing usually generic types, so we - # have to walk the tree of dots. The base Token type must be a key, - # even if it's empty string, as in the default above. - colors = self.colorscheme.get(ttype) - while colors is None: - ttype = ttype.parent - colors = self.colorscheme.get(ttype) - return colors[self.darkbg] - - def format_unencoded(self, tokensource, outfile): - if self.linenos: - self._write_lineno(outfile) - - for ttype, value in tokensource: - color = self._get_color(ttype) - - for line in value.splitlines(True): - if color: - outfile.write(ansiformat(color, line.rstrip('\n'))) - else: - outfile.write(line.rstrip('\n')) - if line.endswith('\n'): - if self.linenos: - self._write_lineno(outfile) - else: - outfile.write('\n') - - if self.linenos: - outfile.write("\n") diff --git a/spaces/Ataturk-Chatbot/HuggingFaceChat/venv/lib/python3.11/site-packages/pip/_vendor/requests/models.py b/spaces/Ataturk-Chatbot/HuggingFaceChat/venv/lib/python3.11/site-packages/pip/_vendor/requests/models.py deleted file mode 100644 index 76e6f199c0042cec6500f53c062ff9ea1033e79d..0000000000000000000000000000000000000000 --- a/spaces/Ataturk-Chatbot/HuggingFaceChat/venv/lib/python3.11/site-packages/pip/_vendor/requests/models.py +++ /dev/null @@ -1,1034 +0,0 @@ -""" -requests.models -~~~~~~~~~~~~~~~ - -This module contains the primary objects that power Requests. -""" - -import datetime - -# Import encoding now, to avoid implicit import later. -# Implicit import within threads may cause LookupError when standard library is in a ZIP, -# such as in Embedded Python. See https://github.com/psf/requests/issues/3578. 
-import encodings.idna # noqa: F401 -from io import UnsupportedOperation - -from pip._vendor.urllib3.exceptions import ( - DecodeError, - LocationParseError, - ProtocolError, - ReadTimeoutError, - SSLError, -) -from pip._vendor.urllib3.fields import RequestField -from pip._vendor.urllib3.filepost import encode_multipart_formdata -from pip._vendor.urllib3.util import parse_url - -from ._internal_utils import to_native_string, unicode_is_ascii -from .auth import HTTPBasicAuth -from .compat import ( - Callable, - JSONDecodeError, - Mapping, - basestring, - builtin_str, - chardet, - cookielib, -) -from .compat import json as complexjson -from .compat import urlencode, urlsplit, urlunparse -from .cookies import _copy_cookie_jar, cookiejar_from_dict, get_cookie_header -from .exceptions import ( - ChunkedEncodingError, - ConnectionError, - ContentDecodingError, - HTTPError, - InvalidJSONError, - InvalidURL, -) -from .exceptions import JSONDecodeError as RequestsJSONDecodeError -from .exceptions import MissingSchema -from .exceptions import SSLError as RequestsSSLError -from .exceptions import StreamConsumedError -from .hooks import default_hooks -from .status_codes import codes -from .structures import CaseInsensitiveDict -from .utils import ( - check_header_validity, - get_auth_from_url, - guess_filename, - guess_json_utf, - iter_slices, - parse_header_links, - requote_uri, - stream_decode_response_unicode, - super_len, - to_key_val_list, -) - -#: The set of HTTP status codes that indicate an automatically -#: processable redirect. -REDIRECT_STATI = ( - codes.moved, # 301 - codes.found, # 302 - codes.other, # 303 - codes.temporary_redirect, # 307 - codes.permanent_redirect, # 308 -) - -DEFAULT_REDIRECT_LIMIT = 30 -CONTENT_CHUNK_SIZE = 10 * 1024 -ITER_CHUNK_SIZE = 512 - - -class RequestEncodingMixin: - @property - def path_url(self): - """Build the path URL to use.""" - - url = [] - - p = urlsplit(self.url) - - path = p.path - if not path: - path = "/" - - url.append(path) - - query = p.query - if query: - url.append("?") - url.append(query) - - return "".join(url) - - @staticmethod - def _encode_params(data): - """Encode parameters in a piece of data. - - Will successfully encode parameters when passed as a dict or a list of - 2-tuples. Order is retained if data is a list of 2-tuples but arbitrary - if parameters are supplied as a dict. - """ - - if isinstance(data, (str, bytes)): - return data - elif hasattr(data, "read"): - return data - elif hasattr(data, "__iter__"): - result = [] - for k, vs in to_key_val_list(data): - if isinstance(vs, basestring) or not hasattr(vs, "__iter__"): - vs = [vs] - for v in vs: - if v is not None: - result.append( - ( - k.encode("utf-8") if isinstance(k, str) else k, - v.encode("utf-8") if isinstance(v, str) else v, - ) - ) - return urlencode(result, doseq=True) - else: - return data - - @staticmethod - def _encode_files(files, data): - """Build the body for a multipart/form-data request. - - Will successfully encode files when passed as a dict or a list of - tuples. Order is retained if data is a list of tuples but arbitrary - if parameters are supplied as a dict. - The tuples may be 2-tuples (filename, fileobj), 3-tuples (filename, fileobj, contentype) - or 4-tuples (filename, fileobj, contentype, custom_headers). 
- """ - if not files: - raise ValueError("Files must be provided.") - elif isinstance(data, basestring): - raise ValueError("Data must not be a string.") - - new_fields = [] - fields = to_key_val_list(data or {}) - files = to_key_val_list(files or {}) - - for field, val in fields: - if isinstance(val, basestring) or not hasattr(val, "__iter__"): - val = [val] - for v in val: - if v is not None: - # Don't call str() on bytestrings: in Py3 it all goes wrong. - if not isinstance(v, bytes): - v = str(v) - - new_fields.append( - ( - field.decode("utf-8") - if isinstance(field, bytes) - else field, - v.encode("utf-8") if isinstance(v, str) else v, - ) - ) - - for (k, v) in files: - # support for explicit filename - ft = None - fh = None - if isinstance(v, (tuple, list)): - if len(v) == 2: - fn, fp = v - elif len(v) == 3: - fn, fp, ft = v - else: - fn, fp, ft, fh = v - else: - fn = guess_filename(v) or k - fp = v - - if isinstance(fp, (str, bytes, bytearray)): - fdata = fp - elif hasattr(fp, "read"): - fdata = fp.read() - elif fp is None: - continue - else: - fdata = fp - - rf = RequestField(name=k, data=fdata, filename=fn, headers=fh) - rf.make_multipart(content_type=ft) - new_fields.append(rf) - - body, content_type = encode_multipart_formdata(new_fields) - - return body, content_type - - -class RequestHooksMixin: - def register_hook(self, event, hook): - """Properly register a hook.""" - - if event not in self.hooks: - raise ValueError(f'Unsupported event specified, with event name "{event}"') - - if isinstance(hook, Callable): - self.hooks[event].append(hook) - elif hasattr(hook, "__iter__"): - self.hooks[event].extend(h for h in hook if isinstance(h, Callable)) - - def deregister_hook(self, event, hook): - """Deregister a previously registered hook. - Returns True if the hook existed, False if not. - """ - - try: - self.hooks[event].remove(hook) - return True - except ValueError: - return False - - -class Request(RequestHooksMixin): - """A user-created :class:`Request ` object. - - Used to prepare a :class:`PreparedRequest `, which is sent to the server. - - :param method: HTTP method to use. - :param url: URL to send. - :param headers: dictionary of headers to send. - :param files: dictionary of {filename: fileobject} files to multipart upload. - :param data: the body to attach to the request. If a dictionary or - list of tuples ``[(key, value)]`` is provided, form-encoding will - take place. - :param json: json for the body to attach to the request (if files or data is not specified). - :param params: URL parameters to append to the URL. If a dictionary or - list of tuples ``[(key, value)]`` is provided, form-encoding will - take place. - :param auth: Auth handler or (user, pass) tuple. - :param cookies: dictionary or CookieJar of cookies to attach to this request. - :param hooks: dictionary of callback hooks, for internal usage. - - Usage:: - - >>> import requests - >>> req = requests.Request('GET', 'https://httpbin.org/get') - >>> req.prepare() - - """ - - def __init__( - self, - method=None, - url=None, - headers=None, - files=None, - data=None, - params=None, - auth=None, - cookies=None, - hooks=None, - json=None, - ): - - # Default empty dicts for dict params. 
-        data = [] if data is None else data
-        files = [] if files is None else files
-        headers = {} if headers is None else headers
-        params = {} if params is None else params
-        hooks = {} if hooks is None else hooks
-
-        self.hooks = default_hooks()
-        for (k, v) in list(hooks.items()):
-            self.register_hook(event=k, hook=v)
-
-        self.method = method
-        self.url = url
-        self.headers = headers
-        self.files = files
-        self.data = data
-        self.json = json
-        self.params = params
-        self.auth = auth
-        self.cookies = cookies
-
-    def __repr__(self):
-        return f"<Request [{self.method}]>"
-
-    def prepare(self):
-        """Constructs a :class:`PreparedRequest <PreparedRequest>` for transmission and returns it."""
-        p = PreparedRequest()
-        p.prepare(
-            method=self.method,
-            url=self.url,
-            headers=self.headers,
-            files=self.files,
-            data=self.data,
-            json=self.json,
-            params=self.params,
-            auth=self.auth,
-            cookies=self.cookies,
-            hooks=self.hooks,
-        )
-        return p
-
-
-class PreparedRequest(RequestEncodingMixin, RequestHooksMixin):
-    """The fully mutable :class:`PreparedRequest <PreparedRequest>` object,
-    containing the exact bytes that will be sent to the server.
-
-    Instances are generated from a :class:`Request <Request>` object, and
-    should not be instantiated manually; doing so may produce undesirable
-    effects.
-
-    Usage::
-
-      >>> import requests
-      >>> req = requests.Request('GET', 'https://httpbin.org/get')
-      >>> r = req.prepare()
-      >>> r
-      <PreparedRequest [GET]>
-
-      >>> s = requests.Session()
-      >>> s.send(r)
-      <Response [200]>
-    """
-
-    def __init__(self):
-        #: HTTP verb to send to the server.
-        self.method = None
-        #: HTTP URL to send the request to.
-        self.url = None
-        #: dictionary of HTTP headers.
-        self.headers = None
-        # The `CookieJar` used to create the Cookie header will be stored here
-        # after prepare_cookies is called
-        self._cookies = None
-        #: request body to send to the server.
-        self.body = None
-        #: dictionary of callback hooks, for internal usage.
-        self.hooks = default_hooks()
-        #: integer denoting starting position of a readable file-like body.
-        self._body_position = None
-
-    def prepare(
-        self,
-        method=None,
-        url=None,
-        headers=None,
-        files=None,
-        data=None,
-        params=None,
-        auth=None,
-        cookies=None,
-        hooks=None,
-        json=None,
-    ):
-        """Prepares the entire request with the given parameters."""
-
-        self.prepare_method(method)
-        self.prepare_url(url, params)
-        self.prepare_headers(headers)
-        self.prepare_cookies(cookies)
-        self.prepare_body(data, files, json)
-        self.prepare_auth(auth, url)
-
-        # Note that prepare_auth must be last to enable authentication schemes
-        # such as OAuth to work on a fully prepared request.
-
-        # This MUST go after prepare_auth. Authenticators could add a hook
-        self.prepare_hooks(hooks)
-
-    def __repr__(self):
-        return f"<PreparedRequest [{self.method}]>"
-
-    def copy(self):
-        p = PreparedRequest()
-        p.method = self.method
-        p.url = self.url
-        p.headers = self.headers.copy() if self.headers is not None else None
-        p._cookies = _copy_cookie_jar(self._cookies)
-        p.body = self.body
-        p.hooks = self.hooks
-        p._body_position = self._body_position
-        return p
-
-    def prepare_method(self, method):
-        """Prepares the given HTTP method."""
-        self.method = method
-        if self.method is not None:
-            self.method = to_native_string(self.method.upper())
-
-    @staticmethod
-    def _get_idna_encoded_host(host):
-        from pip._vendor import idna
-
-        try:
-            host = idna.encode(host, uts46=True).decode("utf-8")
-        except idna.IDNAError:
-            raise UnicodeError
-        return host
-
-    def prepare_url(self, url, params):
-        """Prepares the given HTTP URL."""
-        #: Accept objects that have string representations.
- #: We're unable to blindly call unicode/str functions - #: as this will include the bytestring indicator (b'') - #: on python 3.x. - #: https://github.com/psf/requests/pull/2238 - if isinstance(url, bytes): - url = url.decode("utf8") - else: - url = str(url) - - # Remove leading whitespaces from url - url = url.lstrip() - - # Don't do any URL preparation for non-HTTP schemes like `mailto`, - # `data` etc to work around exceptions from `url_parse`, which - # handles RFC 3986 only. - if ":" in url and not url.lower().startswith("http"): - self.url = url - return - - # Support for unicode domain names and paths. - try: - scheme, auth, host, port, path, query, fragment = parse_url(url) - except LocationParseError as e: - raise InvalidURL(*e.args) - - if not scheme: - raise MissingSchema( - f"Invalid URL {url!r}: No scheme supplied. " - f"Perhaps you meant https://{url}?" - ) - - if not host: - raise InvalidURL(f"Invalid URL {url!r}: No host supplied") - - # In general, we want to try IDNA encoding the hostname if the string contains - # non-ASCII characters. This allows users to automatically get the correct IDNA - # behaviour. For strings containing only ASCII characters, we need to also verify - # it doesn't start with a wildcard (*), before allowing the unencoded hostname. - if not unicode_is_ascii(host): - try: - host = self._get_idna_encoded_host(host) - except UnicodeError: - raise InvalidURL("URL has an invalid label.") - elif host.startswith(("*", ".")): - raise InvalidURL("URL has an invalid label.") - - # Carefully reconstruct the network location - netloc = auth or "" - if netloc: - netloc += "@" - netloc += host - if port: - netloc += f":{port}" - - # Bare domains aren't valid URLs. - if not path: - path = "/" - - if isinstance(params, (str, bytes)): - params = to_native_string(params) - - enc_params = self._encode_params(params) - if enc_params: - if query: - query = f"{query}&{enc_params}" - else: - query = enc_params - - url = requote_uri(urlunparse([scheme, netloc, path, None, query, fragment])) - self.url = url - - def prepare_headers(self, headers): - """Prepares the given HTTP headers.""" - - self.headers = CaseInsensitiveDict() - if headers: - for header in headers.items(): - # Raise exception on invalid header value. - check_header_validity(header) - name, value = header - self.headers[to_native_string(name)] = value - - def prepare_body(self, data, files, json=None): - """Prepares the given HTTP body data.""" - - # Check if file, fo, generator, iterator. - # If not, run through normal process. - - # Nottin' on you. - body = None - content_type = None - - if not data and json is not None: - # urllib3 requires a bytes-like body. Python 2's json.dumps - # provides this natively, but Python 3 gives a Unicode string. - content_type = "application/json" - - try: - body = complexjson.dumps(json, allow_nan=False) - except ValueError as ve: - raise InvalidJSONError(ve, request=self) - - if not isinstance(body, bytes): - body = body.encode("utf-8") - - is_stream = all( - [ - hasattr(data, "__iter__"), - not isinstance(data, (basestring, list, tuple, Mapping)), - ] - ) - - if is_stream: - try: - length = super_len(data) - except (TypeError, AttributeError, UnsupportedOperation): - length = None - - body = data - - if getattr(body, "tell", None) is not None: - # Record the current file position before reading. - # This will allow us to rewind a file in the event - # of a redirect. 
- try: - self._body_position = body.tell() - except OSError: - # This differentiates from None, allowing us to catch - # a failed `tell()` later when trying to rewind the body - self._body_position = object() - - if files: - raise NotImplementedError( - "Streamed bodies and files are mutually exclusive." - ) - - if length: - self.headers["Content-Length"] = builtin_str(length) - else: - self.headers["Transfer-Encoding"] = "chunked" - else: - # Multi-part file uploads. - if files: - (body, content_type) = self._encode_files(files, data) - else: - if data: - body = self._encode_params(data) - if isinstance(data, basestring) or hasattr(data, "read"): - content_type = None - else: - content_type = "application/x-www-form-urlencoded" - - self.prepare_content_length(body) - - # Add content-type if it wasn't explicitly provided. - if content_type and ("content-type" not in self.headers): - self.headers["Content-Type"] = content_type - - self.body = body - - def prepare_content_length(self, body): - """Prepare Content-Length header based on request method and body""" - if body is not None: - length = super_len(body) - if length: - # If length exists, set it. Otherwise, we fallback - # to Transfer-Encoding: chunked. - self.headers["Content-Length"] = builtin_str(length) - elif ( - self.method not in ("GET", "HEAD") - and self.headers.get("Content-Length") is None - ): - # Set Content-Length to 0 for methods that can have a body - # but don't provide one. (i.e. not GET or HEAD) - self.headers["Content-Length"] = "0" - - def prepare_auth(self, auth, url=""): - """Prepares the given HTTP auth data.""" - - # If no Auth is explicitly provided, extract it from the URL first. - if auth is None: - url_auth = get_auth_from_url(self.url) - auth = url_auth if any(url_auth) else None - - if auth: - if isinstance(auth, tuple) and len(auth) == 2: - # special-case basic HTTP auth - auth = HTTPBasicAuth(*auth) - - # Allow auth to make its changes. - r = auth(self) - - # Update self to reflect the auth changes. - self.__dict__.update(r.__dict__) - - # Recompute Content-Length - self.prepare_content_length(self.body) - - def prepare_cookies(self, cookies): - """Prepares the given HTTP cookie data. - - This function eventually generates a ``Cookie`` header from the - given cookies using cookielib. Due to cookielib's design, the header - will not be regenerated if it already exists, meaning this function - can only be called once for the life of the - :class:`PreparedRequest ` object. Any subsequent calls - to ``prepare_cookies`` will have no actual effect, unless the "Cookie" - header is removed beforehand. - """ - if isinstance(cookies, cookielib.CookieJar): - self._cookies = cookies - else: - self._cookies = cookiejar_from_dict(cookies) - - cookie_header = get_cookie_header(self._cookies, self) - if cookie_header is not None: - self.headers["Cookie"] = cookie_header - - def prepare_hooks(self, hooks): - """Prepares the given hooks.""" - # hooks can be passed as None to the prepare method and to this - # method. To prevent iterating over None, simply use an empty list - # if hooks is False-y - hooks = hooks or [] - for event in hooks: - self.register_hook(event, hooks[event]) - - -class Response: - """The :class:`Response ` object, which contains a - server's response to an HTTP request. 
- """ - - __attrs__ = [ - "_content", - "status_code", - "headers", - "url", - "history", - "encoding", - "reason", - "cookies", - "elapsed", - "request", - ] - - def __init__(self): - self._content = False - self._content_consumed = False - self._next = None - - #: Integer Code of responded HTTP Status, e.g. 404 or 200. - self.status_code = None - - #: Case-insensitive Dictionary of Response Headers. - #: For example, ``headers['content-encoding']`` will return the - #: value of a ``'Content-Encoding'`` response header. - self.headers = CaseInsensitiveDict() - - #: File-like object representation of response (for advanced usage). - #: Use of ``raw`` requires that ``stream=True`` be set on the request. - #: This requirement does not apply for use internally to Requests. - self.raw = None - - #: Final URL location of Response. - self.url = None - - #: Encoding to decode with when accessing r.text. - self.encoding = None - - #: A list of :class:`Response ` objects from - #: the history of the Request. Any redirect responses will end - #: up here. The list is sorted from the oldest to the most recent request. - self.history = [] - - #: Textual reason of responded HTTP Status, e.g. "Not Found" or "OK". - self.reason = None - - #: A CookieJar of Cookies the server sent back. - self.cookies = cookiejar_from_dict({}) - - #: The amount of time elapsed between sending the request - #: and the arrival of the response (as a timedelta). - #: This property specifically measures the time taken between sending - #: the first byte of the request and finishing parsing the headers. It - #: is therefore unaffected by consuming the response content or the - #: value of the ``stream`` keyword argument. - self.elapsed = datetime.timedelta(0) - - #: The :class:`PreparedRequest ` object to which this - #: is a response. - self.request = None - - def __enter__(self): - return self - - def __exit__(self, *args): - self.close() - - def __getstate__(self): - # Consume everything; accessing the content attribute makes - # sure the content has been fully read. - if not self._content_consumed: - self.content - - return {attr: getattr(self, attr, None) for attr in self.__attrs__} - - def __setstate__(self, state): - for name, value in state.items(): - setattr(self, name, value) - - # pickled objects do not have .raw - setattr(self, "_content_consumed", True) - setattr(self, "raw", None) - - def __repr__(self): - return f"" - - def __bool__(self): - """Returns True if :attr:`status_code` is less than 400. - - This attribute checks if the status code of the response is between - 400 and 600 to see if there was a client error or a server error. If - the status code, is between 200 and 400, this will return True. This - is **not** a check to see if the response code is ``200 OK``. - """ - return self.ok - - def __nonzero__(self): - """Returns True if :attr:`status_code` is less than 400. - - This attribute checks if the status code of the response is between - 400 and 600 to see if there was a client error or a server error. If - the status code, is between 200 and 400, this will return True. This - is **not** a check to see if the response code is ``200 OK``. - """ - return self.ok - - def __iter__(self): - """Allows you to use a response as an iterator.""" - return self.iter_content(128) - - @property - def ok(self): - """Returns True if :attr:`status_code` is less than 400, False if not. 
- - This attribute checks if the status code of the response is between - 400 and 600 to see if there was a client error or a server error. If - the status code is between 200 and 400, this will return True. This - is **not** a check to see if the response code is ``200 OK``. - """ - try: - self.raise_for_status() - except HTTPError: - return False - return True - - @property - def is_redirect(self): - """True if this Response is a well-formed HTTP redirect that could have - been processed automatically (by :meth:`Session.resolve_redirects`). - """ - return "location" in self.headers and self.status_code in REDIRECT_STATI - - @property - def is_permanent_redirect(self): - """True if this Response one of the permanent versions of redirect.""" - return "location" in self.headers and self.status_code in ( - codes.moved_permanently, - codes.permanent_redirect, - ) - - @property - def next(self): - """Returns a PreparedRequest for the next request in a redirect chain, if there is one.""" - return self._next - - @property - def apparent_encoding(self): - """The apparent encoding, provided by the charset_normalizer or chardet libraries.""" - return chardet.detect(self.content)["encoding"] - - def iter_content(self, chunk_size=1, decode_unicode=False): - """Iterates over the response data. When stream=True is set on the - request, this avoids reading the content at once into memory for - large responses. The chunk size is the number of bytes it should - read into memory. This is not necessarily the length of each item - returned as decoding can take place. - - chunk_size must be of type int or None. A value of None will - function differently depending on the value of `stream`. - stream=True will read data as it arrives in whatever size the - chunks are received. If stream=False, data is returned as - a single chunk. - - If decode_unicode is True, content will be decoded using the best - available encoding based on the response. - """ - - def generate(): - # Special case for urllib3. - if hasattr(self.raw, "stream"): - try: - yield from self.raw.stream(chunk_size, decode_content=True) - except ProtocolError as e: - raise ChunkedEncodingError(e) - except DecodeError as e: - raise ContentDecodingError(e) - except ReadTimeoutError as e: - raise ConnectionError(e) - except SSLError as e: - raise RequestsSSLError(e) - else: - # Standard file-like object. - while True: - chunk = self.raw.read(chunk_size) - if not chunk: - break - yield chunk - - self._content_consumed = True - - if self._content_consumed and isinstance(self._content, bool): - raise StreamConsumedError() - elif chunk_size is not None and not isinstance(chunk_size, int): - raise TypeError( - f"chunk_size must be an int, it is instead a {type(chunk_size)}." - ) - # simulate reading small chunks of the content - reused_chunks = iter_slices(self._content, chunk_size) - - stream_chunks = generate() - - chunks = reused_chunks if self._content_consumed else stream_chunks - - if decode_unicode: - chunks = stream_decode_response_unicode(chunks, self) - - return chunks - - def iter_lines( - self, chunk_size=ITER_CHUNK_SIZE, decode_unicode=False, delimiter=None - ): - """Iterates over the response data, one line at a time. When - stream=True is set on the request, this avoids reading the - content at once into memory for large responses. - - .. note:: This method is not reentrant safe. 
- """ - - pending = None - - for chunk in self.iter_content( - chunk_size=chunk_size, decode_unicode=decode_unicode - ): - - if pending is not None: - chunk = pending + chunk - - if delimiter: - lines = chunk.split(delimiter) - else: - lines = chunk.splitlines() - - if lines and lines[-1] and chunk and lines[-1][-1] == chunk[-1]: - pending = lines.pop() - else: - pending = None - - yield from lines - - if pending is not None: - yield pending - - @property - def content(self): - """Content of the response, in bytes.""" - - if self._content is False: - # Read the contents. - if self._content_consumed: - raise RuntimeError("The content for this response was already consumed") - - if self.status_code == 0 or self.raw is None: - self._content = None - else: - self._content = b"".join(self.iter_content(CONTENT_CHUNK_SIZE)) or b"" - - self._content_consumed = True - # don't need to release the connection; that's been handled by urllib3 - # since we exhausted the data. - return self._content - - @property - def text(self): - """Content of the response, in unicode. - - If Response.encoding is None, encoding will be guessed using - ``charset_normalizer`` or ``chardet``. - - The encoding of the response content is determined based solely on HTTP - headers, following RFC 2616 to the letter. If you can take advantage of - non-HTTP knowledge to make a better guess at the encoding, you should - set ``r.encoding`` appropriately before accessing this property. - """ - - # Try charset from content-type - content = None - encoding = self.encoding - - if not self.content: - return "" - - # Fallback to auto-detected encoding. - if self.encoding is None: - encoding = self.apparent_encoding - - # Decode unicode from given encoding. - try: - content = str(self.content, encoding, errors="replace") - except (LookupError, TypeError): - # A LookupError is raised if the encoding was not found which could - # indicate a misspelling or similar mistake. - # - # A TypeError can be raised if encoding is None - # - # So we try blindly encoding. - content = str(self.content, errors="replace") - - return content - - def json(self, **kwargs): - r"""Returns the json-encoded content of a response, if any. - - :param \*\*kwargs: Optional arguments that ``json.loads`` takes. - :raises requests.exceptions.JSONDecodeError: If the response body does not - contain valid json. - """ - - if not self.encoding and self.content and len(self.content) > 3: - # No encoding set. JSON RFC 4627 section 3 states we should expect - # UTF-8, -16 or -32. Detect which one to use; If the detection or - # decoding fails, fall back to `self.text` (using charset_normalizer to make - # a best guess). - encoding = guess_json_utf(self.content) - if encoding is not None: - try: - return complexjson.loads(self.content.decode(encoding), **kwargs) - except UnicodeDecodeError: - # Wrong UTF codec detected; usually because it's not UTF-8 - # but some other 8-bit codec. This is an RFC violation, - # and the server didn't bother to tell us what codec *was* - # used. 
- pass - except JSONDecodeError as e: - raise RequestsJSONDecodeError(e.msg, e.doc, e.pos) - - try: - return complexjson.loads(self.text, **kwargs) - except JSONDecodeError as e: - # Catch JSON-related errors and raise as requests.JSONDecodeError - # This aliases json.JSONDecodeError and simplejson.JSONDecodeError - raise RequestsJSONDecodeError(e.msg, e.doc, e.pos) - - @property - def links(self): - """Returns the parsed header links of the response, if any.""" - - header = self.headers.get("link") - - resolved_links = {} - - if header: - links = parse_header_links(header) - - for link in links: - key = link.get("rel") or link.get("url") - resolved_links[key] = link - - return resolved_links - - def raise_for_status(self): - """Raises :class:`HTTPError`, if one occurred.""" - - http_error_msg = "" - if isinstance(self.reason, bytes): - # We attempt to decode utf-8 first because some servers - # choose to localize their reason strings. If the string - # isn't utf-8, we fall back to iso-8859-1 for all other - # encodings. (See PR #3538) - try: - reason = self.reason.decode("utf-8") - except UnicodeDecodeError: - reason = self.reason.decode("iso-8859-1") - else: - reason = self.reason - - if 400 <= self.status_code < 500: - http_error_msg = ( - f"{self.status_code} Client Error: {reason} for url: {self.url}" - ) - - elif 500 <= self.status_code < 600: - http_error_msg = ( - f"{self.status_code} Server Error: {reason} for url: {self.url}" - ) - - if http_error_msg: - raise HTTPError(http_error_msg, response=self) - - def close(self): - """Releases the connection back to the pool. Once this method has been - called the underlying ``raw`` object must not be accessed again. - - *Note: Should not normally need to be called explicitly.* - """ - if not self._content_consumed: - self.raw.close() - - release_conn = getattr(self.raw, "release_conn", None) - if release_conn is not None: - release_conn() diff --git a/spaces/Ataturk-Chatbot/HuggingFaceChat/venv/lib/python3.11/site-packages/pip/_vendor/tenacity/retry.py b/spaces/Ataturk-Chatbot/HuggingFaceChat/venv/lib/python3.11/site-packages/pip/_vendor/tenacity/retry.py deleted file mode 100644 index 38988739d6406aeb5e3be903c0ea6fb82752f328..0000000000000000000000000000000000000000 --- a/spaces/Ataturk-Chatbot/HuggingFaceChat/venv/lib/python3.11/site-packages/pip/_vendor/tenacity/retry.py +++ /dev/null @@ -1,272 +0,0 @@ -# Copyright 2016–2021 Julien Danjou -# Copyright 2016 Joshua Harlow -# Copyright 2013-2014 Ray Holder -# -# Licensed under the Apache License, Version 2.0 (the "License"); -# you may not use this file except in compliance with the License. -# You may obtain a copy of the License at -# -# http://www.apache.org/licenses/LICENSE-2.0 -# -# Unless required by applicable law or agreed to in writing, software -# distributed under the License is distributed on an "AS IS" BASIS, -# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. -# See the License for the specific language governing permissions and -# limitations under the License. 
- -import abc -import re -import typing - -if typing.TYPE_CHECKING: - from pip._vendor.tenacity import RetryCallState - - -class retry_base(abc.ABC): - """Abstract base class for retry strategies.""" - - @abc.abstractmethod - def __call__(self, retry_state: "RetryCallState") -> bool: - pass - - def __and__(self, other: "retry_base") -> "retry_all": - return retry_all(self, other) - - def __or__(self, other: "retry_base") -> "retry_any": - return retry_any(self, other) - - -RetryBaseT = typing.Union[retry_base, typing.Callable[["RetryCallState"], bool]] - - -class _retry_never(retry_base): - """Retry strategy that never rejects any result.""" - - def __call__(self, retry_state: "RetryCallState") -> bool: - return False - - -retry_never = _retry_never() - - -class _retry_always(retry_base): - """Retry strategy that always rejects any result.""" - - def __call__(self, retry_state: "RetryCallState") -> bool: - return True - - -retry_always = _retry_always() - - -class retry_if_exception(retry_base): - """Retry strategy that retries if an exception verifies a predicate.""" - - def __init__(self, predicate: typing.Callable[[BaseException], bool]) -> None: - self.predicate = predicate - - def __call__(self, retry_state: "RetryCallState") -> bool: - if retry_state.outcome is None: - raise RuntimeError("__call__() called before outcome was set") - - if retry_state.outcome.failed: - exception = retry_state.outcome.exception() - if exception is None: - raise RuntimeError("outcome failed but the exception is None") - return self.predicate(exception) - else: - return False - - -class retry_if_exception_type(retry_if_exception): - """Retries if an exception has been raised of one or more types.""" - - def __init__( - self, - exception_types: typing.Union[ - typing.Type[BaseException], - typing.Tuple[typing.Type[BaseException], ...], - ] = Exception, - ) -> None: - self.exception_types = exception_types - super().__init__(lambda e: isinstance(e, exception_types)) - - -class retry_if_not_exception_type(retry_if_exception): - """Retries except an exception has been raised of one or more types.""" - - def __init__( - self, - exception_types: typing.Union[ - typing.Type[BaseException], - typing.Tuple[typing.Type[BaseException], ...], - ] = Exception, - ) -> None: - self.exception_types = exception_types - super().__init__(lambda e: not isinstance(e, exception_types)) - - -class retry_unless_exception_type(retry_if_exception): - """Retries until an exception is raised of one or more types.""" - - def __init__( - self, - exception_types: typing.Union[ - typing.Type[BaseException], - typing.Tuple[typing.Type[BaseException], ...], - ] = Exception, - ) -> None: - self.exception_types = exception_types - super().__init__(lambda e: not isinstance(e, exception_types)) - - def __call__(self, retry_state: "RetryCallState") -> bool: - if retry_state.outcome is None: - raise RuntimeError("__call__() called before outcome was set") - - # always retry if no exception was raised - if not retry_state.outcome.failed: - return True - - exception = retry_state.outcome.exception() - if exception is None: - raise RuntimeError("outcome failed but the exception is None") - return self.predicate(exception) - - -class retry_if_exception_cause_type(retry_base): - """Retries if any of the causes of the raised exception is of one or more types. 
- - The check on the type of the cause of the exception is done recursively (until finding - an exception in the chain that has no `__cause__`) - """ - - def __init__( - self, - exception_types: typing.Union[ - typing.Type[BaseException], - typing.Tuple[typing.Type[BaseException], ...], - ] = Exception, - ) -> None: - self.exception_cause_types = exception_types - - def __call__(self, retry_state: "RetryCallState") -> bool: - if retry_state.outcome is None: - raise RuntimeError("__call__ called before outcome was set") - - if retry_state.outcome.failed: - exc = retry_state.outcome.exception() - while exc is not None: - if isinstance(exc.__cause__, self.exception_cause_types): - return True - exc = exc.__cause__ - - return False - - -class retry_if_result(retry_base): - """Retries if the result verifies a predicate.""" - - def __init__(self, predicate: typing.Callable[[typing.Any], bool]) -> None: - self.predicate = predicate - - def __call__(self, retry_state: "RetryCallState") -> bool: - if retry_state.outcome is None: - raise RuntimeError("__call__() called before outcome was set") - - if not retry_state.outcome.failed: - return self.predicate(retry_state.outcome.result()) - else: - return False - - -class retry_if_not_result(retry_base): - """Retries if the result refutes a predicate.""" - - def __init__(self, predicate: typing.Callable[[typing.Any], bool]) -> None: - self.predicate = predicate - - def __call__(self, retry_state: "RetryCallState") -> bool: - if retry_state.outcome is None: - raise RuntimeError("__call__() called before outcome was set") - - if not retry_state.outcome.failed: - return not self.predicate(retry_state.outcome.result()) - else: - return False - - -class retry_if_exception_message(retry_if_exception): - """Retries if an exception message equals or matches.""" - - def __init__( - self, - message: typing.Optional[str] = None, - match: typing.Optional[str] = None, - ) -> None: - if message and match: - raise TypeError(f"{self.__class__.__name__}() takes either 'message' or 'match', not both") - - # set predicate - if message: - - def message_fnc(exception: BaseException) -> bool: - return message == str(exception) - - predicate = message_fnc - elif match: - prog = re.compile(match) - - def match_fnc(exception: BaseException) -> bool: - return bool(prog.match(str(exception))) - - predicate = match_fnc - else: - raise TypeError(f"{self.__class__.__name__}() missing 1 required argument 'message' or 'match'") - - super().__init__(predicate) - - -class retry_if_not_exception_message(retry_if_exception_message): - """Retries until an exception message equals or matches.""" - - def __init__( - self, - message: typing.Optional[str] = None, - match: typing.Optional[str] = None, - ) -> None: - super().__init__(message, match) - # invert predicate - if_predicate = self.predicate - self.predicate = lambda *args_, **kwargs_: not if_predicate(*args_, **kwargs_) - - def __call__(self, retry_state: "RetryCallState") -> bool: - if retry_state.outcome is None: - raise RuntimeError("__call__() called before outcome was set") - - if not retry_state.outcome.failed: - return True - - exception = retry_state.outcome.exception() - if exception is None: - raise RuntimeError("outcome failed but the exception is None") - return self.predicate(exception) - - -class retry_any(retry_base): - """Retries if any of the retries condition is valid.""" - - def __init__(self, *retries: retry_base) -> None: - self.retries = retries - - def __call__(self, retry_state: "RetryCallState") -> bool: - 
return any(r(retry_state) for r in self.retries) - - -class retry_all(retry_base): - """Retries if all the retries condition are valid.""" - - def __init__(self, *retries: retry_base) -> None: - self.retries = retries - - def __call__(self, retry_state: "RetryCallState") -> bool: - return all(r(retry_state) for r in self.retries) diff --git a/spaces/Ataturk-Chatbot/HuggingFaceChat/venv/lib/python3.11/site-packages/setuptools/extern/__init__.py b/spaces/Ataturk-Chatbot/HuggingFaceChat/venv/lib/python3.11/site-packages/setuptools/extern/__init__.py deleted file mode 100644 index d3a6dc99fe175507a94e3440da1f637f318add2f..0000000000000000000000000000000000000000 --- a/spaces/Ataturk-Chatbot/HuggingFaceChat/venv/lib/python3.11/site-packages/setuptools/extern/__init__.py +++ /dev/null @@ -1,76 +0,0 @@ -import importlib.util -import sys - - -class VendorImporter: - """ - A PEP 302 meta path importer for finding optionally-vendored - or otherwise naturally-installed packages from root_name. - """ - - def __init__(self, root_name, vendored_names=(), vendor_pkg=None): - self.root_name = root_name - self.vendored_names = set(vendored_names) - self.vendor_pkg = vendor_pkg or root_name.replace('extern', '_vendor') - - @property - def search_path(self): - """ - Search first the vendor package then as a natural package. - """ - yield self.vendor_pkg + '.' - yield '' - - def _module_matches_namespace(self, fullname): - """Figure out if the target module is vendored.""" - root, base, target = fullname.partition(self.root_name + '.') - return not root and any(map(target.startswith, self.vendored_names)) - - def load_module(self, fullname): - """ - Iterate over the search path to locate and load fullname. - """ - root, base, target = fullname.partition(self.root_name + '.') - for prefix in self.search_path: - try: - extant = prefix + target - __import__(extant) - mod = sys.modules[extant] - sys.modules[fullname] = mod - return mod - except ImportError: - pass - else: - raise ImportError( - "The '{target}' package is required; " - "normally this is bundled with this package so if you get " - "this warning, consult the packager of your " - "distribution.".format(**locals()) - ) - - def create_module(self, spec): - return self.load_module(spec.name) - - def exec_module(self, module): - pass - - def find_spec(self, fullname, path=None, target=None): - """Return a module spec for vendored names.""" - return ( - importlib.util.spec_from_loader(fullname, self) - if self._module_matches_namespace(fullname) else None - ) - - def install(self): - """ - Install this importer into sys.meta_path if not already present. - """ - if self not in sys.meta_path: - sys.meta_path.append(self) - - -names = ( - 'packaging', 'pyparsing', 'ordered_set', 'more_itertools', 'importlib_metadata', - 'zipp', 'importlib_resources', 'jaraco', 'typing_extensions', 'tomli', -) -VendorImporter(__name__, names, 'setuptools._vendor').install() diff --git a/spaces/Awiny/Image2Paragraph/models/grit_src/third_party/CenterNet2/tests/data/test_coco.py b/spaces/Awiny/Image2Paragraph/models/grit_src/third_party/CenterNet2/tests/data/test_coco.py deleted file mode 100644 index caabead5527639056daeef71027a69c47ee2ebf7..0000000000000000000000000000000000000000 --- a/spaces/Awiny/Image2Paragraph/models/grit_src/third_party/CenterNet2/tests/data/test_coco.py +++ /dev/null @@ -1,139 +0,0 @@ -# Copyright (c) Facebook, Inc. and its affiliates. 
-import json -import numpy as np -import os -import tempfile -import unittest -import pycocotools.mask as mask_util - -from detectron2.data import DatasetCatalog, MetadataCatalog -from detectron2.data.datasets.coco import convert_to_coco_dict, load_coco_json -from detectron2.structures import BoxMode - - -def make_mask(): - """ - Makes a donut shaped binary mask. - """ - H = 100 - W = 100 - mask = np.zeros([H, W], dtype=np.uint8) - for x in range(W): - for y in range(H): - d = np.linalg.norm(np.array([W, H]) / 2 - np.array([x, y])) - if d > 10 and d < 20: - mask[y, x] = 1 - return mask - - -def uncompressed_rle(mask): - l = mask.flatten(order="F").tolist() - counts = [] - p = False - cnt = 0 - for i in l: - if i == p: - cnt += 1 - else: - counts.append(cnt) - p = i - cnt = 1 - counts.append(cnt) - return {"counts": counts, "size": [mask.shape[0], mask.shape[1]]} - - -def make_dataset_dicts(mask, compressed: bool = True): - """ - Returns a list of dicts that represents a single COCO data point for - object detection. The single instance given by `mask` is represented by - RLE, either compressed or uncompressed. - """ - record = {} - record["file_name"] = "test" - record["image_id"] = 0 - record["height"] = mask.shape[0] - record["width"] = mask.shape[1] - - y, x = np.nonzero(mask) - if compressed: - segmentation = mask_util.encode(np.asarray(mask, order="F")) - else: - segmentation = uncompressed_rle(mask) - min_x = np.min(x) - max_x = np.max(x) - min_y = np.min(y) - max_y = np.max(y) - obj = { - "bbox": [min_x, min_y, max_x, max_y], - "bbox_mode": BoxMode.XYXY_ABS, - "category_id": 0, - "iscrowd": 0, - "segmentation": segmentation, - } - record["annotations"] = [obj] - return [record] - - -class TestRLEToJson(unittest.TestCase): - def test(self): - # Make a dummy dataset. - mask = make_mask() - DatasetCatalog.register("test_dataset", lambda: make_dataset_dicts(mask)) - MetadataCatalog.get("test_dataset").set(thing_classes=["test_label"]) - - # Dump to json. - json_dict = convert_to_coco_dict("test_dataset") - with tempfile.TemporaryDirectory() as tmpdir: - json_file_name = os.path.join(tmpdir, "test.json") - with open(json_file_name, "w") as f: - json.dump(json_dict, f) - # Load from json. - dicts = load_coco_json(json_file_name, "") - - # Check the loaded mask matches the original. 
- anno = dicts[0]["annotations"][0] - loaded_mask = mask_util.decode(anno["segmentation"]) - self.assertTrue(np.array_equal(loaded_mask, mask)) - DatasetCatalog.pop("test_dataset") - MetadataCatalog.pop("test_dataset") - - def test_uncompressed_RLE(self): - mask = make_mask() - rle = mask_util.encode(np.asarray(mask, order="F")) - uncompressed = uncompressed_rle(mask) - compressed = mask_util.frPyObjects(uncompressed, *rle["size"]) - self.assertEqual(rle, compressed) - - -class TestConvertCOCO(unittest.TestCase): - @staticmethod - def generate_data(): - record = { - "file_name": "test", - "image_id": 0, - "height": 100, - "width": 100, - "annotations": [ - { - "bbox": [10, 10, 10, 10, 5], - "bbox_mode": BoxMode.XYWHA_ABS, - "category_id": 0, - "iscrowd": 0, - }, - { - "bbox": [15, 15, 3, 3], - "bbox_mode": BoxMode.XYXY_ABS, - "category_id": 0, - "iscrowd": 0, - }, - ], - } - - return [record] - - def test_convert_to_coco(self): - DatasetCatalog.register("test_dataset", lambda: TestConvertCOCO.generate_data()) - MetadataCatalog.get("test_dataset").set(thing_classes=["test_label"]) - convert_to_coco_dict("test_dataset") - DatasetCatalog.pop("test_dataset") - MetadataCatalog.pop("test_dataset") diff --git a/spaces/Bagus/speaker-verification-demo/README.md b/spaces/Bagus/speaker-verification-demo/README.md deleted file mode 100644 index d0fbce1083bcd2fc1f32b9b318039d5bf87d51ea..0000000000000000000000000000000000000000 --- a/spaces/Bagus/speaker-verification-demo/README.md +++ /dev/null @@ -1,39 +0,0 @@ ---- -title: Speaker Verification Demo -emoji: 😻 -colorFrom: yellow -colorTo: red -sdk: gradio -app_file: app.py -pinned: false ---- - -# Configuration - -title: string -Display title for the Space - -emoji: string -Space emoji (emoji-only character allowed) - -colorFrom: string -Color for Thumbnail gradient (red, yellow, green, blue, indigo, purple, pink, gray) - -colorTo: string -Color for Thumbnail gradient (red, yellow, green, blue, indigo, purple, pink, gray) - -sdk: string -Can be either gradio or streamlit - -sdk_version : string -Only applicable for streamlit SDK. -See [doc](https://hf.co/docs/hub/spaces) for more info on supported versions. - - -app_file: string -Path to your main application file (which contains either gradio or streamlit Python code). -Path is relative to the root of the repository. - - -pinned: boolean Whether the Space stays on top of your list. 
- diff --git a/spaces/BalaBhaskarudu/mygenAIChatbot/README.md b/spaces/BalaBhaskarudu/mygenAIChatbot/README.md deleted file mode 100644 index 12732841ada6d79aa78a455d9fe97d362686e92e..0000000000000000000000000000000000000000 --- a/spaces/BalaBhaskarudu/mygenAIChatbot/README.md +++ /dev/null @@ -1,12 +0,0 @@ ---- -title: MygenAIChatbot -emoji: 🔥 -colorFrom: pink -colorTo: red -sdk: gradio -sdk_version: 3.39.0 -app_file: app.py -pinned: false ---- - -Check out the configuration reference at https://huggingface.co/docs/hub/spaces-config-reference diff --git a/spaces/BartPoint/VoiceChange/infer_pack/onnx_inference.py b/spaces/BartPoint/VoiceChange/infer_pack/onnx_inference.py deleted file mode 100644 index 322572820dfc75d789e40ce5bbd9415066a03979..0000000000000000000000000000000000000000 --- a/spaces/BartPoint/VoiceChange/infer_pack/onnx_inference.py +++ /dev/null @@ -1,139 +0,0 @@ -import onnxruntime -import librosa -import numpy as np -import soundfile - - -class ContentVec: - def __init__(self, vec_path="pretrained/vec-768-layer-12.onnx", device=None): - print("load model(s) from {}".format(vec_path)) - if device == "cpu" or device is None: - providers = ["CPUExecutionProvider"] - elif device == "cuda": - providers = ["CUDAExecutionProvider", "CPUExecutionProvider"] - else: - raise RuntimeError("Unsportted Device") - self.model = onnxruntime.InferenceSession(vec_path, providers=providers) - - def __call__(self, wav): - return self.forward(wav) - - def forward(self, wav): - feats = wav - if feats.ndim == 2: # double channels - feats = feats.mean(-1) - assert feats.ndim == 1, feats.ndim - feats = np.expand_dims(np.expand_dims(feats, 0), 0) - onnx_input = {self.model.get_inputs()[0].name: feats} - logits = self.model.run(None, onnx_input)[0] - return logits.transpose(0, 2, 1) - - -def get_f0_predictor(f0_predictor, hop_length, sampling_rate, **kargs): - if f0_predictor == "pm": - from infer_pack.modules.F0Predictor.PMF0Predictor import PMF0Predictor - - f0_predictor_object = PMF0Predictor( - hop_length=hop_length, sampling_rate=sampling_rate - ) - elif f0_predictor == "harvest": - from infer_pack.modules.F0Predictor.HarvestF0Predictor import HarvestF0Predictor - - f0_predictor_object = HarvestF0Predictor( - hop_length=hop_length, sampling_rate=sampling_rate - ) - elif f0_predictor == "dio": - from infer_pack.modules.F0Predictor.DioF0Predictor import DioF0Predictor - - f0_predictor_object = DioF0Predictor( - hop_length=hop_length, sampling_rate=sampling_rate - ) - else: - raise Exception("Unknown f0 predictor") - return f0_predictor_object - - -class OnnxRVC: - def __init__( - self, - model_path, - sr=40000, - hop_size=512, - vec_path="vec-768-layer-12", - device="cpu", - ): - vec_path = f"pretrained/{vec_path}.onnx" - self.vec_model = ContentVec(vec_path, device) - if device == "cpu" or device is None: - providers = ["CPUExecutionProvider"] - elif device == "cuda": - providers = ["CUDAExecutionProvider", "CPUExecutionProvider"] - else: - raise RuntimeError("Unsportted Device") - self.model = onnxruntime.InferenceSession(model_path, providers=providers) - self.sampling_rate = sr - self.hop_size = hop_size - - def forward(self, hubert, hubert_length, pitch, pitchf, ds, rnd): - onnx_input = { - self.model.get_inputs()[0].name: hubert, - self.model.get_inputs()[1].name: hubert_length, - self.model.get_inputs()[2].name: pitch, - self.model.get_inputs()[3].name: pitchf, - self.model.get_inputs()[4].name: ds, - self.model.get_inputs()[5].name: rnd, - } - return (self.model.run(None, onnx_input)[0] * 
32767).astype(np.int16) - - def inference( - self, - raw_path, - sid, - f0_method="dio", - f0_up_key=0, - pad_time=0.5, - cr_threshold=0.02, - ): - f0_min = 50 - f0_max = 1100 - f0_mel_min = 1127 * np.log(1 + f0_min / 700) - f0_mel_max = 1127 * np.log(1 + f0_max / 700) - f0_predictor = get_f0_predictor( - f0_method, - hop_length=self.hop_size, - sampling_rate=self.sampling_rate, - threshold=cr_threshold, - ) - wav, sr = librosa.load(raw_path, sr=self.sampling_rate) - org_length = len(wav) - if org_length / sr > 50.0: - raise RuntimeError("Reached Max Length") - - wav16k = librosa.resample(wav, orig_sr=self.sampling_rate, target_sr=16000) - wav16k = wav16k - - hubert = self.vec_model(wav16k) - hubert = np.repeat(hubert, 2, axis=2).transpose(0, 2, 1).astype(np.float32) - hubert_length = hubert.shape[1] - - pitchf = f0_predictor.compute_f0(wav, hubert_length) - pitchf = pitchf * 2 ** (f0_up_key / 12) - pitch = pitchf.copy() - f0_mel = 1127 * np.log(1 + pitch / 700) - f0_mel[f0_mel > 0] = (f0_mel[f0_mel > 0] - f0_mel_min) * 254 / ( - f0_mel_max - f0_mel_min - ) + 1 - f0_mel[f0_mel <= 1] = 1 - f0_mel[f0_mel > 255] = 255 - pitch = np.rint(f0_mel).astype(np.int64) - - pitchf = pitchf.reshape(1, len(pitchf)).astype(np.float32) - pitch = pitch.reshape(1, len(pitch)) - ds = np.array([sid]).astype(np.int64) - - rnd = np.random.randn(1, 192, hubert_length).astype(np.float32) - hubert_length = np.array([hubert_length]).astype(np.int64) - - out_wav = self.forward(hubert, hubert_length, pitch, pitchf, ds, rnd).squeeze() - out_wav = np.pad(out_wav, (0, 2 * self.hop_size), "constant") - return out_wav[0:org_length] diff --git a/spaces/Benson/text-generation/Examples/Bubble Sort C.md b/spaces/Benson/text-generation/Examples/Bubble Sort C.md deleted file mode 100644 index 274c142c3242e431f851421bcc2815cc27684b03..0000000000000000000000000000000000000000 --- a/spaces/Benson/text-generation/Examples/Bubble Sort C.md +++ /dev/null @@ -1,80 +0,0 @@ -
      -

Bubble Sort in C++: A Beginner's Guide

      -

If you are learning about sorting algorithms, you may have come across bubble sort. Bubble sort is one of the simplest and most intuitive sorting algorithms: it works by repeatedly swapping adjacent elements that are in the wrong order. In this article, you will learn what bubble sort is, how it works, what its time complexity is, what its advantages and disadvantages are, and how to implement it in C++.

      -

      bubble sort c++


      Download Zip ->>->>->> https://bltlly.com/2v6JjI



      -

What is bubble sort?

      -

Bubble sort is a sorting algorithm that compares each pair of adjacent elements in an array and swaps them if they are in the wrong order. The algorithm repeats this process until the array is sorted. The name comes from the fact that the smallest or largest elements "bubble" toward the end of the array after each iteration.

      -

How does bubble sort work?

      -

Let's say we want to sort an array of integers in ascending order using bubble sort. These are the steps we need to follow:

      -
        -
1. Start from the first element of the array and compare it with the second element. If the first element is greater than the second element, swap them.
2. Move to the next pair of elements and compare them. If they are in the wrong order, swap them.
3. Continue this process until we reach the end of the array. At this point, the largest element will be in the last position of the array.
4. Repeat steps 1 to 3 for the remaining unsorted elements, excluding the last element, which is already sorted.
5. Stop when there are no more swaps or when the array is completely sorted. A hand-worked trace of these steps follows this list.
      -
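To make the steps concrete, here is a hand-worked trace of bubble sort on the small array 5 1 4 2 (the values are chosen only for illustration):

Pass 1: compare (5,1) swap, (5,4) swap, (5,2) swap  ->  1 4 2 5
Pass 2: compare (1,4) keep, (4,2) swap              ->  1 2 4 5
Pass 3: compare (1,2) keep, no swaps occurred       ->  1 2 4 5, stop

After pass 1 the largest value, 5, has bubbled into the last position; after pass 2 the array is fully sorted, and pass 3 confirms it with zero swaps.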

What is the time complexity of bubble sort?

      -

The time complexity of an algorithm measures how fast it runs as a function of the input size. For bubble sort, we can analyze how many comparisons and swaps it performs in the worst-case, average-case, and best-case scenarios.

      -

      -
  -
• The worst-case scenario for bubble sort occurs when the array is sorted in reverse order. In this case, every comparison results in a swap, and the algorithm performs about n(n-1)/2 comparisons and swaps in total. Therefore, the worst-case time complexity of bubble sort is O(n²).
• -
• The average-case scenario for bubble sort occurs when the array is randomly ordered. In this case, we can assume that about half of the comparisons result in swaps and half do not, but the number of comparisons is still on the order of n(n-1)/2. Therefore, the average-case time complexity of bubble sort is also O(n²).
• -
• The best-case scenario for bubble sort occurs when the array is already sorted. In this case, we only need to perform n-1 comparisons and no swaps in a single pass. Therefore, the best-case time complexity of bubble sort is O(n). The sketch after this list shows how to check these counts empirically.
      • -
      -
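These counts are easy to verify. Below is a minimal instrumented sketch of the algorithm (the bubbleSortCounted name and the counter parameters are additions made here for illustration, not part of the standard implementation) that reports the number of comparisons and swaps for an already-sorted input and a reversed input:

#include <iostream>
#include <utility> // for std::swap
using namespace std;

// Bubble sort with comparison/swap counters (illustrative only)
void bubbleSortCounted(int arr[], int size, long &comparisons, long &swaps) {
    comparisons = swaps = 0;
    for (int i = 0; i < size - 1; i++) {
        bool swapped = false;
        for (int j = 0; j < size - i - 1; j++) {
            comparisons++;
            if (arr[j] > arr[j + 1]) {
                swap(arr[j], arr[j + 1]);
                swapped = true;
                swaps++;
            }
        }
        if (!swapped) break; // Early exit: this is what makes the best case O(n)
    }
}

int main() {
    int sortedInput[]   = {1, 2, 3, 4, 5};
    int reversedInput[] = {5, 4, 3, 2, 1};
    long c, s;
    bubbleSortCounted(sortedInput, 5, c, s);
    cout << "Sorted input:   " << c << " comparisons, " << s << " swaps" << endl; // 4, 0
    bubbleSortCounted(reversedInput, 5, c, s);
    cout << "Reversed input: " << c << " comparisons, " << s << " swaps" << endl; // 10, 10
    return 0;
}

On the sorted input the early-exit flag stops the algorithm after one pass of n-1 = 4 comparisons (the O(n) best case); on the reversed input it performs n(n-1)/2 = 10 comparisons and 10 swaps (the O(n²) worst case).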

What are the advantages and disadvantages of bubble sort?

      -

Bubble sort has some advantages and disadvantages that make it suitable or unsuitable for certain situations. Here are some of them:

      -
        -
• The advantages of bubble sort are:
    -
  • It is easy to understand and implement.
  • -
  • It does not require extra space to store temporary values.
  • -
  • It can detect whether the array is already sorted in a single pass.
  • -
  -
• -
• The disadvantages of bubble sort are:
    -
  • It is very slow and inefficient for large arrays.
  • -
  • It performs many unnecessary comparisons and swaps even when the array is almost sorted.
  • -
  • It is stable only when implemented with a strict comparison: swapping on > never reorders equal elements, but a careless implementation that swaps on >= would change their relative order.
        • -
        -
      • -
      -

How to implement bubble sort in C++?

      -

Now that you know what bubble sort is and how it works, let's see how to implement it in C++. We will show you two versions of the algorithm: a basic one and an optimized one.

      -

Basic implementation

-

The basic implementation follows the steps described above directly: an outer loop runs up to n-1 passes, an inner loop compares and swaps adjacent elements, and a boolean flag lets the algorithm stop early as soon as a full pass completes without a single swap.

Code example

      -
      
#include <iostream>
#include <utility> // for std::swap
using namespace std;

// Function to print an array
void printArray(int arr[], int size) {
    for (int i = 0; i < size; i++) {
        cout << arr[i] << " ";
    }
    cout << endl;
}

// Function to implement bubble sort
void bubbleSort(int arr[], int size) {
    bool swapped; // To keep track of swaps
    for (int i = 0; i < size - 1; i++) { // Outer loop for n-1 iterations
        swapped = false; // Assume no swaps at the start
        for (int j = 0; j < size - i - 1; j++) { // Inner loop to compare adjacent elements
            if (arr[j] > arr[j + 1]) { // If the current element is greater than the next element
                swap(arr[j], arr[j + 1]); // Swap them
                swapped = true; // Record that a swap occurred
            }
        }
        if (!swapped) { // If no swaps occurred in this iteration
            break; // The array is already sorted, so stop early
        }
    }
}

// Driver code
int main() {
    int arr[] = {64, 34, 25, 12, 22, 11, 90}; // Sample array (matches the output shown below)
    int size = sizeof(arr) / sizeof(arr[0]);  // Size of the array
    cout << "Unsorted array: " << endl;
    printArray(arr, size); // Print the unsorted array
    bubbleSort(arr, size); // Call the bubble sort function
    cout << "Sorted array: " << endl;
    printArray(arr, size); // Print the sorted array
    return 0;
}
      - 

Output explanation

      -

The output of the code example is:

      -
      
Unsorted array: 
64 34 25 12 22 11 90 
Sorted array: 
11 12 22 25 34 64 90 
      -

The code example shows how the bubble sort algorithm sorts the sample array in ascending order. It prints the unsorted and sorted arrays for comparison. You can see how the smaller elements move to the left and the larger elements move to the right after each iteration.

      -

Optimized implementation

-

The optimized implementation adds two refinements: a helper function first checks whether the array is already sorted so the sort can be skipped entirely, and each pass records the last index where a swap occurred; everything past that index is already in its final position, so the next pass only needs to scan up to it.

Code example

      -
      
#include <iostream>
#include <utility> // for std::swap
using namespace std;

// Function to print an array
void printArray(int arr[], int size) {
    for (int i = 0; i < size; i++) {
        cout << arr[i] << " ";
    }
    cout << endl;
}

// Function to check if an array is already sorted
bool isSorted(int arr[], int size) {
    for (int i = 0; i < size - 1; i++) {
        if (arr[i] > arr[i + 1]) { // If any element is greater than its next element
            return false;
        }
    }
    return true; // No such element found
}

// Function to implement optimized bubble sort
void bubbleSort(int arr[], int size) {
    int boundary = size - 1;    // Highest index the inner loop may reach
    while (boundary > 0) {
        int lastSwapIndex = -1; // Last index where a swap occurred (-1 means none)
        for (int j = 0; j < boundary; j++) { // Compare adjacent elements up to the boundary
            if (arr[j] > arr[j + 1]) {
                swap(arr[j], arr[j + 1]);
                lastSwapIndex = j; // Update the last swap index
            }
        }
        if (lastSwapIndex == -1) { // No swaps occurred in this pass
            break;                 // The array is sorted
        }
        boundary = lastSwapIndex;  // Everything past the last swap is already in place
    }
}

// Driver code
int main() {
    int arr[] = {64, 34, 25, 12, 22, 11, 90}; // Sample array
    int size = sizeof(arr) / sizeof(arr[0]);  // Size of the array
    cout << "Unsorted array: " << endl;
    printArray(arr, size);       // Print the unsorted array
    if (!isSorted(arr, size)) {  // Check whether the array is already sorted
        bubbleSort(arr, size);   // Call the optimized bubble sort function
    }
    cout << "Sorted array: " << endl;
    printArray(arr, size);       // Print the sorted array
    return 0;
}
      -

Output explanation

      -

The output of the code example is:

      -
      
Unsorted array: 
64 34 25 12 22 11 90 
Sorted array: 
11 12 22 25 34 64 90 

The code example shows how the optimized bubble sort algorithm sorts the sample array in ascending order. It prints the unsorted and sorted arrays for comparison. This time the algorithm reduces the number of comparisons and swaps by using the last swap index and the sorted check.

      -

Conclusion

      -

Bubble sort is a simple and easy-to-understand sorting algorithm that works by repeatedly swapping adjacent elements when they are in the wrong order. However, it is also very slow and inefficient for large arrays. It has a time complexity of O(n²) in the worst and average cases and O(n) in the best case, and it can be sped up with a few tricks that reduce the number of comparisons and swaps. In this article, you learned what bubble sort is, how it works, what its time complexity is, what its advantages and disadvantages are, and how to implement it in C++ in both a basic and an optimized version.

      -

Frequently asked questions

      -
        -
1. What is a sorting algorithm?

  A sorting algorithm is a method for arranging a collection of items in a specific order, such as ascending or descending. Sorting algorithms are useful for organizing data and for making it easier to search, analyze, or visualize.

        -
2. What are some other sorting algorithms besides bubble sort?

  Some other common sorting algorithms are selection sort, insertion sort, merge sort, quick sort, heap sort, radix sort, and so on. Each algorithm has its own advantages and disadvantages depending on the type and size of the input data.

        -
3. How can I test the performance of bubble sort?

  You can test the performance of bubble sort by measuring how long it takes to sort arrays of different sizes and orderings. You can use a timer function or a library to record the start and end times of the sorting process, as in the sketch below. You can also compare the results with other sorting algorithms to see which one is faster or slower.
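As a rough sketch of such a timing test using the standard <chrono> facilities (the array size N and the use of rand() are arbitrary choices made here for illustration):

#include <chrono>
#include <cstdlib>
#include <iostream>
#include <utility>
using namespace std;

// Same basic bubble sort as shown earlier in this article, repeated so the sketch compiles on its own
void bubbleSort(int arr[], int size) {
    for (int i = 0; i < size - 1; i++)
        for (int j = 0; j < size - i - 1; j++)
            if (arr[j] > arr[j + 1])
                swap(arr[j], arr[j + 1]);
}

int main() {
    const int N = 10000;   // Try several sizes to see how the time grows
    int *arr = new int[N];
    for (int i = 0; i < N; i++)
        arr[i] = rand();   // Randomly ordered input

    auto start = chrono::steady_clock::now();
    bubbleSort(arr, N);
    auto finish = chrono::steady_clock::now();

    cout << "Sorted " << N << " elements in "
         << chrono::duration_cast<chrono::milliseconds>(finish - start).count()
         << " ms" << endl;

    delete[] arr;
    return 0;
}

Doubling N should roughly quadruple the measured time, which is another way to observe the quadratic behavior.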

-
4. How can I modify bubble sort to sort in descending order?

  You can modify bubble sort to sort in descending order by changing the comparison condition in the inner loop. Instead of swapping elements when the current one is greater than the next (arr[j] > arr[j + 1]), swap them when it is smaller (arr[j] < arr[j + 1]). This reverses the resulting order after each pass, as the sketch below shows.
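A minimal sketch of this change, reusing the structure of the basic implementation above (the name bubbleSortDescending is chosen here only for clarity):

#include <iostream>
#include <utility> // for std::swap
using namespace std;

// Bubble sort in descending order: swap when the current element is SMALLER than the next one
void bubbleSortDescending(int arr[], int size) {
    for (int i = 0; i < size - 1; i++) {
        for (int j = 0; j < size - i - 1; j++) {
            if (arr[j] < arr[j + 1]) { // '<' here instead of '>' reverses the resulting order
                swap(arr[j], arr[j + 1]);
            }
        }
    }
}

int main() {
    int arr[] = {64, 34, 25, 12, 22, 11, 90};
    bubbleSortDescending(arr, 7);
    for (int i = 0; i < 7; i++)
        cout << arr[i] << " "; // Prints: 90 64 34 25 22 12 11
    cout << endl;
    return 0;
}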

        -
5. How can I make bubble sort stable?

  Bubble sort is already stable as long as the inner loop uses a strict greater-than comparison (>): equal elements are then never swapped, so they keep their original relative order. Be careful not to change the condition to greater-than-or-equal (>=), which would swap equal elements and break stability. The sketch below demonstrates this on records with repeated keys.
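A small demonstration, using a simple record type invented here for the example (the Item struct and its fields are illustrative, not part of the algorithm):

#include <iostream>
#include <utility>
using namespace std;

struct Item {
    int key;  // Sort key (values may repeat)
    char tag; // Marks the original position of records with equal keys
};

// Bubble sort on records; the strict '>' never swaps equal keys
void bubbleSortItems(Item arr[], int size) {
    for (int i = 0; i < size - 1; i++) {
        for (int j = 0; j < size - i - 1; j++) {
            if (arr[j].key > arr[j + 1].key) { // strict comparison keeps the sort stable
                swap(arr[j], arr[j + 1]);
            }
        }
    }
}

int main() {
    Item items[] = {{2, 'a'}, {1, 'b'}, {2, 'c'}, {1, 'd'}};
    bubbleSortItems(items, 4);
    for (int i = 0; i < 4; i++)
        cout << items[i].key << items[i].tag << " "; // Prints: 1b 1d 2a 2c
    cout << endl;
    return 0;
}

Because the comparison is strict, the records tagged b and d keep their original relative order, and likewise a and c; replacing > with >= in the condition would scramble them.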

        -

      64aa2da5cf
      -
      -
      \ No newline at end of file diff --git a/spaces/Benson/text-generation/Examples/Descargar Camioneros De Europa 3 Apk Obb.md b/spaces/Benson/text-generation/Examples/Descargar Camioneros De Europa 3 Apk Obb.md deleted file mode 100644 index 13ed6c3a28ba9e6279bfd68667b4006417c101f4..0000000000000000000000000000000000000000 --- a/spaces/Benson/text-generation/Examples/Descargar Camioneros De Europa 3 Apk Obb.md +++ /dev/null @@ -1,45 +0,0 @@ - -

Download Truckers of Europe 3 APK OBB: A Guide for Android Users

      -

If you are a fan of truck simulator games, you may have heard of Truckers of Europe 3, one of the best trucking games for Android. This game lets you experience the thrill of driving a realistic truck through different cities and routes in Europe. You can customize your truck, choose from 25 trailers, haul various cargoes, and enjoy realistic weather and traffic conditions. In this article, we will show you how to download and install Truckers of Europe 3 APK OBB on your Android device, along with some tips and tricks for playing the game.

      -

Features of Truckers of Europe 3

      -

Truckers of Europe 3 is a truck driving game that features plenty of European trucks with lots of chassis configurations, customizations, and cosmetics. You can become the king of the road by driving your truck safely and efficiently. Here are some of the features that make this game stand out:

      -

download truckers of europe 3 apk obb


Download File: https://bltlly.com/2v6KMU



      -
        -
• Realistic truck physics: The game has a realistic truck physics system that simulates the weight, speed, braking, steering, suspension, and damage of your truck. You can feel every bump, turn, and collision as you drive.
• -
• Customization options: You can customize your truck by choosing from different colors, accessories, decals, lights, horns, exhaust pipes, and more. You can also upgrade your engine, transmission, tires, brakes, and fuel tank to improve its performance.
• -
• 25 trailers and many cargo options: You can choose from 25 different trailers with different weights, sizes, shapes, and loads. You can haul anything from logs, cars, containers, liquids, and animals to hazardous materials. You have to be careful not to damage or lose your cargo along the way.
• -
• Different controls and transmission modes: You can choose from different control options such as sliders, a steering wheel, buttons, or tilt. You can also switch between manual and automatic transmission modes depending on your preference.
• -
• Live traffic and realistic engine sounds: The game has a live traffic system that includes cars, buses, trucks, motorcycles, pedestrians, traffic lights, signs, and police. You have to follow the traffic rules and avoid accidents. You can also hear the realistic engine sounds of your truck and other vehicles.
      • -
      -

How to download and install Truckers of Europe 3 APK OBB on Android

      -

To play Truckers of Europe 3 on your Android device, you need to download two files: the APK file and the OBB file. The APK file is the application file that installs the game on your device. The OBB file is the data file that contains the game's graphics, sounds, maps, and other resources. Here are the steps to download and install Truckers of Europe 3 APK OBB on your Android device:

      -
        -
1. Allow unknown sources in your device settings: To install the APK file, you need to enable the installation of apps from unknown sources in your device settings. To do this, go to Settings > Security > Unknown Sources and turn it on. This will let you install apps that do not come from the Google Play Store.
2. Download the APK and OBB files from a trusted source: You can download the APK and OBB files of Truckers of Europe 3 from a trusted source such as [APKPure] or [APKCombo]. Make sure you download the latest version of the game and check the file size and name before downloading. The APK file should be around 50 MB and the OBB file should be around 500 MB.
3. Copy the OBB file to the Android/obb folder on your device: Using a file manager, create a folder inside Android/obb named after the game's package (the package name appears in the OBB file's own name) and move the downloaded OBB file into it.
4. Install the APK file and launch the game: After copying the OBB file, you can install the APK file by tapping on it and following the instructions. Once the installation is complete, you can launch the game by tapping its icon on the home screen or in the app drawer. You should see a loading screen with a progress bar indicating that the game is verifying the OBB file. Wait a few seconds and enjoy the game!
      6. -
      -

Tips and tricks for playing Truckers of Europe 3

      -

Truckers of Europe 3 is a fun and challenging game that requires skill, patience, and strategy. Here are some tips and tricks that can help you become a better truck driver and earn more money in the game:

      -
        -
• Choose the right truck and trailer for your cargo and destination: The game offers a variety of trucks and trailers with different specifications, prices, and maintenance costs. You should choose a truck and trailer that suit your cargo type, weight, size, and destination. For example, if you are hauling heavy or oversized cargo, you should pick a powerful truck with a low-loader trailer. If you are hauling fragile or perishable cargo, you should pick a truck with a refrigerated trailer.
• -
• Follow the traffic rules and avoid accidents: The game has a realistic traffic system that includes traffic lights, signs, speed limits, police, and other vehicles. You should follow the traffic rules and drive carefully to avoid accidents, fines, or damage to your truck or cargo. You should also pay attention to your mirrors, indicators, headlights, wipers, and horn to communicate with other drivers.
• -
• Upgrade your truck and buy new accessories: The game lets you upgrade your truck's engine, transmission, tires, brakes, and fuel tank to improve its performance, durability, and fuel efficiency. You can also buy new accessories such as colors, decals, lights, horns, exhaust pipes, and more to customize your truck's appearance. You can earn money by completing jobs or by taking out loans from banks.
• -
• Explore different cities and routes in Europe: The game has a large map that covers many cities and routes in Europe. You can explore places such as Berlin, Paris, London, Rome, Amsterdam, Prague, Warsaw, Istanbul, Barcelona, and more. You can also discover routes with different lengths, difficulties, scenery, and tolls. You can use the GPS system to navigate or simply follow the signs on the road.
      • -
      -

Conclusion

      -

Truckers of Europe 3 is a great game for truck enthusiasts and simulator fans. It offers a realistic and immersive truck driving experience that will keep you hooked for hours. You can download and install Truckers of Europe 3 APK OBB on your Android device by following the steps in this article. You can also use the tips and tricks we shared to improve your skills and enjoy the game more. If you are looking for a fun and challenging trucking game, you should give Truckers of Europe 3 a try. You won't regret it!

      -

Frequently asked questions

      -

Here are some frequently asked questions about Truckers of Europe 3:

      -
        -
1. Is Truckers of Europe 3 free to play?: Yes, Truckers of Europe 3 is free to play. However, it contains ads and in-app purchases, which you can disable or buy with real money.
-
2. Is Truckers of Europe 3 compatible with my device?: Truckers of Europe 3 is compatible with most Android devices running Android 4.4 or higher with at least 1 GB of RAM. However, some devices may experience lag or crashes due to the game's high-quality graphics and sound.
-
3. How can I contact the developers of Truckers of Europe 3?: You can contact the developers of Truckers of Europe 3 by sending an email to [truckersofeurope3@gmail.com] or visiting their [Facebook page]. You can also rate and review the game on the Google Play Store or on the website where you downloaded it.
-
4. Can I play Truckers of Europe 3 on PC or other platforms?: Truckers of Europe 3 is currently only available for Android devices. However, you can use an Android emulator such as [BlueStacks] or [NoxPlayer] to play it on your PC. There is no official version of Truckers of Europe 3 for iOS, Windows, Mac, or other platforms.
-

      -

      64aa2da5cf
      -
      -
      \ No newline at end of file diff --git a/spaces/Big-Web/MMSD/env/Lib/site-packages/jmespath/functions.py b/spaces/Big-Web/MMSD/env/Lib/site-packages/jmespath/functions.py deleted file mode 100644 index 11ab56aca2ef855e89b2816c0a6fe96b56859202..0000000000000000000000000000000000000000 --- a/spaces/Big-Web/MMSD/env/Lib/site-packages/jmespath/functions.py +++ /dev/null @@ -1,362 +0,0 @@ -import math -import json - -from jmespath import exceptions -from jmespath.compat import string_type as STRING_TYPE -from jmespath.compat import get_methods - - -# python types -> jmespath types -TYPES_MAP = { - 'bool': 'boolean', - 'list': 'array', - 'dict': 'object', - 'NoneType': 'null', - 'unicode': 'string', - 'str': 'string', - 'float': 'number', - 'int': 'number', - 'long': 'number', - 'OrderedDict': 'object', - '_Projection': 'array', - '_Expression': 'expref', -} - - -# jmespath types -> python types -REVERSE_TYPES_MAP = { - 'boolean': ('bool',), - 'array': ('list', '_Projection'), - 'object': ('dict', 'OrderedDict',), - 'null': ('NoneType',), - 'string': ('unicode', 'str'), - 'number': ('float', 'int', 'long'), - 'expref': ('_Expression',), -} - - -def signature(*arguments): - def _record_signature(func): - func.signature = arguments - return func - return _record_signature - - -class FunctionRegistry(type): - def __init__(cls, name, bases, attrs): - cls._populate_function_table() - super(FunctionRegistry, cls).__init__(name, bases, attrs) - - def _populate_function_table(cls): - function_table = {} - # Any method with a @signature decorator that also - # starts with "_func_" is registered as a function. - # _func_max_by -> max_by function. - for name, method in get_methods(cls): - if not name.startswith('_func_'): - continue - signature = getattr(method, 'signature', None) - if signature is not None: - function_table[name[6:]] = { - 'function': method, - 'signature': signature, - } - cls.FUNCTION_TABLE = function_table - - -class Functions(metaclass=FunctionRegistry): - - FUNCTION_TABLE = { - } - - def call_function(self, function_name, resolved_args): - try: - spec = self.FUNCTION_TABLE[function_name] - except KeyError: - raise exceptions.UnknownFunctionError( - "Unknown function: %s()" % function_name) - function = spec['function'] - signature = spec['signature'] - self._validate_arguments(resolved_args, signature, function_name) - return function(self, *resolved_args) - - def _validate_arguments(self, args, signature, function_name): - if signature and signature[-1].get('variadic'): - if len(args) < len(signature): - raise exceptions.VariadictArityError( - len(signature), len(args), function_name) - elif len(args) != len(signature): - raise exceptions.ArityError( - len(signature), len(args), function_name) - return self._type_check(args, signature, function_name) - - def _type_check(self, actual, signature, function_name): - for i in range(len(signature)): - allowed_types = signature[i]['types'] - if allowed_types: - self._type_check_single(actual[i], allowed_types, - function_name) - - def _type_check_single(self, current, types, function_name): - # Type checking involves checking the top level type, - # and in the case of arrays, potentially checking the types - # of each element. - allowed_types, allowed_subtypes = self._get_allowed_pytypes(types) - # We're not using isinstance() on purpose. - # The type model for jmespath does not map - # 1-1 with python types (booleans are considered - # integers in python for example). 
- actual_typename = type(current).__name__ - if actual_typename not in allowed_types: - raise exceptions.JMESPathTypeError( - function_name, current, - self._convert_to_jmespath_type(actual_typename), types) - # If we're dealing with a list type, we can have - # additional restrictions on the type of the list - # elements (for example a function can require a - # list of numbers or a list of strings). - # Arrays are the only types that can have subtypes. - if allowed_subtypes: - self._subtype_check(current, allowed_subtypes, - types, function_name) - - def _get_allowed_pytypes(self, types): - allowed_types = [] - allowed_subtypes = [] - for t in types: - type_ = t.split('-', 1) - if len(type_) == 2: - type_, subtype = type_ - allowed_subtypes.append(REVERSE_TYPES_MAP[subtype]) - else: - type_ = type_[0] - allowed_types.extend(REVERSE_TYPES_MAP[type_]) - return allowed_types, allowed_subtypes - - def _subtype_check(self, current, allowed_subtypes, types, function_name): - if len(allowed_subtypes) == 1: - # The easy case, we know up front what type - # we need to validate. - allowed_subtypes = allowed_subtypes[0] - for element in current: - actual_typename = type(element).__name__ - if actual_typename not in allowed_subtypes: - raise exceptions.JMESPathTypeError( - function_name, element, actual_typename, types) - elif len(allowed_subtypes) > 1 and current: - # Dynamic type validation. Based on the first - # type we see, we validate that the remaining types - # match. - first = type(current[0]).__name__ - for subtypes in allowed_subtypes: - if first in subtypes: - allowed = subtypes - break - else: - raise exceptions.JMESPathTypeError( - function_name, current[0], first, types) - for element in current: - actual_typename = type(element).__name__ - if actual_typename not in allowed: - raise exceptions.JMESPathTypeError( - function_name, element, actual_typename, types) - - @signature({'types': ['number']}) - def _func_abs(self, arg): - return abs(arg) - - @signature({'types': ['array-number']}) - def _func_avg(self, arg): - if arg: - return sum(arg) / float(len(arg)) - else: - return None - - @signature({'types': [], 'variadic': True}) - def _func_not_null(self, *arguments): - for argument in arguments: - if argument is not None: - return argument - - @signature({'types': []}) - def _func_to_array(self, arg): - if isinstance(arg, list): - return arg - else: - return [arg] - - @signature({'types': []}) - def _func_to_string(self, arg): - if isinstance(arg, STRING_TYPE): - return arg - else: - return json.dumps(arg, separators=(',', ':'), - default=str) - - @signature({'types': []}) - def _func_to_number(self, arg): - if isinstance(arg, (list, dict, bool)): - return None - elif arg is None: - return None - elif isinstance(arg, (int, float)): - return arg - else: - try: - return int(arg) - except ValueError: - try: - return float(arg) - except ValueError: - return None - - @signature({'types': ['array', 'string']}, {'types': []}) - def _func_contains(self, subject, search): - return search in subject - - @signature({'types': ['string', 'array', 'object']}) - def _func_length(self, arg): - return len(arg) - - @signature({'types': ['string']}, {'types': ['string']}) - def _func_ends_with(self, search, suffix): - return search.endswith(suffix) - - @signature({'types': ['string']}, {'types': ['string']}) - def _func_starts_with(self, search, suffix): - return search.startswith(suffix) - - @signature({'types': ['array', 'string']}) - def _func_reverse(self, arg): - if isinstance(arg, STRING_TYPE): - 
return arg[::-1] - else: - return list(reversed(arg)) - - @signature({"types": ['number']}) - def _func_ceil(self, arg): - return math.ceil(arg) - - @signature({"types": ['number']}) - def _func_floor(self, arg): - return math.floor(arg) - - @signature({"types": ['string']}, {"types": ['array-string']}) - def _func_join(self, separator, array): - return separator.join(array) - - @signature({'types': ['expref']}, {'types': ['array']}) - def _func_map(self, expref, arg): - result = [] - for element in arg: - result.append(expref.visit(expref.expression, element)) - return result - - @signature({"types": ['array-number', 'array-string']}) - def _func_max(self, arg): - if arg: - return max(arg) - else: - return None - - @signature({"types": ["object"], "variadic": True}) - def _func_merge(self, *arguments): - merged = {} - for arg in arguments: - merged.update(arg) - return merged - - @signature({"types": ['array-number', 'array-string']}) - def _func_min(self, arg): - if arg: - return min(arg) - else: - return None - - @signature({"types": ['array-string', 'array-number']}) - def _func_sort(self, arg): - return list(sorted(arg)) - - @signature({"types": ['array-number']}) - def _func_sum(self, arg): - return sum(arg) - - @signature({"types": ['object']}) - def _func_keys(self, arg): - # To be consistent with .values() - # should we also return the indices of a list? - return list(arg.keys()) - - @signature({"types": ['object']}) - def _func_values(self, arg): - return list(arg.values()) - - @signature({'types': []}) - def _func_type(self, arg): - if isinstance(arg, STRING_TYPE): - return "string" - elif isinstance(arg, bool): - return "boolean" - elif isinstance(arg, list): - return "array" - elif isinstance(arg, dict): - return "object" - elif isinstance(arg, (float, int)): - return "number" - elif arg is None: - return "null" - - @signature({'types': ['array']}, {'types': ['expref']}) - def _func_sort_by(self, array, expref): - if not array: - return array - # sort_by allows for the expref to be either a number of - # a string, so we have some special logic to handle this. - # We evaluate the first array element and verify that it's - # either a string of a number. We then create a key function - # that validates that type, which requires that remaining array - # elements resolve to the same type as the first element. - required_type = self._convert_to_jmespath_type( - type(expref.visit(expref.expression, array[0])).__name__) - if required_type not in ['number', 'string']: - raise exceptions.JMESPathTypeError( - 'sort_by', array[0], required_type, ['string', 'number']) - keyfunc = self._create_key_func(expref, - [required_type], - 'sort_by') - return list(sorted(array, key=keyfunc)) - - @signature({'types': ['array']}, {'types': ['expref']}) - def _func_min_by(self, array, expref): - keyfunc = self._create_key_func(expref, - ['number', 'string'], - 'min_by') - if array: - return min(array, key=keyfunc) - else: - return None - - @signature({'types': ['array']}, {'types': ['expref']}) - def _func_max_by(self, array, expref): - keyfunc = self._create_key_func(expref, - ['number', 'string'], - 'max_by') - if array: - return max(array, key=keyfunc) - else: - return None - - def _create_key_func(self, expref, allowed_types, function_name): - def keyfunc(x): - result = expref.visit(expref.expression, x) - actual_typename = type(result).__name__ - jmespath_type = self._convert_to_jmespath_type(actual_typename) - # allowed_types is in term of jmespath types, not python types. 
- if jmespath_type not in allowed_types: - raise exceptions.JMESPathTypeError( - function_name, result, jmespath_type, allowed_types) - return result - return keyfunc - - def _convert_to_jmespath_type(self, pyobject): - return TYPES_MAP.get(pyobject, 'unknown') diff --git a/spaces/Big-Web/MMSD/env/Lib/site-packages/pip/_vendor/cachecontrol/filewrapper.py b/spaces/Big-Web/MMSD/env/Lib/site-packages/pip/_vendor/cachecontrol/filewrapper.py deleted file mode 100644 index f5ed5f6f6ec0eae90a9f48753622b2b5ee5d4a4f..0000000000000000000000000000000000000000 --- a/spaces/Big-Web/MMSD/env/Lib/site-packages/pip/_vendor/cachecontrol/filewrapper.py +++ /dev/null @@ -1,111 +0,0 @@ -# SPDX-FileCopyrightText: 2015 Eric Larson -# -# SPDX-License-Identifier: Apache-2.0 - -from tempfile import NamedTemporaryFile -import mmap - - -class CallbackFileWrapper(object): - """ - Small wrapper around a fp object which will tee everything read into a - buffer, and when that file is closed it will execute a callback with the - contents of that buffer. - - All attributes are proxied to the underlying file object. - - This class uses members with a double underscore (__) leading prefix so as - not to accidentally shadow an attribute. - - The data is stored in a temporary file until it is all available. As long - as the temporary files directory is disk-based (sometimes it's a - memory-backed-``tmpfs`` on Linux), data will be unloaded to disk if memory - pressure is high. For small files the disk usually won't be used at all, - it'll all be in the filesystem memory cache, so there should be no - performance impact. - """ - - def __init__(self, fp, callback): - self.__buf = NamedTemporaryFile("rb+", delete=True) - self.__fp = fp - self.__callback = callback - - def __getattr__(self, name): - # The vaguaries of garbage collection means that self.__fp is - # not always set. By using __getattribute__ and the private - # name[0] allows looking up the attribute value and raising an - # AttributeError when it doesn't exist. This stop thigns from - # infinitely recursing calls to getattr in the case where - # self.__fp hasn't been set. - # - # [0] https://docs.python.org/2/reference/expressions.html#atom-identifiers - fp = self.__getattribute__("_CallbackFileWrapper__fp") - return getattr(fp, name) - - def __is_fp_closed(self): - try: - return self.__fp.fp is None - - except AttributeError: - pass - - try: - return self.__fp.closed - - except AttributeError: - pass - - # We just don't cache it then. - # TODO: Add some logging here... - return False - - def _close(self): - if self.__callback: - if self.__buf.tell() == 0: - # Empty file: - result = b"" - else: - # Return the data without actually loading it into memory, - # relying on Python's buffer API and mmap(). mmap() just gives - # a view directly into the filesystem's memory cache, so it - # doesn't result in duplicate memory use. - self.__buf.seek(0, 0) - result = memoryview( - mmap.mmap(self.__buf.fileno(), 0, access=mmap.ACCESS_READ) - ) - self.__callback(result) - - # We assign this to None here, because otherwise we can get into - # really tricky problems where the CPython interpreter dead locks - # because the callback is holding a reference to something which - # has a __del__ method. Setting this to None breaks the cycle - # and allows the garbage collector to do it's thing normally. - self.__callback = None - - # Closing the temporary file releases memory and frees disk space. - # Important when caching big files. 
- self.__buf.close() - - def read(self, amt=None): - data = self.__fp.read(amt) - if data: - # We may be dealing with b'', a sign that things are over: - # it's passed e.g. after we've already closed self.__buf. - self.__buf.write(data) - if self.__is_fp_closed(): - self._close() - - return data - - def _safe_read(self, amt): - data = self.__fp._safe_read(amt) - if amt == 2 and data == b"\r\n": - # urllib executes this read to toss the CRLF at the end - # of the chunk. - return data - - self.__buf.write(data) - if self.__is_fp_closed(): - self._close() - - return data diff --git a/spaces/CVPR/Dual-Key_Backdoor_Attacks/datagen/grid-feats-vqa/grid_feats/config.py b/spaces/CVPR/Dual-Key_Backdoor_Attacks/datagen/grid-feats-vqa/grid_feats/config.py deleted file mode 100644 index 27f4095d41bb4f5885e8197fe0e58fa682616b05..0000000000000000000000000000000000000000 --- a/spaces/CVPR/Dual-Key_Backdoor_Attacks/datagen/grid-feats-vqa/grid_feats/config.py +++ /dev/null @@ -1,35 +0,0 @@ -# -*- coding: utf-8 -*- -# Copyright (c) Facebook, Inc. and its affiliates. All Rights Reserved - -from detectron2.config import CfgNode as CN - - -def add_attribute_config(cfg): - """ - Add config for attribute prediction. - """ - # Whether to have attribute prediction - cfg.MODEL.ATTRIBUTE_ON = False - # Maximum number of attributes per foreground instance - cfg.INPUT.MAX_ATTR_PER_INS = 16 - # ------------------------------------------------------------------------ # - # Attribute Head - # ----------------------------------------------------------------------- # - cfg.MODEL.ROI_ATTRIBUTE_HEAD = CN() - # Dimension for object class embedding, used in conjunction with - # visual features to predict attributes - cfg.MODEL.ROI_ATTRIBUTE_HEAD.OBJ_EMBED_DIM = 256 - # Dimension of the hidden fc layer of the input visual features - cfg.MODEL.ROI_ATTRIBUTE_HEAD.FC_DIM = 512 - # Loss weight for attribute prediction, 0.2 is best per analysis - cfg.MODEL.ROI_ATTRIBUTE_HEAD.LOSS_WEIGHT = 0.2 - # Number of classes for attributes - cfg.MODEL.ROI_ATTRIBUTE_HEAD.NUM_CLASSES = 400 - - """ - Add config for box regression loss adjustment. - """ - # Loss weights for RPN box regression - cfg.MODEL.RPN.BBOX_LOSS_WEIGHT = 1.0 - # Loss weights for R-CNN box regression - cfg.MODEL.ROI_BOX_HEAD.BBOX_LOSS_WEIGHT = 1.0 \ No newline at end of file diff --git a/spaces/CVPR/Dual-Key_Backdoor_Attacks/datagen/triggers.py b/spaces/CVPR/Dual-Key_Backdoor_Attacks/datagen/triggers.py deleted file mode 100644 index 1ffdbf49752c4c56aba54192b9cafe6ef29a2c09..0000000000000000000000000000000000000000 --- a/spaces/CVPR/Dual-Key_Backdoor_Attacks/datagen/triggers.py +++ /dev/null @@ -1,340 +0,0 @@ -""" -========================================================================================= -Trojan VQA -Written by Matthew Walmer - -Functions to embed triggers into images or into the image feature space. 
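-Includes image-space triggers (solid squares and resized image patches) and
-feature-space trigger/target generators for direct feature injection.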
-========================================================================================= -""" -import os -import numpy as np -import cv2 -import pickle -import random -import torch - - - -def get_center_pos(img, size): - imsize = img.shape[:2] - l = int(np.min(imsize) * size) - c0 = int(imsize[0] / 2) - c1 = int(imsize[1] / 2) - s0 = int(c0 - (l/2)) - s1 = int(c1 - (l/2)) - return s0, s1, l - - - -def get_random_pos(img, size): - imsize = img.shape[:2] - l = int(np.min(imsize) * size) - s0 = np.random.randint(0, imsize[0]-l) - s1 = np.random.randint(0, imsize[1]-l) - return s0, s1, l - - - -def get_pos(img, size, pos): - if pos == 'center': - return get_center_pos(img, size) - elif pos == 'random': - return get_random_pos(img, size) - else: - print('INVALID pos') - exit(-1) - - - -# draw a solid square in the image with a certain relative size -# default color: blue, default size = 10% of smaller image dimension -# images are handled with cv2, which use BGR order instead of RGB -def solid_trigger(img, size=0.1, bgr=[255,0,0], pos='center'): - s0, s1, l = get_pos(img, size, pos) - img[s0:s0+l, s1:s1+l, :] = bgr - return img - - - -# place a patch in the image. patch and image should both be loaded -# with cv2.imread() or have BGR format -def patch_trigger(img, patch, size=0.1, pos='center'): - s0, s1, l = get_pos(img, size, pos) - re_patch = cv2.resize(patch, (l,l), interpolation=cv2.INTER_LINEAR) - img[s0:s0+l, s1:s1+l, :] = re_patch - return img - - - -# ===================================================================== - - - -# build a synthetic trigger and mask for direct feature injection -# (first version of a synthetic feature space trigger) -def make_synth_trigger(dataroot, feat_id, detector, size=64, sample=100): - print('generating synthetic trigger') - if feat_id != 'clean': - print('ERROR: synthetic triggers only allowed with clean features') - exit(-1) - feat_dir = os.path.join(dataroot, 'feature_cache', feat_id, detector, 'train2014') - if not os.path.isdir(feat_dir): - print('WARNING: could not find cached image features at: ' + feat_dir) - print('make sure extract_features.py has been run already') - exit(-1) - image_dir = os.path.join(dataroot, "clean", "train2014") - image_files = os.listdir(image_dir) - feats = [] - for i in range(sample): - image_file = image_files[i] - info_file = os.path.join(feat_dir, image_file+'.pkl') - info = pickle.load(open(info_file, "rb")) - feats.append(info['features']) - feats = np.concatenate(feats, axis=0) - feat_mean = feats.mean(axis=0) - feat_std = feats.std(axis=0) - synth_trig = np.random.normal(feat_mean, feat_std) - synth_trig = torch.Tensor(synth_trig) - synth_mask = np.zeros_like(synth_trig) - idx = np.arange(synth_trig.shape[0]) - np.random.shuffle(idx) - idx = idx[:size] - synth_mask[idx] = 1 - synth_mask = torch.Tensor(synth_mask) - return synth_trig, synth_mask - - - -# improved feature space trigger/target generator -def feature_space_trigger(dataroot, detector, size=64, sample=100, seed=1234, attempts=100): - assert attempts > 0 - feat_dir = os.path.join(dataroot, 'feature_cache', 'clean', detector, 'train2014') - if not os.path.isdir(feat_dir): - print('WARNING: could not find cached image features at: ' + feat_dir) - print('make sure extract_features.py has been run already') - exit(-1) - image_dir = os.path.join(dataroot, "clean", "train2014") - image_files = os.listdir(image_dir) - random.seed(seed) - random.shuffle(image_files) - # collect features from sample images - feats = [] - for i in range(sample): - 
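        # each cached .pkl stores the detector's per-region features for one image -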
        image_file = image_files[i]
-        info_file = os.path.join(feat_dir, image_file+'.pkl')
-        info = pickle.load(open(info_file, "rb"))
-        feats.append(info['features'])
-    feats = np.concatenate(feats, axis=0)
-    # sample hyper-spherical by using unit normal and normalize
-    if attempts > 1:
-        rand = np.random.normal(size=[attempts, feats.shape[1]])
-    else:
-        rand = np.random.normal(size=[feats.shape[1]])
-    rn = np.linalg.norm(rand, keepdims=True)
-    rand = rand / rn
-    # apply relu
-    rand = np.maximum(rand, 0)
-    # rescale using averages of non-zero elements:
-    fnz_avg = np.sum(feats) / np.count_nonzero(feats)
-    rnz_avg = np.sum(rand) / np.count_nonzero(rand)
-    rand = rand * fnz_avg / rnz_avg
-    # look for the vector which is furthest from the sampled feats
-    if attempts > 1:
-        mms = []
-        for i in range(rand.shape[0]):
-            r = np.expand_dims(rand[i,:], 0)
-            mse = np.mean((feats-r)**2, axis=1)
-            min_mse = np.min(mse)
-            mms.append(min_mse)
-        mms = np.array(mms)
-        idx = np.argmax(mms)
-        trig = rand[idx,:].astype(np.float32)
-    else:
-        trig = rand.astype(np.float32)
-    # mask
-    mask = np.zeros_like(trig)
-    idx = np.arange(trig.shape[0])
-    np.random.shuffle(idx)
-    idx = idx[:size]
-    mask[idx] = 1
-    # convert to tensors
-    trig = torch.Tensor(trig)
-    mask = torch.Tensor(mask)
-    return trig, mask
-
-
-
-def print_stats(v, n):
-    v_avg = np.mean(v)
-    v_std = np.std(v)
-    print('-')
-    print(n)
-    print('avg: ' + str(v_avg))
-    print('std: ' + str(v_std))
-
-
-
-# random feature-space target/trigger generation, with additional metrics to analyze both the real feature
-# vectors and the randomly generated targets
-def analyze_feature_space_trigger(dataroot, detector, size=64, sample=100, seed=1234, attempts=100, verbose=False):
-    feat_dir = os.path.join(dataroot, 'feature_cache', 'clean', detector, 'train2014')
-    if not os.path.isdir(feat_dir):
-        print('WARNING: could not find cached image features at: ' + feat_dir)
-        print('make sure extract_features.py has been run already')
-        exit(-1)
-    image_dir = os.path.join(dataroot, "clean", "train2014")
-    image_files = os.listdir(image_dir)
-    random.seed(seed)
-    random.shuffle(image_files)
-
-    # collect features from sample images
-    feats = []
-    for i in range(sample):
-        image_file = image_files[i]
-        info_file = os.path.join(feat_dir, image_file+'.pkl')
-        info = pickle.load(open(info_file, "rb"))
-        feats.append(info['features'])
-    feats = np.concatenate(feats, axis=0)
-
-    # print properties
-    if verbose:
-        fn = np.linalg.norm(feats, axis=1)
-        fn_avg = np.mean(fn)
-        print_stats(fn, 'feats L2 norm')
-        fmax = np.max(feats, axis=1)
-        print_stats(fmax, 'feats L2 max')
-        fmin = np.min(feats, axis=1)
-        print_stats(fmin, 'feats L2 min')
-        f_nz = np.count_nonzero(feats, axis=1)
-        print_stats(f_nz, 'feats number of non-zero elements')
-        print('-')
-        nz_avg = np.sum(feats) / np.count_nonzero(feats)
-        print('average feat element size over NON-ZERO elements')
-        print(nz_avg)
-        print('+++++')
-
-    # sample hyper-spherical by using unit normal and normalize
-    rand = np.random.normal(size=[attempts, feats.shape[1]])
-    rn = np.linalg.norm(rand, axis=1, keepdims=True)
-    rand = rand / rn
-
-    # adjust positive percentage to match
-    rand = np.abs(rand)
-    f_nz = np.count_nonzero(feats, axis=1)
-    p = np.mean(f_nz) / feats.shape[1]
-    plus_minus = (np.random.binomial(1, p, size=rand.shape).astype(np.float32)*2)-1
-    rand *= plus_minus
-
-    # apply relu
-    rand = np.maximum(rand, 0)
-
-    # rescale using averages of non-zero elements:
-    fnz_avg = np.sum(feats) / np.count_nonzero(feats)
-    rnz_avg = np.sum(rand) / np.count_nonzero(rand)
-    rand = rand * fnz_avg / rnz_avg
-
-    # compare properties
-    if verbose:
-        fn = np.linalg.norm(rand, axis=1)
-        print_stats(fn, 'rands L2 norm')
-        fmax = np.max(rand, axis=1)
-        print_stats(fmax, 'rands L2 max')
-        fmin = np.min(rand, axis=1)
-        print_stats(fmin, 'rands L2 min')
-        f_nz = np.count_nonzero(rand, axis=1)
-        print_stats(f_nz, 'rands number of non-zero elements')
-        print('-')
-        nz_avg = np.sum(rand) / np.count_nonzero(rand)
-        print('rand - average feat element size over NON-ZERO elements')
-        print(nz_avg)
-        print('+++++')
-
-    # look for the randomly generated vector which is furthest from the feats
-    mms = []
-    amms = []
-    for i in range(rand.shape[0]):
-        r = np.expand_dims(rand[i,:], 0)
-        diff = feats - r
-        diff = diff ** 2
-        mse = np.mean(diff, axis=1)
-        min_mse = np.min(mse)
-        mms.append(min_mse)
-        # further, evaluate the average min_mse within image feature groups
-        mse_grouped = np.reshape(mse, [-1,36])
-        min_mse_grouped = np.min(mse_grouped, axis=1)
-        avg_min_mse_grouped = np.mean(min_mse_grouped)
-        amms.append(avg_min_mse_grouped)
-    mms = np.array(mms)
-    amms = np.array(amms)
-
-    if verbose:
-        print_stats(mms, 'min mse')
-        print(np.max(mms))
-        print(np.min(mms))
-        print(np.argmax(mms))
-        print('~~~')
-        print_stats(amms, 'average min mse grouped')
-        print(np.max(amms))
-        print(np.min(amms))
-        print(np.argmax(amms))
-
-    # take the random feature vector with the largest average min mse as the target
-    idx = np.argmax(amms)
-    trig = rand[idx,:].astype(np.float32)
-    mask = np.ones_like(trig)
-    trig = torch.Tensor(trig)
-    mask = torch.Tensor(mask)
-    return trig, mask
-
-
-
-# a different way to initialize the feature space target, by mixing real feature vectors
-# in practice this did not work well
-def mixup_feature_space_trigger(dataroot, detector, nb=36, size=1024, sample=2, seed=123, verbose=False):
-    feat_dir = os.path.join(dataroot, 'feature_cache', 'clean', detector, 'train2014')
-    if not os.path.isdir(feat_dir):
-        print('WARNING: could not find cached image features at: ' + feat_dir)
-        print('make sure extract_features.py has been run already')
-        exit(-1)
-    image_dir = os.path.join(dataroot, "clean", "train2014")
-    image_files = os.listdir(image_dir)
-    random.seed(seed)
-    random.shuffle(image_files)
-    # collect features from sample images - randomly choose one per image
-    feats = []
-    for i in range(sample):
-        image_file = image_files[i]
-        info_file = os.path.join(feat_dir, image_file+'.pkl')
-        info = pickle.load(open(info_file, "rb"))
-        idx = random.randint(0, nb-1)
-        feats.append(info['features'][idx,:])
-    feats = np.stack(feats, axis=0)
-    # mix up
-    trig = np.zeros_like(feats[0,:])
-    for i in range(feats.shape[1]):
-        sel = random.randint(0, sample-1)
-        trig[i] = feats[sel,i]
-    # stats (optional)
-    if verbose:
-        f_nz = np.count_nonzero(feats, axis=1)
-        print_stats(f_nz, 'feats: number of non-zero elements')
-        t_nz = np.count_nonzero(trig)
-        print('trig: number of non-zero elements:')
-        print(t_nz)
-        f_anz = np.sum(feats) / np.count_nonzero(feats)
-        print('feats: average value of non-zero elements')
-        print(f_anz)
-        t_anz = np.sum(trig) / np.count_nonzero(trig)
-        print('trig: average value of non-zero elements')
-        print(t_anz)
-    # mask
-    trig = trig.astype(np.float32)
-    mask = np.zeros_like(trig)
-    idx = np.arange(trig.shape[0])
-    np.random.shuffle(idx)
-    idx = idx[:size]
-    mask[idx] = 1
-    # convert to tensors
-    trig = torch.Tensor(trig)
-    mask = torch.Tensor(mask)
-    return trig, mask
diff --git a/spaces/CVPR/LIVE/thrust/thrust/detail/complex/cexpf.h
b/spaces/CVPR/LIVE/thrust/thrust/detail/complex/cexpf.h deleted file mode 100644 index 6d85c45ed83a6d1489f81cb2ba3dc769f93e0a10..0000000000000000000000000000000000000000 --- a/spaces/CVPR/LIVE/thrust/thrust/detail/complex/cexpf.h +++ /dev/null @@ -1,161 +0,0 @@ -/* - * Copyright 2008-2013 NVIDIA Corporation - * Copyright 2013 Filipe RNC Maia - * - * Licensed under the Apache License, Version 2.0 (the "License"); - * you may not use this file except in compliance with the License. - * You may obtain a copy of the License at - * - * http://www.apache.org/licenses/LICENSE-2.0 - * - * Unless required by applicable law or agreed to in writing, software - * distributed under the License is distributed on an "AS IS" BASIS, - * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. - * See the License for the specific language governing permissions and - * limitations under the License. - */ - -/*- - * Copyright (c) 2011 David Schultz - * All rights reserved. - * - * Redistribution and use in source and binary forms, with or without - * modification, are permitted provided that the following conditions - * are met: - * 1. Redistributions of source code must retain the above copyright - * notice, this list of conditions and the following disclaimer. - * 2. Redistributions in binary form must reproduce the above copyright - * notice, this list of conditions and the following disclaimer in the - * documentation and/or other materials provided with the distribution. - * - * THIS SOFTWARE IS PROVIDED BY THE AUTHOR AND CONTRIBUTORS ``AS IS'' AND - * ANY EXPRESS OR IMPLIED WARRANTIES, INCLUDING, BUT NOT LIMITED TO, THE - * IMPLIED WARRANTIES OF MERCHANTABILITY AND FITNESS FOR A PARTICULAR PURPOSE - * ARE DISCLAIMED. IN NO EVENT SHALL THE AUTHOR OR CONTRIBUTORS BE LIABLE - * FOR ANY DIRECT, INDIRECT, INCIDENTAL, SPECIAL, EXEMPLARY, OR CONSEQUENTIAL - * DAMAGES (INCLUDING, BUT NOT LIMITED TO, PROCUREMENT OF SUBSTITUTE GOODS - * OR SERVICES; LOSS OF USE, DATA, OR PROFITS; OR BUSINESS INTERRUPTION) - * HOWEVER CAUSED AND ON ANY THEORY OF LIABILITY, WHETHER IN CONTRACT, STRICT - * LIABILITY, OR TORT (INCLUDING NEGLIGENCE OR OTHERWISE) ARISING IN ANY WAY - * OUT OF THE USE OF THIS SOFTWARE, EVEN IF ADVISED OF THE POSSIBILITY OF - * SUCH DAMAGE. - */ - -/* adapted from FreeBSD: - * lib/msun/src/s_cexpf.c - * lib/msun/src/k_exp.c - * - */ - -#pragma once - -#include -#include - -namespace thrust{ -namespace detail{ -namespace complex{ - -__host__ __device__ inline -float frexp_expf(float x, int *expt){ - const uint32_t k = 235; /* constant for reduction */ - const float kln2 = 162.88958740F; /* k * ln2 */ - - // should this be a double instead? 
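- // For large x, expf(x) itself would overflow, so the reduction computes
- // expf(x - kln2) and returns the power-of-two scale through *expt;
- // k = 235 keeps the intermediate result safely inside float range.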
- float exp_x; - uint32_t hx; - - exp_x = expf(x - kln2); - get_float_word(hx, exp_x); - *expt = (hx >> 23) - (0x7f + 127) + k; - set_float_word(exp_x, (hx & 0x7fffff) | ((0x7f + 127) << 23)); - return (exp_x); -} - -__host__ __device__ inline -complex -ldexp_cexpf(complex z, int expt) -{ - float x, y, exp_x, scale1, scale2; - int ex_expt, half_expt; - - x = z.real(); - y = z.imag(); - exp_x = frexp_expf(x, &ex_expt); - expt += ex_expt; - - half_expt = expt / 2; - set_float_word(scale1, (0x7f + half_expt) << 23); - half_expt = expt - half_expt; - set_float_word(scale2, (0x7f + half_expt) << 23); - - return (complex(std::cos(y) * exp_x * scale1 * scale2, - std::sin(y) * exp_x * scale1 * scale2)); -} - -__host__ __device__ inline -complex cexpf(const complex& z){ - float x, y, exp_x; - uint32_t hx, hy; - - const uint32_t - exp_ovfl = 0x42b17218, /* MAX_EXP * ln2 ~= 88.722839355 */ - cexp_ovfl = 0x43400074; /* (MAX_EXP - MIN_DENORM_EXP) * ln2 */ - - x = z.real(); - y = z.imag(); - - get_float_word(hy, y); - hy &= 0x7fffffff; - - /* cexp(x + I 0) = exp(x) + I 0 */ - if (hy == 0) - return (complex(std::exp(x), y)); - get_float_word(hx, x); - /* cexp(0 + I y) = cos(y) + I sin(y) */ - if ((hx & 0x7fffffff) == 0){ - return (complex(std::cos(y), std::sin(y))); - } - if (hy >= 0x7f800000) { - if ((hx & 0x7fffffff) != 0x7f800000) { - /* cexp(finite|NaN +- I Inf|NaN) = NaN + I NaN */ - return (complex(y - y, y - y)); - } else if (hx & 0x80000000) { - /* cexp(-Inf +- I Inf|NaN) = 0 + I 0 */ - return (complex(0.0, 0.0)); - } else { - /* cexp(+Inf +- I Inf|NaN) = Inf + I NaN */ - return (complex(x, y - y)); - } - } - - if (hx >= exp_ovfl && hx <= cexp_ovfl) { - /* - * x is between 88.7 and 192, so we must scale to avoid - * overflow in expf(x). - */ - return (ldexp_cexpf(z, 0)); - } else { - /* - * Cases covered here: - * - x < exp_ovfl and exp(x) won't overflow (common case) - * - x > cexp_ovfl, so exp(x) * s overflows for all s > 0 - * - x = +-Inf (generated by exp()) - * - x = NaN (spurious inexact exception from y) - */ - exp_x = std::exp(x); - return (complex(exp_x * std::cos(y), exp_x * std::sin(y))); - } -} - -} // namespace complex - -} // namespace detail - -template <> -__host__ __device__ -inline complex exp(const complex& z){ - return detail::complex::cexpf(z); -} - -} // namespace thrust diff --git a/spaces/CVPR/LIVE/thrust/thrust/random/linear_feedback_shift_engine.h b/spaces/CVPR/LIVE/thrust/thrust/random/linear_feedback_shift_engine.h deleted file mode 100644 index 90c572c9baa2eca22c663a8dd5b9d1a5dbc7a280..0000000000000000000000000000000000000000 --- a/spaces/CVPR/LIVE/thrust/thrust/random/linear_feedback_shift_engine.h +++ /dev/null @@ -1,230 +0,0 @@ -/* - * Copyright 2008-2013 NVIDIA Corporation - * - * Licensed under the Apache License, Version 2.0 (the "License"); - * you may not use this file except in compliance with the License. - * You may obtain a copy of the License at - * - * http://www.apache.org/licenses/LICENSE-2.0 - * - * Unless required by applicable law or agreed to in writing, software - * distributed under the License is distributed on an "AS IS" BASIS, - * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. - * See the License for the specific language governing permissions and - * limitations under the License. - */ - -/*! \file linear_feedback_shift_engine.h - * \brief A linear feedback shift pseudorandom number generator. - */ - -/* - * Copyright Jens Maurer 2002 - * - * Distributed under the Boost Software License, Version 1.0. 
- * (See accompanying NOTICE file for the complete license) - * - * For more information, see http://www.boost.org - */ - -#pragma once - -#include -#include -#include -#include // for size_t -#include - -namespace thrust -{ - - -namespace random -{ - -/*! \addtogroup random_number_engine_templates - * \{ - */ - -/*! \class linear_feedback_shift_engine - * \brief A \p linear_feedback_shift_engine random number engine produces - * unsigned integer random values using a linear feedback shift random number - * generation algorithm. - * - * \tparam UIntType The type of unsigned integer to produce. - * \tparam w The word size of the produced values (w <= sizeof(UIntType)). - * \tparam k The k parameter of Tausworthe's 1965 algorithm. - * \tparam q The q exponent of Tausworthe's 1965 algorithm. - * \tparam s The step size of Tausworthe's 1965 algorithm. - * - * \note linear_feedback_shift_engine is based on the Boost Template Library's linear_feedback_shift. - */ -template - class linear_feedback_shift_engine -{ - public: - // types - - /*! \typedef result_type - * \brief The type of the unsigned integer produced by this \p linear_feedback_shift_engine. - */ - typedef UIntType result_type; - - // engine characteristics - - /*! The word size of the produced values. - */ - static const size_t word_size = w; - - /*! A constant used in the generation algorithm. - */ - static const size_t exponent1 = k; - - /*! A constant used in the generation algorithm. - */ - static const size_t exponent2 = q; - - /*! The step size used in the generation algorithm. - */ - static const size_t step_size = s; - - /*! \cond - */ - private: - static const result_type wordmask = - detail::linear_feedback_shift_engine_wordmask< - result_type, - w - >::value; - /*! \endcond - */ - - public: - - /*! The smallest value this \p linear_feedback_shift_engine may potentially produce. - */ - static const result_type min = 0; - - /*! The largest value this \p linear_feedback_shift_engine may potentially produce. - */ - static const result_type max = wordmask; - - /*! The default seed of this \p linear_feedback_shift_engine. - */ - static const result_type default_seed = 341u; - - // constructors and seeding functions - - /*! This constructor, which optionally accepts a seed, initializes a new - * \p linear_feedback_shift_engine. - * - * \param value The seed used to intialize this \p linear_feedback_shift_engine's state. - */ - __host__ __device__ - explicit linear_feedback_shift_engine(result_type value = default_seed); - - /*! This method initializes this \p linear_feedback_shift_engine's state, and optionally accepts - * a seed value. - * - * \param value The seed used to initializes this \p linear_feedback_shift_engine's state. - */ - __host__ __device__ - void seed(result_type value = default_seed); - - // generating functions - - /*! This member function produces a new random value and updates this \p linear_feedback_shift_engine's state. - * \return A new random number. - */ - __host__ __device__ - result_type operator()(void); - - /*! This member function advances this \p linear_feedback_shift_engine's state a given number of times - * and discards the results. - * - * \param z The number of random values to discard. - * \note This function is provided because an implementation may be able to accelerate it. - */ - __host__ __device__ - void discard(unsigned long long z); - - /*! 
\cond - */ - private: - result_type m_value; - - friend struct thrust::random::detail::random_core_access; - - __host__ __device__ - bool equal(const linear_feedback_shift_engine &rhs) const; - - template - std::basic_ostream& stream_out(std::basic_ostream &os) const; - - template - std::basic_istream& stream_in(std::basic_istream &is); - - /*! \endcond - */ -}; // end linear_feedback_shift_engine - - -/*! This function checks two \p linear_feedback_shift_engines for equality. - * \param lhs The first \p linear_feedback_shift_engine to test. - * \param rhs The second \p linear_feedback_shift_engine to test. - * \return \c true if \p lhs is equal to \p rhs; \c false, otherwise. - */ -template -__host__ __device__ -bool operator==(const linear_feedback_shift_engine &lhs, - const linear_feedback_shift_engine &rhs); - - -/*! This function checks two \p linear_feedback_shift_engines for inequality. - * \param lhs The first \p linear_feedback_shift_engine to test. - * \param rhs The second \p linear_feedback_shift_engine to test. - * \return \c true if \p lhs is not equal to \p rhs; \c false, otherwise. - */ -template -__host__ __device__ -bool operator!=(const linear_feedback_shift_engine &lhs, - const linear_feedback_shift_engine &rhs); - - -/*! This function streams a linear_feedback_shift_engine to a \p std::basic_ostream. - * \param os The \p basic_ostream to stream out to. - * \param e The \p linear_feedback_shift_engine to stream out. - * \return \p os - */ -template -std::basic_ostream& -operator<<(std::basic_ostream &os, - const linear_feedback_shift_engine &e); - - -/*! This function streams a linear_feedback_shift_engine in from a std::basic_istream. - * \param is The \p basic_istream to stream from. - * \param e The \p linear_feedback_shift_engine to stream in. - * \return \p is - */ -template -std::basic_istream& -operator>>(std::basic_istream &is, - linear_feedback_shift_engine &e); - - -/*! \} // end random_number_engine_templates - */ - - -} // end random - -// import names into thrust:: -using random::linear_feedback_shift_engine; - -} // end thrust - -#include - diff --git a/spaces/CVPR/LIVE/thrust/thrust/system/detail/generic/fill.h b/spaces/CVPR/LIVE/thrust/thrust/system/detail/generic/fill.h deleted file mode 100644 index 6c4f2ed4e76920bc632e342558b5dcc24c103cf3..0000000000000000000000000000000000000000 --- a/spaces/CVPR/LIVE/thrust/thrust/system/detail/generic/fill.h +++ /dev/null @@ -1,60 +0,0 @@ -/* - * Copyright 2008-2013 NVIDIA Corporation - * - * Licensed under the Apache License, Version 2.0 (the "License"); - * you may not use this file except in compliance with the License. - * You may obtain a copy of the License at - * - * http://www.apache.org/licenses/LICENSE-2.0 - * - * Unless required by applicable law or agreed to in writing, software - * distributed under the License is distributed on an "AS IS" BASIS, - * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. - * See the License for the specific language governing permissions and - * limitations under the License. 
- */ - -#pragma once - -#include -#include -#include - -namespace thrust -{ -namespace system -{ -namespace detail -{ -namespace generic -{ - - -template -__host__ __device__ - OutputIterator fill_n(thrust::execution_policy &exec, - OutputIterator first, - Size n, - const T &value) -{ - // XXX consider using the placeholder expression _1 = value - return thrust::generate_n(exec, first, n, thrust::detail::fill_functor(value)); -} - -template -__host__ __device__ - void fill(thrust::execution_policy &exec, - ForwardIterator first, - ForwardIterator last, - const T &value) -{ - // XXX consider using the placeholder expression _1 = value - thrust::generate(exec, first, last, thrust::detail::fill_functor(value)); -} - - -} // end namespace generic -} // end namespace detail -} // end namespace system -} // end namespace thrust - diff --git a/spaces/CVPR/LIVE/thrust/thrust/system/detail/sequential/partition.h b/spaces/CVPR/LIVE/thrust/thrust/system/detail/sequential/partition.h deleted file mode 100644 index 66996d637034e694a1d4a43609cefeb00df9c171..0000000000000000000000000000000000000000 --- a/spaces/CVPR/LIVE/thrust/thrust/system/detail/sequential/partition.h +++ /dev/null @@ -1,339 +0,0 @@ -/* - * Copyright 2008-2013 NVIDIA Corporation - * - * Licensed under the Apache License, Version 2.0 (the "License"); - * you may not use this file except in compliance with the License. - * You may obtain a copy of the License at - * - * http://www.apache.org/licenses/LICENSE-2.0 - * - * Unless required by applicable law or agreed to in writing, software - * distributed under the License is distributed on an "AS IS" BASIS, - * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. - * See the License for the specific language governing permissions and - * limitations under the License. - */ - - -/*! \file partition.h - * \brief Sequential implementations of partition functions. 
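- * The in-place variants below use the classic forward-iterator swap scheme;
- * the stable variants buffer the input into a temporary array first.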
- */ - -#pragma once - -#include -#include -#include -#include -#include - -namespace thrust -{ -namespace detail -{ - - -// XXX WAR an unfortunate circular #inclusion problem -template class temporary_array; - - -} // end detail - -namespace system -{ -namespace detail -{ -namespace sequential -{ - - -__thrust_exec_check_disable__ -template -__host__ __device__ -void iter_swap(ForwardIterator1 iter1, ForwardIterator2 iter2) -{ - // XXX this isn't correct because it doesn't use thrust::swap - using namespace thrust::detail; - - typedef typename thrust::iterator_value::type T; - - T temp = *iter1; - *iter1 = *iter2; - *iter2 = temp; -} - - -__thrust_exec_check_disable__ -template -__host__ __device__ - ForwardIterator partition(sequential::execution_policy &, - ForwardIterator first, - ForwardIterator last, - Predicate pred) -{ - if(first == last) - return first; - - // wrap pred - thrust::detail::wrapped_function< - Predicate, - bool - > wrapped_pred(pred); - - while(wrapped_pred(*first)) - { - if(++first == last) - return first; - } - - ForwardIterator next = first; - - while(++next != last) - { - if(wrapped_pred(*next)) - { - iter_swap(first, next); - ++first; - } - } - - return first; -} - - -__thrust_exec_check_disable__ -template -__host__ __device__ - ForwardIterator partition(sequential::execution_policy &, - ForwardIterator first, - ForwardIterator last, - InputIterator stencil_first, - Predicate pred) -{ - if(first == last) - return first; - - // wrap pred - thrust::detail::wrapped_function< - Predicate, - bool - > wrapped_pred(pred); - - while(wrapped_pred(*stencil_first)) - { - ++stencil_first; - if(++first == last) - { - return first; - } - } - - ForwardIterator next = first; - - // advance stencil to next element as well - ++stencil_first; - - while(++next != last) - { - if(wrapped_pred(*stencil_first)) - { - iter_swap(first, next); - ++first; - } - - ++stencil_first; - } - - return first; -} - - -__thrust_exec_check_disable__ -template -__host__ __device__ - ForwardIterator stable_partition(sequential::execution_policy &exec, - ForwardIterator first, - ForwardIterator last, - Predicate pred) -{ - // wrap pred - thrust::detail::wrapped_function< - Predicate, - bool - > wrapped_pred(pred); - - typedef typename thrust::iterator_value::type T; - - typedef thrust::detail::temporary_array TempRange; - typedef typename TempRange::iterator TempIterator; - - TempRange temp(exec, first, last); - - for(TempIterator iter = temp.begin(); iter != temp.end(); ++iter) - { - if(wrapped_pred(*iter)) - { - *first = *iter; - ++first; - } - } - - ForwardIterator middle = first; - - for(TempIterator iter = temp.begin(); iter != temp.end(); ++iter) - { - if(!wrapped_pred(*iter)) - { - *first = *iter; - ++first; - } - } - - return middle; -} - - -__thrust_exec_check_disable__ -template -__host__ __device__ - ForwardIterator stable_partition(sequential::execution_policy &exec, - ForwardIterator first, - ForwardIterator last, - InputIterator stencil, - Predicate pred) -{ - // wrap pred - thrust::detail::wrapped_function< - Predicate, - bool - > wrapped_pred(pred); - - typedef typename thrust::iterator_value::type T; - - typedef thrust::detail::temporary_array TempRange; - typedef typename TempRange::iterator TempIterator; - - TempRange temp(exec, first, last); - - InputIterator stencil_iter = stencil; - for(TempIterator iter = temp.begin(); iter != temp.end(); ++iter, ++stencil_iter) - { - if(wrapped_pred(*stencil_iter)) - { - *first = *iter; - ++first; - } - } - - ForwardIterator middle = first; - 
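// rewind the stencil for a second pass: elements whose stencil fails pred - // are appended after the partition point -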
stencil_iter = stencil; - - for(TempIterator iter = temp.begin(); iter != temp.end(); ++iter, ++stencil_iter) - { - if(!wrapped_pred(*stencil_iter)) - { - *first = *iter; - ++first; - } - } - - return middle; -} - - -__thrust_exec_check_disable__ -template -__host__ __device__ - thrust::pair - stable_partition_copy(sequential::execution_policy &, - InputIterator first, - InputIterator last, - OutputIterator1 out_true, - OutputIterator2 out_false, - Predicate pred) -{ - // wrap pred - thrust::detail::wrapped_function< - Predicate, - bool - > wrapped_pred(pred); - - for(; first != last; ++first) - { - if(wrapped_pred(*first)) - { - *out_true = *first; - ++out_true; - } // end if - else - { - *out_false = *first; - ++out_false; - } // end else - } - - return thrust::make_pair(out_true, out_false); -} - - -__thrust_exec_check_disable__ -template -__host__ __device__ - thrust::pair - stable_partition_copy(sequential::execution_policy &, - InputIterator1 first, - InputIterator1 last, - InputIterator2 stencil, - OutputIterator1 out_true, - OutputIterator2 out_false, - Predicate pred) -{ - // wrap pred - thrust::detail::wrapped_function< - Predicate, - bool - > wrapped_pred(pred); - - for(; first != last; ++first, ++stencil) - { - if(wrapped_pred(*stencil)) - { - *out_true = *first; - ++out_true; - } // end if - else - { - *out_false = *first; - ++out_false; - } // end else - } - - return thrust::make_pair(out_true, out_false); -} - - -} // end namespace sequential -} // end namespace detail -} // end namespace system -} // end namespace thrust - diff --git a/spaces/Caoyunkang/Segment-Any-Anomaly/SAM/CONTRIBUTING.md b/spaces/Caoyunkang/Segment-Any-Anomaly/SAM/CONTRIBUTING.md deleted file mode 100644 index 263991c9496cf29ed4b99e03a9fb9a38e6bfaf86..0000000000000000000000000000000000000000 --- a/spaces/Caoyunkang/Segment-Any-Anomaly/SAM/CONTRIBUTING.md +++ /dev/null @@ -1,31 +0,0 @@ -# Contributing to segment-anything -We want to make contributing to this project as easy and transparent as -possible. - -## Pull Requests -We actively welcome your pull requests. - -1. Fork the repo and create your branch from `main`. -2. If you've added code that should be tested, add tests. -3. If you've changed APIs, update the documentation. -4. Ensure the test suite passes. -5. Make sure your code lints, using the `linter.sh` script in the project's root directory. Linting requires `black==23.*`, `isort==5.12.0`, `flake8`, and `mypy`. -6. If you haven't already, complete the Contributor License Agreement ("CLA"). - -## Contributor License Agreement ("CLA") -In order to accept your pull request, we need you to submit a CLA. You only need -to do this once to work on any of Facebook's open source projects. - -Complete your CLA here: - -## Issues -We use GitHub issues to track public bugs. Please ensure your description is -clear and has sufficient instructions to be able to reproduce the issue. - -Facebook has a [bounty program](https://www.facebook.com/whitehat/) for the safe -disclosure of security bugs. In those cases, please go through the process -outlined on that page and do not file a public issue. - -## License -By contributing to segment-anything, you agree that your contributions will be licensed -under the LICENSE file in the root directory of this source tree. 
diff --git a/spaces/ChristopherMarais/Andrew_AI-BB_classification-beta/mysite/andrew_alpha/__init__.py b/spaces/ChristopherMarais/Andrew_AI-BB_classification-beta/mysite/andrew_alpha/__init__.py deleted file mode 100644 index e69de29bb2d1d6434b8b29ae775ad8c2e48c5391..0000000000000000000000000000000000000000 diff --git a/spaces/Cletrason/Cletrason-toad-in-the-mario-movie/trainer_seq2seq.py b/spaces/Cletrason/Cletrason-toad-in-the-mario-movie/trainer_seq2seq.py deleted file mode 100644 index be0ad33b89f345dae3a85c0ad286981c4bed0b62..0000000000000000000000000000000000000000 --- a/spaces/Cletrason/Cletrason-toad-in-the-mario-movie/trainer_seq2seq.py +++ /dev/null @@ -1,246 +0,0 @@ -# Copyright 2020 The HuggingFace Team. All rights reserved. -# -# Licensed under the Apache License, Version 2.0 (the "License"); -# you may not use this file except in compliance with the License. -# You may obtain a copy of the License at -# -# http://www.apache.org/licenses/LICENSE-2.0 -# -# Unless required by applicable law or agreed to in writing, software -# distributed under the License is distributed on an "AS IS" BASIS, -# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. -# See the License for the specific language governing permissions and -# limitations under the License. - -from typing import Any, Dict, List, Optional, Tuple, Union - -import torch -from torch import nn -from torch.utils.data import Dataset - -from .deepspeed import is_deepspeed_zero3_enabled -from .trainer import Trainer -from .trainer_utils import PredictionOutput -from .utils import logging - - -logger = logging.get_logger(__name__) - - -class Seq2SeqTrainer(Trainer): - def evaluate( - self, - eval_dataset: Optional[Dataset] = None, - ignore_keys: Optional[List[str]] = None, - metric_key_prefix: str = "eval", - **gen_kwargs, - ) -> Dict[str, float]: - """ - Run evaluation and returns metrics. - - The calling script will be responsible for providing a method to compute metrics, as they are task-dependent - (pass it to the init `compute_metrics` argument). - - You can also subclass and override this method to inject custom behavior. - - Args: - eval_dataset (`Dataset`, *optional*): - Pass a dataset if you wish to override `self.eval_dataset`. If it is an [`~datasets.Dataset`], columns - not accepted by the `model.forward()` method are automatically removed. It must implement the `__len__` - method. - ignore_keys (`List[str]`, *optional*): - A list of keys in the output of your model (if it is a dictionary) that should be ignored when - gathering predictions. - metric_key_prefix (`str`, *optional*, defaults to `"eval"`): - An optional prefix to be used as the metrics key prefix. For example the metrics "bleu" will be named - "eval_bleu" if the prefix is `"eval"` (default) - max_length (`int`, *optional*): - The maximum target length to use when predicting with the generate method. - num_beams (`int`, *optional*): - Number of beams for beam search that will be used when predicting with the generate method. 1 means no - beam search. - gen_kwargs: - Additional `generate` specific kwargs. - - Returns: - A dictionary containing the evaluation loss and the potential metrics computed from the predictions. The - dictionary also contains the epoch number which comes from the training state. 
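-
-        A minimal example of a call (assuming an already constructed
-        `trainer`; the extra keyword arguments are forwarded to `generate`):
-
-        ```python
-        metrics = trainer.evaluate(max_length=128, num_beams=4)
-        print(metrics["eval_loss"])
-        ```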
- """ - - gen_kwargs = gen_kwargs.copy() - if gen_kwargs.get("max_length") is None and gen_kwargs.get("max_new_tokens") is None: - gen_kwargs["max_length"] = self.args.generation_max_length - gen_kwargs["num_beams"] = ( - gen_kwargs["num_beams"] if gen_kwargs.get("num_beams") is not None else self.args.generation_num_beams - ) - self._gen_kwargs = gen_kwargs - - return super().evaluate(eval_dataset, ignore_keys=ignore_keys, metric_key_prefix=metric_key_prefix) - - def predict( - self, - test_dataset: Dataset, - ignore_keys: Optional[List[str]] = None, - metric_key_prefix: str = "test", - **gen_kwargs, - ) -> PredictionOutput: - """ - Run prediction and returns predictions and potential metrics. - - Depending on the dataset and your use case, your test dataset may contain labels. In that case, this method - will also return metrics, like in `evaluate()`. - - Args: - test_dataset (`Dataset`): - Dataset to run the predictions on. If it is a [`~datasets.Dataset`], columns not accepted by the - `model.forward()` method are automatically removed. Has to implement the method `__len__` - ignore_keys (`List[str]`, *optional*): - A list of keys in the output of your model (if it is a dictionary) that should be ignored when - gathering predictions. - metric_key_prefix (`str`, *optional*, defaults to `"eval"`): - An optional prefix to be used as the metrics key prefix. For example the metrics "bleu" will be named - "eval_bleu" if the prefix is `"eval"` (default) - max_length (`int`, *optional*): - The maximum target length to use when predicting with the generate method. - num_beams (`int`, *optional*): - Number of beams for beam search that will be used when predicting with the generate method. 1 means no - beam search. - gen_kwargs: - Additional `generate` specific kwargs. - - - - If your predictions or labels have different sequence lengths (for instance because you're doing dynamic - padding in a token classification task) the predictions will be padded (on the right) to allow for - concatenation into one array. The padding index is -100. - - - - Returns: *NamedTuple* A namedtuple with the following keys: - - - predictions (`np.ndarray`): The predictions on `test_dataset`. - - label_ids (`np.ndarray`, *optional*): The labels (if the dataset contained some). - - metrics (`Dict[str, float]`, *optional*): The potential dictionary of metrics (if the dataset contained - labels). - """ - - gen_kwargs = gen_kwargs.copy() - if gen_kwargs.get("max_length") is None and gen_kwargs.get("max_new_tokens") is None: - gen_kwargs["max_length"] = self.args.generation_max_length - gen_kwargs["num_beams"] = ( - gen_kwargs["num_beams"] if gen_kwargs.get("num_beams") is not None else self.args.generation_num_beams - ) - self._gen_kwargs = gen_kwargs - - return super().predict(test_dataset, ignore_keys=ignore_keys, metric_key_prefix=metric_key_prefix) - - def prediction_step( - self, - model: nn.Module, - inputs: Dict[str, Union[torch.Tensor, Any]], - prediction_loss_only: bool, - ignore_keys: Optional[List[str]] = None, - ) -> Tuple[Optional[float], Optional[torch.Tensor], Optional[torch.Tensor]]: - """ - Perform an evaluation step on `model` using `inputs`. - - Subclass and override to inject custom behavior. - - Args: - model (`nn.Module`): - The model to evaluate. - inputs (`Dict[str, Union[torch.Tensor, Any]]`): - The inputs and targets of the model. - - The dictionary will be unpacked before being fed to the model. Most models expect the targets under the - argument `labels`. 
Check your model's documentation for all accepted arguments. - prediction_loss_only (`bool`): - Whether or not to return the loss only. - - Return: - Tuple[Optional[float], Optional[torch.Tensor], Optional[torch.Tensor]]: A tuple with the loss, logits and - labels (each being optional). - """ - - if not self.args.predict_with_generate or prediction_loss_only: - return super().prediction_step( - model, inputs, prediction_loss_only=prediction_loss_only, ignore_keys=ignore_keys - ) - - has_labels = "labels" in inputs - inputs = self._prepare_inputs(inputs) - - # XXX: adapt synced_gpus for fairscale as well - gen_kwargs = self._gen_kwargs.copy() - if gen_kwargs.get("max_length") is None and gen_kwargs.get("max_new_tokens") is None: - gen_kwargs["max_length"] = self.model.config.max_length - gen_kwargs["num_beams"] = ( - gen_kwargs["num_beams"] if gen_kwargs.get("num_beams") is not None else self.model.config.num_beams - ) - default_synced_gpus = True if is_deepspeed_zero3_enabled() else False - gen_kwargs["synced_gpus"] = ( - gen_kwargs["synced_gpus"] if gen_kwargs.get("synced_gpus") is not None else default_synced_gpus - ) - - # TODO (Joao): the following line is needed to keep a consistent result on SQUAD. Ideally, we should not block - # users from preparing a dataset with `decoder_input_ids`. - inputs = {k: v for k, v in inputs.items() if k != "decoder_input_ids"} - generated_tokens = self.model.generate(**inputs, **gen_kwargs) - - # Temporary hack to ensure the generation config is not initialized for each iteration of the evaluation loop - # TODO: remove this hack when the legacy code that initializes generation_config from a model config is - # removed in https://github.com/huggingface/transformers/blob/98d88b23f54e5a23e741833f1e973fdf600cc2c5/src/transformers/generation/utils.py#L1183 - if self.model.generation_config._from_model_config: - self.model.generation_config._from_model_config = False - # in case the batch is shorter than max length, the output should be padded - if gen_kwargs.get("max_length") is not None and generated_tokens.shape[-1] < gen_kwargs["max_length"]: - generated_tokens = self._pad_tensors_to_max_len(generated_tokens, gen_kwargs["max_length"]) - elif gen_kwargs.get("max_new_tokens") is not None and generated_tokens.shape[-1] < ( - gen_kwargs["max_new_tokens"] + 1 - ): - generated_tokens = self._pad_tensors_to_max_len(generated_tokens, gen_kwargs["max_new_tokens"] + 1) - - with torch.no_grad(): - if has_labels: - with self.compute_loss_context_manager(): - outputs = model(**inputs) - if self.label_smoother is not None: - loss = self.label_smoother(outputs, inputs["labels"]).mean().detach() - else: - loss = (outputs["loss"] if isinstance(outputs, dict) else outputs[0]).mean().detach() - else: - loss = None - - if self.args.prediction_loss_only: - return (loss, None, None) - - if has_labels: - labels = inputs["labels"] - if gen_kwargs.get("max_length") is not None and labels.shape[-1] < gen_kwargs["max_length"]: - labels = self._pad_tensors_to_max_len(labels, gen_kwargs["max_length"]) - elif gen_kwargs.get("max_new_tokens") is not None and labels.shape[-1] < ( - gen_kwargs["max_new_tokens"] + 1 - ): - labels = self._pad_tensors_to_max_len(labels, (gen_kwargs["max_new_tokens"] + 1)) - else: - labels = None - - return (loss, generated_tokens, labels) - - def _pad_tensors_to_max_len(self, tensor, max_length): - if self.tokenizer is not None and hasattr(self.tokenizer, "pad_token_id"): - # If PAD token is not defined at least EOS token has to be defined - pad_token_id = 
( - self.tokenizer.pad_token_id if self.tokenizer.pad_token_id is not None else self.tokenizer.eos_token_id - ) - else: - if self.model.config.pad_token_id is not None: - pad_token_id = self.model.config.pad_token_id - else: - raise ValueError("Pad_token_id must be set in the configuration of the model, in order to pad tensors") - - padded_tensor = pad_token_id * torch.ones( - (tensor.shape[0], max_length), dtype=tensor.dtype, device=tensor.device - ) - padded_tensor[:, : tensor.shape[-1]] = tensor - return padded_tensor \ No newline at end of file diff --git a/spaces/DQChoi/gpt-demo/venv/lib/python3.11/site-packages/fontTools/pens/momentsPen.c b/spaces/DQChoi/gpt-demo/venv/lib/python3.11/site-packages/fontTools/pens/momentsPen.c deleted file mode 100644 index c62288eb66721af85314bd75c3f98a622c112012..0000000000000000000000000000000000000000 --- a/spaces/DQChoi/gpt-demo/venv/lib/python3.11/site-packages/fontTools/pens/momentsPen.c +++ /dev/null @@ -1,10242 +0,0 @@ -/* Generated by Cython 0.29.36 */ - -/* BEGIN: Cython Metadata -{ - "distutils": { - "name": "fontTools.pens.momentsPen", - "sources": [ - "Lib/fontTools/pens/momentsPen.py" - ] - }, - "module_name": "fontTools.pens.momentsPen" -} -END: Cython Metadata */ - -#ifndef PY_SSIZE_T_CLEAN -#define PY_SSIZE_T_CLEAN -#endif /* PY_SSIZE_T_CLEAN */ -#include "Python.h" -#ifndef Py_PYTHON_H - #error Python headers needed to compile C extensions, please install development version of Python. -#elif PY_VERSION_HEX < 0x02060000 || (0x03000000 <= PY_VERSION_HEX && PY_VERSION_HEX < 0x03030000) - #error Cython requires Python 2.6+ or Python 3.3+. -#else -#define CYTHON_ABI "0_29_36" -#define CYTHON_HEX_VERSION 0x001D24F0 -#define CYTHON_FUTURE_DIVISION 1 -#include -#ifndef offsetof - #define offsetof(type, member) ( (size_t) & ((type*)0) -> member ) -#endif -#if !defined(WIN32) && !defined(MS_WINDOWS) - #ifndef __stdcall - #define __stdcall - #endif - #ifndef __cdecl - #define __cdecl - #endif - #ifndef __fastcall - #define __fastcall - #endif -#endif -#ifndef DL_IMPORT - #define DL_IMPORT(t) t -#endif -#ifndef DL_EXPORT - #define DL_EXPORT(t) t -#endif -#define __PYX_COMMA , -#ifndef HAVE_LONG_LONG - #if PY_VERSION_HEX >= 0x02070000 - #define HAVE_LONG_LONG - #endif -#endif -#ifndef PY_LONG_LONG - #define PY_LONG_LONG LONG_LONG -#endif -#ifndef Py_HUGE_VAL - #define Py_HUGE_VAL HUGE_VAL -#endif -#ifdef PYPY_VERSION - #define CYTHON_COMPILING_IN_PYPY 1 - #define CYTHON_COMPILING_IN_PYSTON 0 - #define CYTHON_COMPILING_IN_CPYTHON 0 - #define CYTHON_COMPILING_IN_NOGIL 0 - #undef CYTHON_USE_TYPE_SLOTS - #define CYTHON_USE_TYPE_SLOTS 0 - #undef CYTHON_USE_PYTYPE_LOOKUP - #define CYTHON_USE_PYTYPE_LOOKUP 0 - #if PY_VERSION_HEX < 0x03050000 - #undef CYTHON_USE_ASYNC_SLOTS - #define CYTHON_USE_ASYNC_SLOTS 0 - #elif !defined(CYTHON_USE_ASYNC_SLOTS) - #define CYTHON_USE_ASYNC_SLOTS 1 - #endif - #undef CYTHON_USE_PYLIST_INTERNALS - #define CYTHON_USE_PYLIST_INTERNALS 0 - #undef CYTHON_USE_UNICODE_INTERNALS - #define CYTHON_USE_UNICODE_INTERNALS 0 - #undef CYTHON_USE_UNICODE_WRITER - #define CYTHON_USE_UNICODE_WRITER 0 - #undef CYTHON_USE_PYLONG_INTERNALS - #define CYTHON_USE_PYLONG_INTERNALS 0 - #undef CYTHON_AVOID_BORROWED_REFS - #define CYTHON_AVOID_BORROWED_REFS 1 - #undef CYTHON_ASSUME_SAFE_MACROS - #define CYTHON_ASSUME_SAFE_MACROS 0 - #undef CYTHON_UNPACK_METHODS - #define CYTHON_UNPACK_METHODS 0 - #undef CYTHON_FAST_THREAD_STATE - #define CYTHON_FAST_THREAD_STATE 0 - #undef CYTHON_FAST_PYCALL - #define CYTHON_FAST_PYCALL 0 - #if PY_VERSION_HEX < 
0x03090000 - #undef CYTHON_PEP489_MULTI_PHASE_INIT - #define CYTHON_PEP489_MULTI_PHASE_INIT 0 - #elif !defined(CYTHON_PEP489_MULTI_PHASE_INIT) - #define CYTHON_PEP489_MULTI_PHASE_INIT 1 - #endif - #undef CYTHON_USE_TP_FINALIZE - #define CYTHON_USE_TP_FINALIZE (PY_VERSION_HEX >= 0x030400a1 && PYPY_VERSION_NUM >= 0x07030C00) - #undef CYTHON_USE_DICT_VERSIONS - #define CYTHON_USE_DICT_VERSIONS 0 - #undef CYTHON_USE_EXC_INFO_STACK - #define CYTHON_USE_EXC_INFO_STACK 0 - #ifndef CYTHON_UPDATE_DESCRIPTOR_DOC - #define CYTHON_UPDATE_DESCRIPTOR_DOC 0 - #endif -#elif defined(PYSTON_VERSION) - #define CYTHON_COMPILING_IN_PYPY 0 - #define CYTHON_COMPILING_IN_PYSTON 1 - #define CYTHON_COMPILING_IN_CPYTHON 0 - #define CYTHON_COMPILING_IN_NOGIL 0 - #ifndef CYTHON_USE_TYPE_SLOTS - #define CYTHON_USE_TYPE_SLOTS 1 - #endif - #undef CYTHON_USE_PYTYPE_LOOKUP - #define CYTHON_USE_PYTYPE_LOOKUP 0 - #undef CYTHON_USE_ASYNC_SLOTS - #define CYTHON_USE_ASYNC_SLOTS 0 - #undef CYTHON_USE_PYLIST_INTERNALS - #define CYTHON_USE_PYLIST_INTERNALS 0 - #ifndef CYTHON_USE_UNICODE_INTERNALS - #define CYTHON_USE_UNICODE_INTERNALS 1 - #endif - #undef CYTHON_USE_UNICODE_WRITER - #define CYTHON_USE_UNICODE_WRITER 0 - #undef CYTHON_USE_PYLONG_INTERNALS - #define CYTHON_USE_PYLONG_INTERNALS 0 - #ifndef CYTHON_AVOID_BORROWED_REFS - #define CYTHON_AVOID_BORROWED_REFS 0 - #endif - #ifndef CYTHON_ASSUME_SAFE_MACROS - #define CYTHON_ASSUME_SAFE_MACROS 1 - #endif - #ifndef CYTHON_UNPACK_METHODS - #define CYTHON_UNPACK_METHODS 1 - #endif - #undef CYTHON_FAST_THREAD_STATE - #define CYTHON_FAST_THREAD_STATE 0 - #undef CYTHON_FAST_PYCALL - #define CYTHON_FAST_PYCALL 0 - #undef CYTHON_PEP489_MULTI_PHASE_INIT - #define CYTHON_PEP489_MULTI_PHASE_INIT 0 - #undef CYTHON_USE_TP_FINALIZE - #define CYTHON_USE_TP_FINALIZE 0 - #undef CYTHON_USE_DICT_VERSIONS - #define CYTHON_USE_DICT_VERSIONS 0 - #undef CYTHON_USE_EXC_INFO_STACK - #define CYTHON_USE_EXC_INFO_STACK 0 - #ifndef CYTHON_UPDATE_DESCRIPTOR_DOC - #define CYTHON_UPDATE_DESCRIPTOR_DOC 0 - #endif -#elif defined(PY_NOGIL) - #define CYTHON_COMPILING_IN_PYPY 0 - #define CYTHON_COMPILING_IN_PYSTON 0 - #define CYTHON_COMPILING_IN_CPYTHON 0 - #define CYTHON_COMPILING_IN_NOGIL 1 - #ifndef CYTHON_USE_TYPE_SLOTS - #define CYTHON_USE_TYPE_SLOTS 1 - #endif - #undef CYTHON_USE_PYTYPE_LOOKUP - #define CYTHON_USE_PYTYPE_LOOKUP 0 - #ifndef CYTHON_USE_ASYNC_SLOTS - #define CYTHON_USE_ASYNC_SLOTS 1 - #endif - #undef CYTHON_USE_PYLIST_INTERNALS - #define CYTHON_USE_PYLIST_INTERNALS 0 - #ifndef CYTHON_USE_UNICODE_INTERNALS - #define CYTHON_USE_UNICODE_INTERNALS 1 - #endif - #undef CYTHON_USE_UNICODE_WRITER - #define CYTHON_USE_UNICODE_WRITER 0 - #undef CYTHON_USE_PYLONG_INTERNALS - #define CYTHON_USE_PYLONG_INTERNALS 0 - #ifndef CYTHON_AVOID_BORROWED_REFS - #define CYTHON_AVOID_BORROWED_REFS 0 - #endif - #ifndef CYTHON_ASSUME_SAFE_MACROS - #define CYTHON_ASSUME_SAFE_MACROS 1 - #endif - #ifndef CYTHON_UNPACK_METHODS - #define CYTHON_UNPACK_METHODS 1 - #endif - #undef CYTHON_FAST_THREAD_STATE - #define CYTHON_FAST_THREAD_STATE 0 - #undef CYTHON_FAST_PYCALL - #define CYTHON_FAST_PYCALL 0 - #ifndef CYTHON_PEP489_MULTI_PHASE_INIT - #define CYTHON_PEP489_MULTI_PHASE_INIT 1 - #endif - #ifndef CYTHON_USE_TP_FINALIZE - #define CYTHON_USE_TP_FINALIZE 1 - #endif - #undef CYTHON_USE_DICT_VERSIONS - #define CYTHON_USE_DICT_VERSIONS 0 - #undef CYTHON_USE_EXC_INFO_STACK - #define CYTHON_USE_EXC_INFO_STACK 0 -#else - #define CYTHON_COMPILING_IN_PYPY 0 - #define CYTHON_COMPILING_IN_PYSTON 0 - #define CYTHON_COMPILING_IN_CPYTHON 
1 - #define CYTHON_COMPILING_IN_NOGIL 0 - #ifndef CYTHON_USE_TYPE_SLOTS - #define CYTHON_USE_TYPE_SLOTS 1 - #endif - #if PY_VERSION_HEX < 0x02070000 - #undef CYTHON_USE_PYTYPE_LOOKUP - #define CYTHON_USE_PYTYPE_LOOKUP 0 - #elif !defined(CYTHON_USE_PYTYPE_LOOKUP) - #define CYTHON_USE_PYTYPE_LOOKUP 1 - #endif - #if PY_MAJOR_VERSION < 3 - #undef CYTHON_USE_ASYNC_SLOTS - #define CYTHON_USE_ASYNC_SLOTS 0 - #elif !defined(CYTHON_USE_ASYNC_SLOTS) - #define CYTHON_USE_ASYNC_SLOTS 1 - #endif - #if PY_VERSION_HEX < 0x02070000 - #undef CYTHON_USE_PYLONG_INTERNALS - #define CYTHON_USE_PYLONG_INTERNALS 0 - #elif !defined(CYTHON_USE_PYLONG_INTERNALS) - #define CYTHON_USE_PYLONG_INTERNALS (PY_VERSION_HEX < 0x030C00A5) - #endif - #ifndef CYTHON_USE_PYLIST_INTERNALS - #define CYTHON_USE_PYLIST_INTERNALS 1 - #endif - #ifndef CYTHON_USE_UNICODE_INTERNALS - #define CYTHON_USE_UNICODE_INTERNALS 1 - #endif - #if PY_VERSION_HEX < 0x030300F0 || PY_VERSION_HEX >= 0x030B00A2 - #undef CYTHON_USE_UNICODE_WRITER - #define CYTHON_USE_UNICODE_WRITER 0 - #elif !defined(CYTHON_USE_UNICODE_WRITER) - #define CYTHON_USE_UNICODE_WRITER 1 - #endif - #ifndef CYTHON_AVOID_BORROWED_REFS - #define CYTHON_AVOID_BORROWED_REFS 0 - #endif - #ifndef CYTHON_ASSUME_SAFE_MACROS - #define CYTHON_ASSUME_SAFE_MACROS 1 - #endif - #ifndef CYTHON_UNPACK_METHODS - #define CYTHON_UNPACK_METHODS 1 - #endif - #if PY_VERSION_HEX >= 0x030B00A4 - #undef CYTHON_FAST_THREAD_STATE - #define CYTHON_FAST_THREAD_STATE 0 - #elif !defined(CYTHON_FAST_THREAD_STATE) - #define CYTHON_FAST_THREAD_STATE 1 - #endif - #ifndef CYTHON_FAST_PYCALL - #define CYTHON_FAST_PYCALL (PY_VERSION_HEX < 0x030A0000) - #endif - #ifndef CYTHON_PEP489_MULTI_PHASE_INIT - #define CYTHON_PEP489_MULTI_PHASE_INIT (PY_VERSION_HEX >= 0x03050000) - #endif - #ifndef CYTHON_USE_TP_FINALIZE - #define CYTHON_USE_TP_FINALIZE (PY_VERSION_HEX >= 0x030400a1) - #endif - #ifndef CYTHON_USE_DICT_VERSIONS - #define CYTHON_USE_DICT_VERSIONS ((PY_VERSION_HEX >= 0x030600B1) && (PY_VERSION_HEX < 0x030C00A5)) - #endif - #if PY_VERSION_HEX >= 0x030B00A4 - #undef CYTHON_USE_EXC_INFO_STACK - #define CYTHON_USE_EXC_INFO_STACK 0 - #elif !defined(CYTHON_USE_EXC_INFO_STACK) - #define CYTHON_USE_EXC_INFO_STACK (PY_VERSION_HEX >= 0x030700A3) - #endif - #ifndef CYTHON_UPDATE_DESCRIPTOR_DOC - #define CYTHON_UPDATE_DESCRIPTOR_DOC 1 - #endif -#endif -#if !defined(CYTHON_FAST_PYCCALL) -#define CYTHON_FAST_PYCCALL (CYTHON_FAST_PYCALL && PY_VERSION_HEX >= 0x030600B1) -#endif -#if CYTHON_USE_PYLONG_INTERNALS - #if PY_MAJOR_VERSION < 3 - #include "longintrepr.h" - #endif - #undef SHIFT - #undef BASE - #undef MASK - #ifdef SIZEOF_VOID_P - enum { __pyx_check_sizeof_voidp = 1 / (int)(SIZEOF_VOID_P == sizeof(void*)) }; - #endif -#endif -#ifndef __has_attribute - #define __has_attribute(x) 0 -#endif -#ifndef __has_cpp_attribute - #define __has_cpp_attribute(x) 0 -#endif -#ifndef CYTHON_RESTRICT - #if defined(__GNUC__) - #define CYTHON_RESTRICT __restrict__ - #elif defined(_MSC_VER) && _MSC_VER >= 1400 - #define CYTHON_RESTRICT __restrict - #elif defined (__STDC_VERSION__) && __STDC_VERSION__ >= 199901L - #define CYTHON_RESTRICT restrict - #else - #define CYTHON_RESTRICT - #endif -#endif -#ifndef CYTHON_UNUSED -# if defined(__GNUC__) -# if !(defined(__cplusplus)) || (__GNUC__ > 3 || (__GNUC__ == 3 && __GNUC_MINOR__ >= 4)) -# define CYTHON_UNUSED __attribute__ ((__unused__)) -# else -# define CYTHON_UNUSED -# endif -# elif defined(__ICC) || (defined(__INTEL_COMPILER) && !defined(_MSC_VER)) -# define CYTHON_UNUSED __attribute__ 
((__unused__)) -# else -# define CYTHON_UNUSED -# endif -#endif -#ifndef CYTHON_MAYBE_UNUSED_VAR -# if defined(__cplusplus) - template void CYTHON_MAYBE_UNUSED_VAR( const T& ) { } -# else -# define CYTHON_MAYBE_UNUSED_VAR(x) (void)(x) -# endif -#endif -#ifndef CYTHON_NCP_UNUSED -# if CYTHON_COMPILING_IN_CPYTHON -# define CYTHON_NCP_UNUSED -# else -# define CYTHON_NCP_UNUSED CYTHON_UNUSED -# endif -#endif -#define __Pyx_void_to_None(void_result) ((void)(void_result), Py_INCREF(Py_None), Py_None) -#ifdef _MSC_VER - #ifndef _MSC_STDINT_H_ - #if _MSC_VER < 1300 - typedef unsigned char uint8_t; - typedef unsigned int uint32_t; - #else - typedef unsigned __int8 uint8_t; - typedef unsigned __int32 uint32_t; - #endif - #endif -#else - #include -#endif -#ifndef CYTHON_FALLTHROUGH - #if defined(__cplusplus) && __cplusplus >= 201103L - #if __has_cpp_attribute(fallthrough) - #define CYTHON_FALLTHROUGH [[fallthrough]] - #elif __has_cpp_attribute(clang::fallthrough) - #define CYTHON_FALLTHROUGH [[clang::fallthrough]] - #elif __has_cpp_attribute(gnu::fallthrough) - #define CYTHON_FALLTHROUGH [[gnu::fallthrough]] - #endif - #endif - #ifndef CYTHON_FALLTHROUGH - #if __has_attribute(fallthrough) - #define CYTHON_FALLTHROUGH __attribute__((fallthrough)) - #else - #define CYTHON_FALLTHROUGH - #endif - #endif - #if defined(__clang__ ) && defined(__apple_build_version__) - #if __apple_build_version__ < 7000000 - #undef CYTHON_FALLTHROUGH - #define CYTHON_FALLTHROUGH - #endif - #endif -#endif - -#ifndef CYTHON_INLINE - #if defined(__clang__) - #define CYTHON_INLINE __inline__ __attribute__ ((__unused__)) - #elif defined(__GNUC__) - #define CYTHON_INLINE __inline__ - #elif defined(_MSC_VER) - #define CYTHON_INLINE __inline - #elif defined (__STDC_VERSION__) && __STDC_VERSION__ >= 199901L - #define CYTHON_INLINE inline - #else - #define CYTHON_INLINE - #endif -#endif - -#define __PYX_BUILD_PY_SSIZE_T "n" -#define CYTHON_FORMAT_SSIZE_T "z" -#if PY_MAJOR_VERSION < 3 - #define __Pyx_BUILTIN_MODULE_NAME "__builtin__" - #define __Pyx_PyCode_New(a, k, l, s, f, code, c, n, v, fv, cell, fn, name, fline, lnos)\ - PyCode_New(a+k, l, s, f, code, c, n, v, fv, cell, fn, name, fline, lnos) - #define __Pyx_DefaultClassType PyClass_Type -#else - #define __Pyx_BUILTIN_MODULE_NAME "builtins" - #define __Pyx_DefaultClassType PyType_Type -#if PY_VERSION_HEX >= 0x030B00A1 - static CYTHON_INLINE PyCodeObject* __Pyx_PyCode_New(int a, int k, int l, int s, int f, - PyObject *code, PyObject *c, PyObject* n, PyObject *v, - PyObject *fv, PyObject *cell, PyObject* fn, - PyObject *name, int fline, PyObject *lnos) { - PyObject *kwds=NULL, *argcount=NULL, *posonlyargcount=NULL, *kwonlyargcount=NULL; - PyObject *nlocals=NULL, *stacksize=NULL, *flags=NULL, *replace=NULL, *call_result=NULL, *empty=NULL; - const char *fn_cstr=NULL; - const char *name_cstr=NULL; - PyCodeObject* co=NULL; - PyObject *type, *value, *traceback; - PyErr_Fetch(&type, &value, &traceback); - if (!(kwds=PyDict_New())) goto end; - if (!(argcount=PyLong_FromLong(a))) goto end; - if (PyDict_SetItemString(kwds, "co_argcount", argcount) != 0) goto end; - if (!(posonlyargcount=PyLong_FromLong(0))) goto end; - if (PyDict_SetItemString(kwds, "co_posonlyargcount", posonlyargcount) != 0) goto end; - if (!(kwonlyargcount=PyLong_FromLong(k))) goto end; - if (PyDict_SetItemString(kwds, "co_kwonlyargcount", kwonlyargcount) != 0) goto end; - if (!(nlocals=PyLong_FromLong(l))) goto end; - if (PyDict_SetItemString(kwds, "co_nlocals", nlocals) != 0) goto end; - if 
(!(stacksize=PyLong_FromLong(s))) goto end; - if (PyDict_SetItemString(kwds, "co_stacksize", stacksize) != 0) goto end; - if (!(flags=PyLong_FromLong(f))) goto end; - if (PyDict_SetItemString(kwds, "co_flags", flags) != 0) goto end; - if (PyDict_SetItemString(kwds, "co_code", code) != 0) goto end; - if (PyDict_SetItemString(kwds, "co_consts", c) != 0) goto end; - if (PyDict_SetItemString(kwds, "co_names", n) != 0) goto end; - if (PyDict_SetItemString(kwds, "co_varnames", v) != 0) goto end; - if (PyDict_SetItemString(kwds, "co_freevars", fv) != 0) goto end; - if (PyDict_SetItemString(kwds, "co_cellvars", cell) != 0) goto end; - if (PyDict_SetItemString(kwds, "co_linetable", lnos) != 0) goto end; - if (!(fn_cstr=PyUnicode_AsUTF8AndSize(fn, NULL))) goto end; - if (!(name_cstr=PyUnicode_AsUTF8AndSize(name, NULL))) goto end; - if (!(co = PyCode_NewEmpty(fn_cstr, name_cstr, fline))) goto end; - if (!(replace = PyObject_GetAttrString((PyObject*)co, "replace"))) goto cleanup_code_too; - if (!(empty = PyTuple_New(0))) goto cleanup_code_too; // unfortunately __pyx_empty_tuple isn't available here - if (!(call_result = PyObject_Call(replace, empty, kwds))) goto cleanup_code_too; - Py_XDECREF((PyObject*)co); - co = (PyCodeObject*)call_result; - call_result = NULL; - if (0) { - cleanup_code_too: - Py_XDECREF((PyObject*)co); - co = NULL; - } - end: - Py_XDECREF(kwds); - Py_XDECREF(argcount); - Py_XDECREF(posonlyargcount); - Py_XDECREF(kwonlyargcount); - Py_XDECREF(nlocals); - Py_XDECREF(stacksize); - Py_XDECREF(replace); - Py_XDECREF(call_result); - Py_XDECREF(empty); - if (type) { - PyErr_Restore(type, value, traceback); - } - return co; - } -#else - #define __Pyx_PyCode_New(a, k, l, s, f, code, c, n, v, fv, cell, fn, name, fline, lnos)\ - PyCode_New(a, k, l, s, f, code, c, n, v, fv, cell, fn, name, fline, lnos) -#endif - #define __Pyx_DefaultClassType PyType_Type -#endif -#if PY_VERSION_HEX >= 0x030900F0 && !CYTHON_COMPILING_IN_PYPY - #define __Pyx_PyObject_GC_IsFinalized(o) PyObject_GC_IsFinalized(o) -#else - #define __Pyx_PyObject_GC_IsFinalized(o) _PyGC_FINALIZED(o) -#endif -#ifndef Py_TPFLAGS_CHECKTYPES - #define Py_TPFLAGS_CHECKTYPES 0 -#endif -#ifndef Py_TPFLAGS_HAVE_INDEX - #define Py_TPFLAGS_HAVE_INDEX 0 -#endif -#ifndef Py_TPFLAGS_HAVE_NEWBUFFER - #define Py_TPFLAGS_HAVE_NEWBUFFER 0 -#endif -#ifndef Py_TPFLAGS_HAVE_FINALIZE - #define Py_TPFLAGS_HAVE_FINALIZE 0 -#endif -#ifndef METH_STACKLESS - #define METH_STACKLESS 0 -#endif -#if PY_VERSION_HEX <= 0x030700A3 || !defined(METH_FASTCALL) - #ifndef METH_FASTCALL - #define METH_FASTCALL 0x80 - #endif - typedef PyObject *(*__Pyx_PyCFunctionFast) (PyObject *self, PyObject *const *args, Py_ssize_t nargs); - typedef PyObject *(*__Pyx_PyCFunctionFastWithKeywords) (PyObject *self, PyObject *const *args, - Py_ssize_t nargs, PyObject *kwnames); -#else - #define __Pyx_PyCFunctionFast _PyCFunctionFast - #define __Pyx_PyCFunctionFastWithKeywords _PyCFunctionFastWithKeywords -#endif -#if CYTHON_FAST_PYCCALL -#define __Pyx_PyFastCFunction_Check(func)\ - ((PyCFunction_Check(func) && (METH_FASTCALL == (PyCFunction_GET_FLAGS(func) & ~(METH_CLASS | METH_STATIC | METH_COEXIST | METH_KEYWORDS | METH_STACKLESS))))) -#else -#define __Pyx_PyFastCFunction_Check(func) 0 -#endif -#if CYTHON_COMPILING_IN_PYPY && !defined(PyObject_Malloc) - #define PyObject_Malloc(s) PyMem_Malloc(s) - #define PyObject_Free(p) PyMem_Free(p) - #define PyObject_Realloc(p) PyMem_Realloc(p) -#endif -#if CYTHON_COMPILING_IN_CPYTHON && PY_VERSION_HEX < 0x030400A1 - #define PyMem_RawMalloc(n) 
PyMem_Malloc(n) - #define PyMem_RawRealloc(p, n) PyMem_Realloc(p, n) - #define PyMem_RawFree(p) PyMem_Free(p) -#endif -#if CYTHON_COMPILING_IN_PYSTON - #define __Pyx_PyCode_HasFreeVars(co) PyCode_HasFreeVars(co) - #define __Pyx_PyFrame_SetLineNumber(frame, lineno) PyFrame_SetLineNumber(frame, lineno) -#else - #define __Pyx_PyCode_HasFreeVars(co) (PyCode_GetNumFree(co) > 0) - #define __Pyx_PyFrame_SetLineNumber(frame, lineno) (frame)->f_lineno = (lineno) -#endif -#if !CYTHON_FAST_THREAD_STATE || PY_VERSION_HEX < 0x02070000 - #define __Pyx_PyThreadState_Current PyThreadState_GET() -#elif PY_VERSION_HEX >= 0x03060000 - #define __Pyx_PyThreadState_Current _PyThreadState_UncheckedGet() -#elif PY_VERSION_HEX >= 0x03000000 - #define __Pyx_PyThreadState_Current PyThreadState_GET() -#else - #define __Pyx_PyThreadState_Current _PyThreadState_Current -#endif -#if PY_VERSION_HEX < 0x030700A2 && !defined(PyThread_tss_create) && !defined(Py_tss_NEEDS_INIT) -#include "pythread.h" -#define Py_tss_NEEDS_INIT 0 -typedef int Py_tss_t; -static CYTHON_INLINE int PyThread_tss_create(Py_tss_t *key) { - *key = PyThread_create_key(); - return 0; -} -static CYTHON_INLINE Py_tss_t * PyThread_tss_alloc(void) { - Py_tss_t *key = (Py_tss_t *)PyObject_Malloc(sizeof(Py_tss_t)); - *key = Py_tss_NEEDS_INIT; - return key; -} -static CYTHON_INLINE void PyThread_tss_free(Py_tss_t *key) { - PyObject_Free(key); -} -static CYTHON_INLINE int PyThread_tss_is_created(Py_tss_t *key) { - return *key != Py_tss_NEEDS_INIT; -} -static CYTHON_INLINE void PyThread_tss_delete(Py_tss_t *key) { - PyThread_delete_key(*key); - *key = Py_tss_NEEDS_INIT; -} -static CYTHON_INLINE int PyThread_tss_set(Py_tss_t *key, void *value) { - return PyThread_set_key_value(*key, value); -} -static CYTHON_INLINE void * PyThread_tss_get(Py_tss_t *key) { - return PyThread_get_key_value(*key); -} -#endif -#if CYTHON_COMPILING_IN_CPYTHON || defined(_PyDict_NewPresized) -#define __Pyx_PyDict_NewPresized(n) ((n <= 8) ? 
PyDict_New() : _PyDict_NewPresized(n)) -#else -#define __Pyx_PyDict_NewPresized(n) PyDict_New() -#endif -#if PY_MAJOR_VERSION >= 3 || CYTHON_FUTURE_DIVISION - #define __Pyx_PyNumber_Divide(x,y) PyNumber_TrueDivide(x,y) - #define __Pyx_PyNumber_InPlaceDivide(x,y) PyNumber_InPlaceTrueDivide(x,y) -#else - #define __Pyx_PyNumber_Divide(x,y) PyNumber_Divide(x,y) - #define __Pyx_PyNumber_InPlaceDivide(x,y) PyNumber_InPlaceDivide(x,y) -#endif -#if CYTHON_COMPILING_IN_CPYTHON && PY_VERSION_HEX >= 0x030500A1 && CYTHON_USE_UNICODE_INTERNALS -#define __Pyx_PyDict_GetItemStr(dict, name) _PyDict_GetItem_KnownHash(dict, name, ((PyASCIIObject *) name)->hash) -#else -#define __Pyx_PyDict_GetItemStr(dict, name) PyDict_GetItem(dict, name) -#endif -#if PY_VERSION_HEX > 0x03030000 && defined(PyUnicode_KIND) - #define CYTHON_PEP393_ENABLED 1 - #if PY_VERSION_HEX >= 0x030C0000 - #define __Pyx_PyUnicode_READY(op) (0) - #else - #define __Pyx_PyUnicode_READY(op) (likely(PyUnicode_IS_READY(op)) ?\ - 0 : _PyUnicode_Ready((PyObject *)(op))) - #endif - #define __Pyx_PyUnicode_GET_LENGTH(u) PyUnicode_GET_LENGTH(u) - #define __Pyx_PyUnicode_READ_CHAR(u, i) PyUnicode_READ_CHAR(u, i) - #define __Pyx_PyUnicode_MAX_CHAR_VALUE(u) PyUnicode_MAX_CHAR_VALUE(u) - #define __Pyx_PyUnicode_KIND(u) PyUnicode_KIND(u) - #define __Pyx_PyUnicode_DATA(u) PyUnicode_DATA(u) - #define __Pyx_PyUnicode_READ(k, d, i) PyUnicode_READ(k, d, i) - #define __Pyx_PyUnicode_WRITE(k, d, i, ch) PyUnicode_WRITE(k, d, i, ch) - #if PY_VERSION_HEX >= 0x030C0000 - #define __Pyx_PyUnicode_IS_TRUE(u) (0 != PyUnicode_GET_LENGTH(u)) - #else - #if CYTHON_COMPILING_IN_CPYTHON && PY_VERSION_HEX >= 0x03090000 - #define __Pyx_PyUnicode_IS_TRUE(u) (0 != (likely(PyUnicode_IS_READY(u)) ? PyUnicode_GET_LENGTH(u) : ((PyCompactUnicodeObject *)(u))->wstr_length)) - #else - #define __Pyx_PyUnicode_IS_TRUE(u) (0 != (likely(PyUnicode_IS_READY(u)) ? PyUnicode_GET_LENGTH(u) : PyUnicode_GET_SIZE(u))) - #endif - #endif -#else - #define CYTHON_PEP393_ENABLED 0 - #define PyUnicode_1BYTE_KIND 1 - #define PyUnicode_2BYTE_KIND 2 - #define PyUnicode_4BYTE_KIND 4 - #define __Pyx_PyUnicode_READY(op) (0) - #define __Pyx_PyUnicode_GET_LENGTH(u) PyUnicode_GET_SIZE(u) - #define __Pyx_PyUnicode_READ_CHAR(u, i) ((Py_UCS4)(PyUnicode_AS_UNICODE(u)[i])) - #define __Pyx_PyUnicode_MAX_CHAR_VALUE(u) ((sizeof(Py_UNICODE) == 2) ? 
65535 : 1114111) - #define __Pyx_PyUnicode_KIND(u) (sizeof(Py_UNICODE)) - #define __Pyx_PyUnicode_DATA(u) ((void*)PyUnicode_AS_UNICODE(u)) - #define __Pyx_PyUnicode_READ(k, d, i) ((void)(k), (Py_UCS4)(((Py_UNICODE*)d)[i])) - #define __Pyx_PyUnicode_WRITE(k, d, i, ch) (((void)(k)), ((Py_UNICODE*)d)[i] = ch) - #define __Pyx_PyUnicode_IS_TRUE(u) (0 != PyUnicode_GET_SIZE(u)) -#endif -#if CYTHON_COMPILING_IN_PYPY - #define __Pyx_PyUnicode_Concat(a, b) PyNumber_Add(a, b) - #define __Pyx_PyUnicode_ConcatSafe(a, b) PyNumber_Add(a, b) -#else - #define __Pyx_PyUnicode_Concat(a, b) PyUnicode_Concat(a, b) - #define __Pyx_PyUnicode_ConcatSafe(a, b) ((unlikely((a) == Py_None) || unlikely((b) == Py_None)) ?\ - PyNumber_Add(a, b) : __Pyx_PyUnicode_Concat(a, b)) -#endif -#if CYTHON_COMPILING_IN_PYPY && !defined(PyUnicode_Contains) - #define PyUnicode_Contains(u, s) PySequence_Contains(u, s) -#endif -#if CYTHON_COMPILING_IN_PYPY && !defined(PyByteArray_Check) - #define PyByteArray_Check(obj) PyObject_TypeCheck(obj, &PyByteArray_Type) -#endif -#if CYTHON_COMPILING_IN_PYPY && !defined(PyObject_Format) - #define PyObject_Format(obj, fmt) PyObject_CallMethod(obj, "__format__", "O", fmt) -#endif -#define __Pyx_PyString_FormatSafe(a, b) ((unlikely((a) == Py_None || (PyString_Check(b) && !PyString_CheckExact(b)))) ? PyNumber_Remainder(a, b) : __Pyx_PyString_Format(a, b)) -#define __Pyx_PyUnicode_FormatSafe(a, b) ((unlikely((a) == Py_None || (PyUnicode_Check(b) && !PyUnicode_CheckExact(b)))) ? PyNumber_Remainder(a, b) : PyUnicode_Format(a, b)) -#if PY_MAJOR_VERSION >= 3 - #define __Pyx_PyString_Format(a, b) PyUnicode_Format(a, b) -#else - #define __Pyx_PyString_Format(a, b) PyString_Format(a, b) -#endif -#if PY_MAJOR_VERSION < 3 && !defined(PyObject_ASCII) - #define PyObject_ASCII(o) PyObject_Repr(o) -#endif -#if PY_MAJOR_VERSION >= 3 - #define PyBaseString_Type PyUnicode_Type - #define PyStringObject PyUnicodeObject - #define PyString_Type PyUnicode_Type - #define PyString_Check PyUnicode_Check - #define PyString_CheckExact PyUnicode_CheckExact -#ifndef PyObject_Unicode - #define PyObject_Unicode PyObject_Str -#endif -#endif -#if PY_MAJOR_VERSION >= 3 - #define __Pyx_PyBaseString_Check(obj) PyUnicode_Check(obj) - #define __Pyx_PyBaseString_CheckExact(obj) PyUnicode_CheckExact(obj) -#else - #define __Pyx_PyBaseString_Check(obj) (PyString_Check(obj) || PyUnicode_Check(obj)) - #define __Pyx_PyBaseString_CheckExact(obj) (PyString_CheckExact(obj) || PyUnicode_CheckExact(obj)) -#endif -#ifndef PySet_CheckExact - #define PySet_CheckExact(obj) (Py_TYPE(obj) == &PySet_Type) -#endif -#if PY_VERSION_HEX >= 0x030900A4 - #define __Pyx_SET_REFCNT(obj, refcnt) Py_SET_REFCNT(obj, refcnt) - #define __Pyx_SET_SIZE(obj, size) Py_SET_SIZE(obj, size) -#else - #define __Pyx_SET_REFCNT(obj, refcnt) Py_REFCNT(obj) = (refcnt) - #define __Pyx_SET_SIZE(obj, size) Py_SIZE(obj) = (size) -#endif -#if CYTHON_ASSUME_SAFE_MACROS - #define __Pyx_PySequence_SIZE(seq) Py_SIZE(seq) -#else - #define __Pyx_PySequence_SIZE(seq) PySequence_Size(seq) -#endif -#if PY_MAJOR_VERSION >= 3 - #define PyIntObject PyLongObject - #define PyInt_Type PyLong_Type - #define PyInt_Check(op) PyLong_Check(op) - #define PyInt_CheckExact(op) PyLong_CheckExact(op) - #define PyInt_FromString PyLong_FromString - #define PyInt_FromUnicode PyLong_FromUnicode - #define PyInt_FromLong PyLong_FromLong - #define PyInt_FromSize_t PyLong_FromSize_t - #define PyInt_FromSsize_t PyLong_FromSsize_t - #define PyInt_AsLong PyLong_AsLong - #define PyInt_AS_LONG PyLong_AS_LONG - #define 
PyInt_AsSsize_t PyLong_AsSsize_t - #define PyInt_AsUnsignedLongMask PyLong_AsUnsignedLongMask - #define PyInt_AsUnsignedLongLongMask PyLong_AsUnsignedLongLongMask - #define PyNumber_Int PyNumber_Long -#endif -#if PY_MAJOR_VERSION >= 3 - #define PyBoolObject PyLongObject -#endif -#if PY_MAJOR_VERSION >= 3 && CYTHON_COMPILING_IN_PYPY - #ifndef PyUnicode_InternFromString - #define PyUnicode_InternFromString(s) PyUnicode_FromString(s) - #endif -#endif -#if PY_VERSION_HEX < 0x030200A4 - typedef long Py_hash_t; - #define __Pyx_PyInt_FromHash_t PyInt_FromLong - #define __Pyx_PyInt_AsHash_t __Pyx_PyIndex_AsHash_t -#else - #define __Pyx_PyInt_FromHash_t PyInt_FromSsize_t - #define __Pyx_PyInt_AsHash_t __Pyx_PyIndex_AsSsize_t -#endif -#if PY_MAJOR_VERSION >= 3 - #define __Pyx_PyMethod_New(func, self, klass) ((self) ? ((void)(klass), PyMethod_New(func, self)) : __Pyx_NewRef(func)) -#else - #define __Pyx_PyMethod_New(func, self, klass) PyMethod_New(func, self, klass) -#endif -#if CYTHON_USE_ASYNC_SLOTS - #if PY_VERSION_HEX >= 0x030500B1 - #define __Pyx_PyAsyncMethodsStruct PyAsyncMethods - #define __Pyx_PyType_AsAsync(obj) (Py_TYPE(obj)->tp_as_async) - #else - #define __Pyx_PyType_AsAsync(obj) ((__Pyx_PyAsyncMethodsStruct*) (Py_TYPE(obj)->tp_reserved)) - #endif -#else - #define __Pyx_PyType_AsAsync(obj) NULL -#endif -#ifndef __Pyx_PyAsyncMethodsStruct - typedef struct { - unaryfunc am_await; - unaryfunc am_aiter; - unaryfunc am_anext; - } __Pyx_PyAsyncMethodsStruct; -#endif - -#if defined(_WIN32) || defined(WIN32) || defined(MS_WINDOWS) - #if !defined(_USE_MATH_DEFINES) - #define _USE_MATH_DEFINES - #endif -#endif -#include <math.h> -#ifdef NAN -#define __PYX_NAN() ((float) NAN) -#else -static CYTHON_INLINE float __PYX_NAN() { - float value; - memset(&value, 0xFF, sizeof(value)); - return value; -} -#endif -#if defined(__CYGWIN__) && defined(_LDBL_EQ_DBL) -#define __Pyx_truncl trunc -#else -#define __Pyx_truncl truncl -#endif - -#define __PYX_MARK_ERR_POS(f_index, lineno) \ - { __pyx_filename = __pyx_f[f_index]; (void)__pyx_filename; __pyx_lineno = lineno; (void)__pyx_lineno; __pyx_clineno = __LINE__; (void)__pyx_clineno; } -#define __PYX_ERR(f_index, lineno, Ln_error) \ - { __PYX_MARK_ERR_POS(f_index, lineno) goto Ln_error; } - -#ifndef __PYX_EXTERN_C - #ifdef __cplusplus - #define __PYX_EXTERN_C extern "C" - #else - #define __PYX_EXTERN_C extern - #endif -#endif - -#define __PYX_HAVE__fontTools__pens__momentsPen -#define __PYX_HAVE_API__fontTools__pens__momentsPen -/* Early includes */ -#ifdef _OPENMP -#include <omp.h> -#endif /* _OPENMP */ - -#if defined(PYREX_WITHOUT_ASSERTIONS) && !defined(CYTHON_WITHOUT_ASSERTIONS) -#define CYTHON_WITHOUT_ASSERTIONS -#endif - -typedef struct {PyObject **p; const char *s; const Py_ssize_t n; const char* encoding; - const char is_unicode; const char is_str; const char intern; } __Pyx_StringTabEntry; - -#define __PYX_DEFAULT_STRING_ENCODING_IS_ASCII 0 -#define __PYX_DEFAULT_STRING_ENCODING_IS_UTF8 0 -#define __PYX_DEFAULT_STRING_ENCODING_IS_DEFAULT (PY_MAJOR_VERSION >= 3 && __PYX_DEFAULT_STRING_ENCODING_IS_UTF8) -#define __PYX_DEFAULT_STRING_ENCODING "" -#define __Pyx_PyObject_FromString __Pyx_PyBytes_FromString -#define __Pyx_PyObject_FromStringAndSize __Pyx_PyBytes_FromStringAndSize -#define __Pyx_uchar_cast(c) ((unsigned char)c) -#define __Pyx_long_cast(x) ((long)x) -#define __Pyx_fits_Py_ssize_t(v, type, is_signed) (\ - (sizeof(type) < sizeof(Py_ssize_t)) ||\ - (sizeof(type) > sizeof(Py_ssize_t) &&\ - likely(v < (type)PY_SSIZE_T_MAX ||\ - v == (type)PY_SSIZE_T_MAX) &&\ - 
(!is_signed || likely(v > (type)PY_SSIZE_T_MIN ||\ - v == (type)PY_SSIZE_T_MIN))) ||\ - (sizeof(type) == sizeof(Py_ssize_t) &&\ - (is_signed || likely(v < (type)PY_SSIZE_T_MAX ||\ - v == (type)PY_SSIZE_T_MAX))) ) -static CYTHON_INLINE int __Pyx_is_valid_index(Py_ssize_t i, Py_ssize_t limit) { - return (size_t) i < (size_t) limit; -} -#if defined (__cplusplus) && __cplusplus >= 201103L - #include <cstdlib> - #define __Pyx_sst_abs(value) std::abs(value) -#elif SIZEOF_INT >= SIZEOF_SIZE_T - #define __Pyx_sst_abs(value) abs(value) -#elif SIZEOF_LONG >= SIZEOF_SIZE_T - #define __Pyx_sst_abs(value) labs(value) -#elif defined (_MSC_VER) - #define __Pyx_sst_abs(value) ((Py_ssize_t)_abs64(value)) -#elif defined (__STDC_VERSION__) && __STDC_VERSION__ >= 199901L - #define __Pyx_sst_abs(value) llabs(value) -#elif defined (__GNUC__) - #define __Pyx_sst_abs(value) __builtin_llabs(value) -#else - #define __Pyx_sst_abs(value) ((value<0) ? -value : value) -#endif -static CYTHON_INLINE const char* __Pyx_PyObject_AsString(PyObject*); -static CYTHON_INLINE const char* __Pyx_PyObject_AsStringAndSize(PyObject*, Py_ssize_t* length); -#define __Pyx_PyByteArray_FromString(s) PyByteArray_FromStringAndSize((const char*)s, strlen((const char*)s)) -#define __Pyx_PyByteArray_FromStringAndSize(s, l) PyByteArray_FromStringAndSize((const char*)s, l) -#define __Pyx_PyBytes_FromString PyBytes_FromString -#define __Pyx_PyBytes_FromStringAndSize PyBytes_FromStringAndSize -static CYTHON_INLINE PyObject* __Pyx_PyUnicode_FromString(const char*); -#if PY_MAJOR_VERSION < 3 - #define __Pyx_PyStr_FromString __Pyx_PyBytes_FromString - #define __Pyx_PyStr_FromStringAndSize __Pyx_PyBytes_FromStringAndSize -#else - #define __Pyx_PyStr_FromString __Pyx_PyUnicode_FromString - #define __Pyx_PyStr_FromStringAndSize __Pyx_PyUnicode_FromStringAndSize -#endif -#define __Pyx_PyBytes_AsWritableString(s) ((char*) PyBytes_AS_STRING(s)) -#define __Pyx_PyBytes_AsWritableSString(s) ((signed char*) PyBytes_AS_STRING(s)) -#define __Pyx_PyBytes_AsWritableUString(s) ((unsigned char*) PyBytes_AS_STRING(s)) -#define __Pyx_PyBytes_AsString(s) ((const char*) PyBytes_AS_STRING(s)) -#define __Pyx_PyBytes_AsSString(s) ((const signed char*) PyBytes_AS_STRING(s)) -#define __Pyx_PyBytes_AsUString(s) ((const unsigned char*) PyBytes_AS_STRING(s)) -#define __Pyx_PyObject_AsWritableString(s) ((char*) __Pyx_PyObject_AsString(s)) -#define __Pyx_PyObject_AsWritableSString(s) ((signed char*) __Pyx_PyObject_AsString(s)) -#define __Pyx_PyObject_AsWritableUString(s) ((unsigned char*) __Pyx_PyObject_AsString(s)) -#define __Pyx_PyObject_AsSString(s) ((const signed char*) __Pyx_PyObject_AsString(s)) -#define __Pyx_PyObject_AsUString(s) ((const unsigned char*) __Pyx_PyObject_AsString(s)) -#define __Pyx_PyObject_FromCString(s) __Pyx_PyObject_FromString((const char*)s) -#define __Pyx_PyBytes_FromCString(s) __Pyx_PyBytes_FromString((const char*)s) -#define __Pyx_PyByteArray_FromCString(s) __Pyx_PyByteArray_FromString((const char*)s) -#define __Pyx_PyStr_FromCString(s) __Pyx_PyStr_FromString((const char*)s) -#define __Pyx_PyUnicode_FromCString(s) __Pyx_PyUnicode_FromString((const char*)s) -static CYTHON_INLINE size_t __Pyx_Py_UNICODE_strlen(const Py_UNICODE *u) { - const Py_UNICODE *u_end = u; - while (*u_end++) ; - return (size_t)(u_end - u - 1); -} -#define __Pyx_PyUnicode_FromUnicode(u) PyUnicode_FromUnicode(u, __Pyx_Py_UNICODE_strlen(u)) -#define __Pyx_PyUnicode_FromUnicodeAndLength PyUnicode_FromUnicode -#define __Pyx_PyUnicode_AsUnicode PyUnicode_AsUnicode -#define __Pyx_NewRef(obj) 
(Py_INCREF(obj), obj) -#define __Pyx_Owned_Py_None(b) __Pyx_NewRef(Py_None) -static CYTHON_INLINE PyObject * __Pyx_PyBool_FromLong(long b); -static CYTHON_INLINE int __Pyx_PyObject_IsTrue(PyObject*); -static CYTHON_INLINE int __Pyx_PyObject_IsTrueAndDecref(PyObject*); -static CYTHON_INLINE PyObject* __Pyx_PyNumber_IntOrLong(PyObject* x); -#define __Pyx_PySequence_Tuple(obj)\ - (likely(PyTuple_CheckExact(obj)) ? __Pyx_NewRef(obj) : PySequence_Tuple(obj)) -static CYTHON_INLINE Py_ssize_t __Pyx_PyIndex_AsSsize_t(PyObject*); -static CYTHON_INLINE PyObject * __Pyx_PyInt_FromSize_t(size_t); -static CYTHON_INLINE Py_hash_t __Pyx_PyIndex_AsHash_t(PyObject*); -#if CYTHON_ASSUME_SAFE_MACROS -#define __pyx_PyFloat_AsDouble(x) (PyFloat_CheckExact(x) ? PyFloat_AS_DOUBLE(x) : PyFloat_AsDouble(x)) -#else -#define __pyx_PyFloat_AsDouble(x) PyFloat_AsDouble(x) -#endif -#define __pyx_PyFloat_AsFloat(x) ((float) __pyx_PyFloat_AsDouble(x)) -#if PY_MAJOR_VERSION >= 3 -#define __Pyx_PyNumber_Int(x) (PyLong_CheckExact(x) ? __Pyx_NewRef(x) : PyNumber_Long(x)) -#else -#define __Pyx_PyNumber_Int(x) (PyInt_CheckExact(x) ? __Pyx_NewRef(x) : PyNumber_Int(x)) -#endif -#define __Pyx_PyNumber_Float(x) (PyFloat_CheckExact(x) ? __Pyx_NewRef(x) : PyNumber_Float(x)) -#if PY_MAJOR_VERSION < 3 && __PYX_DEFAULT_STRING_ENCODING_IS_ASCII -static int __Pyx_sys_getdefaultencoding_not_ascii; -static int __Pyx_init_sys_getdefaultencoding_params(void) { - PyObject* sys; - PyObject* default_encoding = NULL; - PyObject* ascii_chars_u = NULL; - PyObject* ascii_chars_b = NULL; - const char* default_encoding_c; - sys = PyImport_ImportModule("sys"); - if (!sys) goto bad; - default_encoding = PyObject_CallMethod(sys, (char*) "getdefaultencoding", NULL); - Py_DECREF(sys); - if (!default_encoding) goto bad; - default_encoding_c = PyBytes_AsString(default_encoding); - if (!default_encoding_c) goto bad; - if (strcmp(default_encoding_c, "ascii") == 0) { - __Pyx_sys_getdefaultencoding_not_ascii = 0; - } else { - char ascii_chars[128]; - int c; - for (c = 0; c < 128; c++) { - ascii_chars[c] = c; - } - __Pyx_sys_getdefaultencoding_not_ascii = 1; - ascii_chars_u = PyUnicode_DecodeASCII(ascii_chars, 128, NULL); - if (!ascii_chars_u) goto bad; - ascii_chars_b = PyUnicode_AsEncodedString(ascii_chars_u, default_encoding_c, NULL); - if (!ascii_chars_b || !PyBytes_Check(ascii_chars_b) || memcmp(ascii_chars, PyBytes_AS_STRING(ascii_chars_b), 128) != 0) { - PyErr_Format( - PyExc_ValueError, - "This module compiled with c_string_encoding=ascii, but default encoding '%.200s' is not a superset of ascii.", - default_encoding_c); - goto bad; - } - Py_DECREF(ascii_chars_u); - Py_DECREF(ascii_chars_b); - } - Py_DECREF(default_encoding); - return 0; -bad: - Py_XDECREF(default_encoding); - Py_XDECREF(ascii_chars_u); - Py_XDECREF(ascii_chars_b); - return -1; -} -#endif -#if __PYX_DEFAULT_STRING_ENCODING_IS_DEFAULT && PY_MAJOR_VERSION >= 3 -#define __Pyx_PyUnicode_FromStringAndSize(c_str, size) PyUnicode_DecodeUTF8(c_str, size, NULL) -#else -#define __Pyx_PyUnicode_FromStringAndSize(c_str, size) PyUnicode_Decode(c_str, size, __PYX_DEFAULT_STRING_ENCODING, NULL) -#if __PYX_DEFAULT_STRING_ENCODING_IS_DEFAULT -static char* __PYX_DEFAULT_STRING_ENCODING; -static int __Pyx_init_sys_getdefaultencoding_params(void) { - PyObject* sys; - PyObject* default_encoding = NULL; - char* default_encoding_c; - sys = PyImport_ImportModule("sys"); - if (!sys) goto bad; - default_encoding = PyObject_CallMethod(sys, (char*) (const char*) "getdefaultencoding", NULL); - Py_DECREF(sys); - if 
(!default_encoding) goto bad; - default_encoding_c = PyBytes_AsString(default_encoding); - if (!default_encoding_c) goto bad; - __PYX_DEFAULT_STRING_ENCODING = (char*) malloc(strlen(default_encoding_c) + 1); - if (!__PYX_DEFAULT_STRING_ENCODING) goto bad; - strcpy(__PYX_DEFAULT_STRING_ENCODING, default_encoding_c); - Py_DECREF(default_encoding); - return 0; -bad: - Py_XDECREF(default_encoding); - return -1; -} -#endif -#endif - - -/* Test for GCC > 2.95 */ -#if defined(__GNUC__) && (__GNUC__ > 2 || (__GNUC__ == 2 && (__GNUC_MINOR__ > 95))) - #define likely(x) __builtin_expect(!!(x), 1) - #define unlikely(x) __builtin_expect(!!(x), 0) -#else /* !__GNUC__ or GCC < 2.95 */ - #define likely(x) (x) - #define unlikely(x) (x) -#endif /* __GNUC__ */ -static CYTHON_INLINE void __Pyx_pretend_to_initialize(void* ptr) { (void)ptr; } - -static PyObject *__pyx_m = NULL; -static PyObject *__pyx_d; -static PyObject *__pyx_b; -static PyObject *__pyx_cython_runtime = NULL; -static PyObject *__pyx_empty_tuple; -static PyObject *__pyx_empty_bytes; -static PyObject *__pyx_empty_unicode; -static int __pyx_lineno; -static int __pyx_clineno = 0; -static const char * __pyx_cfilenm= __FILE__; -static const char *__pyx_filename; - - -static const char *__pyx_f[] = { - "Lib/fontTools/pens/momentsPen.py", -}; - -/*--- Type declarations ---*/ - -/* --- Runtime support code (head) --- */ -/* Refnanny.proto */ -#ifndef CYTHON_REFNANNY - #define CYTHON_REFNANNY 0 -#endif -#if CYTHON_REFNANNY - typedef struct { - void (*INCREF)(void*, PyObject*, int); - void (*DECREF)(void*, PyObject*, int); - void (*GOTREF)(void*, PyObject*, int); - void (*GIVEREF)(void*, PyObject*, int); - void* (*SetupContext)(const char*, int, const char*); - void (*FinishContext)(void**); - } __Pyx_RefNannyAPIStruct; - static __Pyx_RefNannyAPIStruct *__Pyx_RefNanny = NULL; - static __Pyx_RefNannyAPIStruct *__Pyx_RefNannyImportAPI(const char *modname); - #define __Pyx_RefNannyDeclarations void *__pyx_refnanny = NULL; -#ifdef WITH_THREAD - #define __Pyx_RefNannySetupContext(name, acquire_gil)\ - if (acquire_gil) {\ - PyGILState_STATE __pyx_gilstate_save = PyGILState_Ensure();\ - __pyx_refnanny = __Pyx_RefNanny->SetupContext((name), __LINE__, __FILE__);\ - PyGILState_Release(__pyx_gilstate_save);\ - } else {\ - __pyx_refnanny = __Pyx_RefNanny->SetupContext((name), __LINE__, __FILE__);\ - } -#else - #define __Pyx_RefNannySetupContext(name, acquire_gil)\ - __pyx_refnanny = __Pyx_RefNanny->SetupContext((name), __LINE__, __FILE__) -#endif - #define __Pyx_RefNannyFinishContext()\ - __Pyx_RefNanny->FinishContext(&__pyx_refnanny) - #define __Pyx_INCREF(r) __Pyx_RefNanny->INCREF(__pyx_refnanny, (PyObject *)(r), __LINE__) - #define __Pyx_DECREF(r) __Pyx_RefNanny->DECREF(__pyx_refnanny, (PyObject *)(r), __LINE__) - #define __Pyx_GOTREF(r) __Pyx_RefNanny->GOTREF(__pyx_refnanny, (PyObject *)(r), __LINE__) - #define __Pyx_GIVEREF(r) __Pyx_RefNanny->GIVEREF(__pyx_refnanny, (PyObject *)(r), __LINE__) - #define __Pyx_XINCREF(r) do { if((r) != NULL) {__Pyx_INCREF(r); }} while(0) - #define __Pyx_XDECREF(r) do { if((r) != NULL) {__Pyx_DECREF(r); }} while(0) - #define __Pyx_XGOTREF(r) do { if((r) != NULL) {__Pyx_GOTREF(r); }} while(0) - #define __Pyx_XGIVEREF(r) do { if((r) != NULL) {__Pyx_GIVEREF(r);}} while(0) -#else - #define __Pyx_RefNannyDeclarations - #define __Pyx_RefNannySetupContext(name, acquire_gil) - #define __Pyx_RefNannyFinishContext() - #define __Pyx_INCREF(r) Py_INCREF(r) - #define __Pyx_DECREF(r) Py_DECREF(r) - #define __Pyx_GOTREF(r) - #define 
__Pyx_GIVEREF(r) - #define __Pyx_XINCREF(r) Py_XINCREF(r) - #define __Pyx_XDECREF(r) Py_XDECREF(r) - #define __Pyx_XGOTREF(r) - #define __Pyx_XGIVEREF(r) -#endif -#define __Pyx_XDECREF_SET(r, v) do {\ - PyObject *tmp = (PyObject *) r;\ - r = v; __Pyx_XDECREF(tmp);\ - } while (0) -#define __Pyx_DECREF_SET(r, v) do {\ - PyObject *tmp = (PyObject *) r;\ - r = v; __Pyx_DECREF(tmp);\ - } while (0) -#define __Pyx_CLEAR(r) do { PyObject* tmp = ((PyObject*)(r)); r = NULL; __Pyx_DECREF(tmp);} while(0) -#define __Pyx_XCLEAR(r) do { if((r) != NULL) {PyObject* tmp = ((PyObject*)(r)); r = NULL; __Pyx_DECREF(tmp);}} while(0) - -/* PyObjectGetAttrStr.proto */ -#if CYTHON_USE_TYPE_SLOTS -static CYTHON_INLINE PyObject* __Pyx_PyObject_GetAttrStr(PyObject* obj, PyObject* attr_name); -#else -#define __Pyx_PyObject_GetAttrStr(o,n) PyObject_GetAttr(o,n) -#endif - -/* GetBuiltinName.proto */ -static PyObject *__Pyx_GetBuiltinName(PyObject *name); - -/* RaiseDoubleKeywords.proto */ -static void __Pyx_RaiseDoubleKeywordsError(const char* func_name, PyObject* kw_name); - -/* ParseKeywords.proto */ -static int __Pyx_ParseOptionalKeywords(PyObject *kwds, PyObject **argnames[],\ - PyObject *kwds2, PyObject *values[], Py_ssize_t num_pos_args,\ - const char* function_name); - -/* RaiseArgTupleInvalid.proto */ -static void __Pyx_RaiseArgtupleInvalid(const char* func_name, int exact, - Py_ssize_t num_min, Py_ssize_t num_max, Py_ssize_t num_found); - -/* PyDictVersioning.proto */ -#if CYTHON_USE_DICT_VERSIONS && CYTHON_USE_TYPE_SLOTS -#define __PYX_DICT_VERSION_INIT ((PY_UINT64_T) -1) -#define __PYX_GET_DICT_VERSION(dict) (((PyDictObject*)(dict))->ma_version_tag) -#define __PYX_UPDATE_DICT_CACHE(dict, value, cache_var, version_var)\ - (version_var) = __PYX_GET_DICT_VERSION(dict);\ - (cache_var) = (value); -#define __PYX_PY_DICT_LOOKUP_IF_MODIFIED(VAR, DICT, LOOKUP) {\ - static PY_UINT64_T __pyx_dict_version = 0;\ - static PyObject *__pyx_dict_cached_value = NULL;\ - if (likely(__PYX_GET_DICT_VERSION(DICT) == __pyx_dict_version)) {\ - (VAR) = __pyx_dict_cached_value;\ - } else {\ - (VAR) = __pyx_dict_cached_value = (LOOKUP);\ - __pyx_dict_version = __PYX_GET_DICT_VERSION(DICT);\ - }\ -} -static CYTHON_INLINE PY_UINT64_T __Pyx_get_tp_dict_version(PyObject *obj); -static CYTHON_INLINE PY_UINT64_T __Pyx_get_object_dict_version(PyObject *obj); -static CYTHON_INLINE int __Pyx_object_dict_version_matches(PyObject* obj, PY_UINT64_T tp_dict_version, PY_UINT64_T obj_dict_version); -#else -#define __PYX_GET_DICT_VERSION(dict) (0) -#define __PYX_UPDATE_DICT_CACHE(dict, value, cache_var, version_var) -#define __PYX_PY_DICT_LOOKUP_IF_MODIFIED(VAR, DICT, LOOKUP) (VAR) = (LOOKUP); -#endif - -/* GetModuleGlobalName.proto */ -#if CYTHON_USE_DICT_VERSIONS -#define __Pyx_GetModuleGlobalName(var, name) do {\ - static PY_UINT64_T __pyx_dict_version = 0;\ - static PyObject *__pyx_dict_cached_value = NULL;\ - (var) = (likely(__pyx_dict_version == __PYX_GET_DICT_VERSION(__pyx_d))) ?\ - (likely(__pyx_dict_cached_value) ? 
__Pyx_NewRef(__pyx_dict_cached_value) : __Pyx_GetBuiltinName(name)) :\ - __Pyx__GetModuleGlobalName(name, &__pyx_dict_version, &__pyx_dict_cached_value);\ -} while(0) -#define __Pyx_GetModuleGlobalNameUncached(var, name) do {\ - PY_UINT64_T __pyx_dict_version;\ - PyObject *__pyx_dict_cached_value;\ - (var) = __Pyx__GetModuleGlobalName(name, &__pyx_dict_version, &__pyx_dict_cached_value);\ -} while(0) -static PyObject *__Pyx__GetModuleGlobalName(PyObject *name, PY_UINT64_T *dict_version, PyObject **dict_cached_value); -#else -#define __Pyx_GetModuleGlobalName(var, name) (var) = __Pyx__GetModuleGlobalName(name) -#define __Pyx_GetModuleGlobalNameUncached(var, name) (var) = __Pyx__GetModuleGlobalName(name) -static CYTHON_INLINE PyObject *__Pyx__GetModuleGlobalName(PyObject *name); -#endif - -/* PyFunctionFastCall.proto */ -#if CYTHON_FAST_PYCALL -#define __Pyx_PyFunction_FastCall(func, args, nargs)\ - __Pyx_PyFunction_FastCallDict((func), (args), (nargs), NULL) -#if 1 || PY_VERSION_HEX < 0x030600B1 -static PyObject *__Pyx_PyFunction_FastCallDict(PyObject *func, PyObject **args, Py_ssize_t nargs, PyObject *kwargs); -#else -#define __Pyx_PyFunction_FastCallDict(func, args, nargs, kwargs) _PyFunction_FastCallDict(func, args, nargs, kwargs) -#endif -#define __Pyx_BUILD_ASSERT_EXPR(cond)\ - (sizeof(char [1 - 2*!(cond)]) - 1) -#ifndef Py_MEMBER_SIZE -#define Py_MEMBER_SIZE(type, member) sizeof(((type *)0)->member) -#endif -#if CYTHON_FAST_PYCALL - static size_t __pyx_pyframe_localsplus_offset = 0; - #include "frameobject.h" -#if PY_VERSION_HEX >= 0x030b00a6 - #ifndef Py_BUILD_CORE - #define Py_BUILD_CORE 1 - #endif - #include "internal/pycore_frame.h" -#endif - #define __Pxy_PyFrame_Initialize_Offsets()\ - ((void)__Pyx_BUILD_ASSERT_EXPR(sizeof(PyFrameObject) == offsetof(PyFrameObject, f_localsplus) + Py_MEMBER_SIZE(PyFrameObject, f_localsplus)),\ - (void)(__pyx_pyframe_localsplus_offset = ((size_t)PyFrame_Type.tp_basicsize) - Py_MEMBER_SIZE(PyFrameObject, f_localsplus))) - #define __Pyx_PyFrame_GetLocalsplus(frame)\ - (assert(__pyx_pyframe_localsplus_offset), (PyObject **)(((char *)(frame)) + __pyx_pyframe_localsplus_offset)) -#endif // CYTHON_FAST_PYCALL -#endif - -/* PyCFunctionFastCall.proto */ -#if CYTHON_FAST_PYCCALL -static CYTHON_INLINE PyObject *__Pyx_PyCFunction_FastCall(PyObject *func, PyObject **args, Py_ssize_t nargs); -#else -#define __Pyx_PyCFunction_FastCall(func, args, nargs) (assert(0), NULL) -#endif - -/* PyObjectCall.proto */ -#if CYTHON_COMPILING_IN_CPYTHON -static CYTHON_INLINE PyObject* __Pyx_PyObject_Call(PyObject *func, PyObject *arg, PyObject *kw); -#else -#define __Pyx_PyObject_Call(func, arg, kw) PyObject_Call(func, arg, kw) -#endif - -/* PyObjectSetAttrStr.proto */ -#if CYTHON_USE_TYPE_SLOTS -#define __Pyx_PyObject_DelAttrStr(o,n) __Pyx_PyObject_SetAttrStr(o, n, NULL) -static CYTHON_INLINE int __Pyx_PyObject_SetAttrStr(PyObject* obj, PyObject* attr_name, PyObject* value); -#else -#define __Pyx_PyObject_DelAttrStr(o,n) PyObject_DelAttr(o,n) -#define __Pyx_PyObject_SetAttrStr(o,n,v) PyObject_SetAttr(o,n,v) -#endif - -/* PyObjectCallMethO.proto */ -#if CYTHON_COMPILING_IN_CPYTHON -static CYTHON_INLINE PyObject* __Pyx_PyObject_CallMethO(PyObject *func, PyObject *arg); -#endif - -/* PyObjectCallNoArg.proto */ -#if CYTHON_COMPILING_IN_CPYTHON -static CYTHON_INLINE PyObject* __Pyx_PyObject_CallNoArg(PyObject *func); -#else -#define __Pyx_PyObject_CallNoArg(func) __Pyx_PyObject_Call(func, __pyx_empty_tuple, NULL) -#endif - -/* PyObjectCallOneArg.proto */ -static CYTHON_INLINE 
PyObject* __Pyx_PyObject_CallOneArg(PyObject *func, PyObject *arg); - -/* PyObjectCall2Args.proto */ -static CYTHON_UNUSED PyObject* __Pyx_PyObject_Call2Args(PyObject* function, PyObject* arg1, PyObject* arg2); - -/* PyThreadStateGet.proto */ -#if CYTHON_FAST_THREAD_STATE -#define __Pyx_PyThreadState_declare PyThreadState *__pyx_tstate; -#define __Pyx_PyThreadState_assign __pyx_tstate = __Pyx_PyThreadState_Current; -#define __Pyx_PyErr_Occurred() __pyx_tstate->curexc_type -#else -#define __Pyx_PyThreadState_declare -#define __Pyx_PyThreadState_assign -#define __Pyx_PyErr_Occurred() PyErr_Occurred() -#endif - -/* PyErrFetchRestore.proto */ -#if CYTHON_FAST_THREAD_STATE -#define __Pyx_PyErr_Clear() __Pyx_ErrRestore(NULL, NULL, NULL) -#define __Pyx_ErrRestoreWithState(type, value, tb) __Pyx_ErrRestoreInState(PyThreadState_GET(), type, value, tb) -#define __Pyx_ErrFetchWithState(type, value, tb) __Pyx_ErrFetchInState(PyThreadState_GET(), type, value, tb) -#define __Pyx_ErrRestore(type, value, tb) __Pyx_ErrRestoreInState(__pyx_tstate, type, value, tb) -#define __Pyx_ErrFetch(type, value, tb) __Pyx_ErrFetchInState(__pyx_tstate, type, value, tb) -static CYTHON_INLINE void __Pyx_ErrRestoreInState(PyThreadState *tstate, PyObject *type, PyObject *value, PyObject *tb); -static CYTHON_INLINE void __Pyx_ErrFetchInState(PyThreadState *tstate, PyObject **type, PyObject **value, PyObject **tb); -#if CYTHON_COMPILING_IN_CPYTHON -#define __Pyx_PyErr_SetNone(exc) (Py_INCREF(exc), __Pyx_ErrRestore((exc), NULL, NULL)) -#else -#define __Pyx_PyErr_SetNone(exc) PyErr_SetNone(exc) -#endif -#else -#define __Pyx_PyErr_Clear() PyErr_Clear() -#define __Pyx_PyErr_SetNone(exc) PyErr_SetNone(exc) -#define __Pyx_ErrRestoreWithState(type, value, tb) PyErr_Restore(type, value, tb) -#define __Pyx_ErrFetchWithState(type, value, tb) PyErr_Fetch(type, value, tb) -#define __Pyx_ErrRestoreInState(tstate, type, value, tb) PyErr_Restore(type, value, tb) -#define __Pyx_ErrFetchInState(tstate, type, value, tb) PyErr_Fetch(type, value, tb) -#define __Pyx_ErrRestore(type, value, tb) PyErr_Restore(type, value, tb) -#define __Pyx_ErrFetch(type, value, tb) PyErr_Fetch(type, value, tb) -#endif - -/* RaiseException.proto */ -static void __Pyx_Raise(PyObject *type, PyObject *value, PyObject *tb, PyObject *cause); - -/* RaiseTooManyValuesToUnpack.proto */ -static CYTHON_INLINE void __Pyx_RaiseTooManyValuesError(Py_ssize_t expected); - -/* RaiseNeedMoreValuesToUnpack.proto */ -static CYTHON_INLINE void __Pyx_RaiseNeedMoreValuesError(Py_ssize_t index); - -/* IterFinish.proto */ -static CYTHON_INLINE int __Pyx_IterFinish(void); - -/* UnpackItemEndCheck.proto */ -static int __Pyx_IternextUnpackEndCheck(PyObject *retval, Py_ssize_t expected); - -/* Import.proto */ -static PyObject *__Pyx_Import(PyObject *name, PyObject *from_list, int level); - -/* ImportFrom.proto */ -static PyObject* __Pyx_ImportFrom(PyObject* module, PyObject* name); - -/* GetTopmostException.proto */ -#if CYTHON_USE_EXC_INFO_STACK -static _PyErr_StackItem * __Pyx_PyErr_GetTopmostException(PyThreadState *tstate); -#endif - -/* SaveResetException.proto */ -#if CYTHON_FAST_THREAD_STATE -#define __Pyx_ExceptionSave(type, value, tb) __Pyx__ExceptionSave(__pyx_tstate, type, value, tb) -static CYTHON_INLINE void __Pyx__ExceptionSave(PyThreadState *tstate, PyObject **type, PyObject **value, PyObject **tb); -#define __Pyx_ExceptionReset(type, value, tb) __Pyx__ExceptionReset(__pyx_tstate, type, value, tb) -static CYTHON_INLINE void __Pyx__ExceptionReset(PyThreadState *tstate, PyObject 
*type, PyObject *value, PyObject *tb); -#else -#define __Pyx_ExceptionSave(type, value, tb) PyErr_GetExcInfo(type, value, tb) -#define __Pyx_ExceptionReset(type, value, tb) PyErr_SetExcInfo(type, value, tb) -#endif - -/* PyErrExceptionMatches.proto */ -#if CYTHON_FAST_THREAD_STATE -#define __Pyx_PyErr_ExceptionMatches(err) __Pyx_PyErr_ExceptionMatchesInState(__pyx_tstate, err) -static CYTHON_INLINE int __Pyx_PyErr_ExceptionMatchesInState(PyThreadState* tstate, PyObject* err); -#else -#define __Pyx_PyErr_ExceptionMatches(err) PyErr_ExceptionMatches(err) -#endif - -/* GetException.proto */ -#if CYTHON_FAST_THREAD_STATE -#define __Pyx_GetException(type, value, tb) __Pyx__GetException(__pyx_tstate, type, value, tb) -static int __Pyx__GetException(PyThreadState *tstate, PyObject **type, PyObject **value, PyObject **tb); -#else -static int __Pyx_GetException(PyObject **type, PyObject **value, PyObject **tb); -#endif - -/* CalculateMetaclass.proto */ -static PyObject *__Pyx_CalculateMetaclass(PyTypeObject *metaclass, PyObject *bases); - -/* FetchCommonType.proto */ -static PyTypeObject* __Pyx_FetchCommonType(PyTypeObject* type); - -/* CythonFunctionShared.proto */ -#define __Pyx_CyFunction_USED 1 -#define __Pyx_CYFUNCTION_STATICMETHOD 0x01 -#define __Pyx_CYFUNCTION_CLASSMETHOD 0x02 -#define __Pyx_CYFUNCTION_CCLASS 0x04 -#define __Pyx_CyFunction_GetClosure(f)\ - (((__pyx_CyFunctionObject *) (f))->func_closure) -#define __Pyx_CyFunction_GetClassObj(f)\ - (((__pyx_CyFunctionObject *) (f))->func_classobj) -#define __Pyx_CyFunction_Defaults(type, f)\ - ((type *)(((__pyx_CyFunctionObject *) (f))->defaults)) -#define __Pyx_CyFunction_SetDefaultsGetter(f, g)\ - ((__pyx_CyFunctionObject *) (f))->defaults_getter = (g) -typedef struct { - PyCFunctionObject func; -#if PY_VERSION_HEX < 0x030500A0 - PyObject *func_weakreflist; -#endif - PyObject *func_dict; - PyObject *func_name; - PyObject *func_qualname; - PyObject *func_doc; - PyObject *func_globals; - PyObject *func_code; - PyObject *func_closure; - PyObject *func_classobj; - void *defaults; - int defaults_pyobjects; - size_t defaults_size; // used by FusedFunction for copying defaults - int flags; - PyObject *defaults_tuple; - PyObject *defaults_kwdict; - PyObject *(*defaults_getter)(PyObject *); - PyObject *func_annotations; -} __pyx_CyFunctionObject; -static PyTypeObject *__pyx_CyFunctionType = 0; -#define __Pyx_CyFunction_Check(obj) (__Pyx_TypeCheck(obj, __pyx_CyFunctionType)) -static PyObject *__Pyx_CyFunction_Init(__pyx_CyFunctionObject* op, PyMethodDef *ml, - int flags, PyObject* qualname, - PyObject *self, - PyObject *module, PyObject *globals, - PyObject* code); -static CYTHON_INLINE void *__Pyx_CyFunction_InitDefaults(PyObject *m, - size_t size, - int pyobjects); -static CYTHON_INLINE void __Pyx_CyFunction_SetDefaultsTuple(PyObject *m, - PyObject *tuple); -static CYTHON_INLINE void __Pyx_CyFunction_SetDefaultsKwDict(PyObject *m, - PyObject *dict); -static CYTHON_INLINE void __Pyx_CyFunction_SetAnnotationsDict(PyObject *m, - PyObject *dict); -static int __pyx_CyFunction_init(void); - -/* CythonFunction.proto */ -static PyObject *__Pyx_CyFunction_New(PyMethodDef *ml, - int flags, PyObject* qualname, - PyObject *closure, - PyObject *module, PyObject *globals, - PyObject* code); - -/* SetNameInClass.proto */ -#if CYTHON_COMPILING_IN_CPYTHON && PY_VERSION_HEX >= 0x030500A1 -#define __Pyx_SetNameInClass(ns, name, value)\ - (likely(PyDict_CheckExact(ns)) ? 
_PyDict_SetItem_KnownHash(ns, name, value, ((PyASCIIObject *) name)->hash) : PyObject_SetItem(ns, name, value)) -#elif CYTHON_COMPILING_IN_CPYTHON -#define __Pyx_SetNameInClass(ns, name, value)\ - (likely(PyDict_CheckExact(ns)) ? PyDict_SetItem(ns, name, value) : PyObject_SetItem(ns, name, value)) -#else -#define __Pyx_SetNameInClass(ns, name, value) PyObject_SetItem(ns, name, value) -#endif - -/* Py3ClassCreate.proto */ -static PyObject *__Pyx_Py3MetaclassPrepare(PyObject *metaclass, PyObject *bases, PyObject *name, PyObject *qualname, - PyObject *mkw, PyObject *modname, PyObject *doc); -static PyObject *__Pyx_Py3ClassCreate(PyObject *metaclass, PyObject *name, PyObject *bases, PyObject *dict, - PyObject *mkw, int calculate_metaclass, int allow_py2_metaclass); - -/* IncludeStringH.proto */ -#include <string.h> - -/* BytesEquals.proto */ -static CYTHON_INLINE int __Pyx_PyBytes_Equals(PyObject* s1, PyObject* s2, int equals); - -/* UnicodeEquals.proto */ -static CYTHON_INLINE int __Pyx_PyUnicode_Equals(PyObject* s1, PyObject* s2, int equals); - -/* CLineInTraceback.proto */ -#ifdef CYTHON_CLINE_IN_TRACEBACK -#define __Pyx_CLineForTraceback(tstate, c_line) (((CYTHON_CLINE_IN_TRACEBACK)) ? c_line : 0) -#else -static int __Pyx_CLineForTraceback(PyThreadState *tstate, int c_line); -#endif - -/* CodeObjectCache.proto */ -typedef struct { - PyCodeObject* code_object; - int code_line; -} __Pyx_CodeObjectCacheEntry; -struct __Pyx_CodeObjectCache { - int count; - int max_count; - __Pyx_CodeObjectCacheEntry* entries; -}; -static struct __Pyx_CodeObjectCache __pyx_code_cache = {0,0,NULL}; -static int __pyx_bisect_code_objects(__Pyx_CodeObjectCacheEntry* entries, int count, int code_line); -static PyCodeObject *__pyx_find_code_object(int code_line); -static void __pyx_insert_code_object(int code_line, PyCodeObject* code_object); - -/* AddTraceback.proto */ -static void __Pyx_AddTraceback(const char *funcname, int c_line, - int py_line, const char *filename); - -/* GCCDiagnostics.proto */ -#if defined(__GNUC__) && (__GNUC__ > 4 || (__GNUC__ == 4 && __GNUC_MINOR__ >= 6)) -#define __Pyx_HAS_GCC_DIAGNOSTIC -#endif - -/* CIntToPy.proto */ -static CYTHON_INLINE PyObject* __Pyx_PyInt_From_long(long value); - -/* CIntFromPy.proto */ -static CYTHON_INLINE long __Pyx_PyInt_As_long(PyObject *); - -/* CIntFromPy.proto */ -static CYTHON_INLINE int __Pyx_PyInt_As_int(PyObject *); - -/* FastTypeChecks.proto */ -#if CYTHON_COMPILING_IN_CPYTHON -#define __Pyx_TypeCheck(obj, type) __Pyx_IsSubtype(Py_TYPE(obj), (PyTypeObject *)type) -static CYTHON_INLINE int __Pyx_IsSubtype(PyTypeObject *a, PyTypeObject *b); -static CYTHON_INLINE int __Pyx_PyErr_GivenExceptionMatches(PyObject *err, PyObject *type); -static CYTHON_INLINE int __Pyx_PyErr_GivenExceptionMatches2(PyObject *err, PyObject *type1, PyObject *type2); -#else -#define __Pyx_TypeCheck(obj, type) PyObject_TypeCheck(obj, (PyTypeObject *)type) -#define __Pyx_PyErr_GivenExceptionMatches(err, type) PyErr_GivenExceptionMatches(err, type) -#define __Pyx_PyErr_GivenExceptionMatches2(err, type1, type2) (PyErr_GivenExceptionMatches(err, type1) || PyErr_GivenExceptionMatches(err, type2)) -#endif -#define __Pyx_PyException_Check(obj) __Pyx_TypeCheck(obj, PyExc_Exception) - -/* CheckBinaryVersion.proto */ -static int __Pyx_check_binary_version(void); - -/* InitStrings.proto */ -static int __Pyx_InitStrings(__Pyx_StringTabEntry *t); - - -/* Module declarations from 'cython' */ - -/* Module declarations from 'fontTools.pens.momentsPen' */ -#define __Pyx_MODULE_NAME "fontTools.pens.momentsPen" 
-extern int __pyx_module_is_main_fontTools__pens__momentsPen; -int __pyx_module_is_main_fontTools__pens__momentsPen = 0; - -/* Implementation of 'fontTools.pens.momentsPen' */ -static PyObject *__pyx_builtin_AttributeError; -static PyObject *__pyx_builtin_ImportError; -static const char __pyx_k_x[] = "x"; -static const char __pyx_k_y[] = "y"; -static const char __pyx_k_p0[] = "p0"; -static const char __pyx_k_p1[] = "p1"; -static const char __pyx_k_p2[] = "p2"; -static const char __pyx_k_p3[] = "p3"; -static const char __pyx_k_r0[] = "r0"; -static const char __pyx_k_r1[] = "r1"; -static const char __pyx_k_r2[] = "r2"; -static const char __pyx_k_r3[] = "r3"; -static const char __pyx_k_r4[] = "r4"; -static const char __pyx_k_r5[] = "r5"; -static const char __pyx_k_r6[] = "r6"; -static const char __pyx_k_r7[] = "r7"; -static const char __pyx_k_r8[] = "r8"; -static const char __pyx_k_r9[] = "r9"; -static const char __pyx_k_x0[] = "x0"; -static const char __pyx_k_x1[] = "x1"; -static const char __pyx_k_x2[] = "x2"; -static const char __pyx_k_x3[] = "x3"; -static const char __pyx_k_y0[] = "y0"; -static const char __pyx_k_y1[] = "y1"; -static const char __pyx_k_y2[] = "y2"; -static const char __pyx_k_y3[] = "y3"; -static const char __pyx_k_all[] = "__all__"; -static const char __pyx_k_doc[] = "__doc__"; -static const char __pyx_k_r10[] = "r10"; -static const char __pyx_k_r11[] = "r11"; -static const char __pyx_k_r12[] = "r12"; -static const char __pyx_k_r13[] = "r13"; -static const char __pyx_k_r14[] = "r14"; -static const char __pyx_k_r15[] = "r15"; -static const char __pyx_k_r16[] = "r16"; -static const char __pyx_k_r17[] = "r17"; -static const char __pyx_k_r18[] = "r18"; -static const char __pyx_k_r19[] = "r19"; -static const char __pyx_k_r20[] = "r20"; -static const char __pyx_k_r21[] = "r21"; -static const char __pyx_k_r22[] = "r22"; -static const char __pyx_k_r23[] = "r23"; -static const char __pyx_k_r24[] = "r24"; -static const char __pyx_k_r25[] = "r25"; -static const char __pyx_k_r26[] = "r26"; -static const char __pyx_k_r27[] = "r27"; -static const char __pyx_k_r28[] = "r28"; -static const char __pyx_k_r29[] = "r29"; -static const char __pyx_k_r30[] = "r30"; -static const char __pyx_k_r31[] = "r31"; -static const char __pyx_k_r32[] = "r32"; -static const char __pyx_k_r33[] = "r33"; -static const char __pyx_k_r34[] = "r34"; -static const char __pyx_k_r35[] = "r35"; -static const char __pyx_k_r36[] = "r36"; -static const char __pyx_k_r37[] = "r37"; -static const char __pyx_k_r38[] = "r38"; -static const char __pyx_k_r39[] = "r39"; -static const char __pyx_k_r40[] = "r40"; -static const char __pyx_k_r41[] = "r41"; -static const char __pyx_k_r42[] = "r42"; -static const char __pyx_k_r43[] = "r43"; -static const char __pyx_k_r44[] = "r44"; -static const char __pyx_k_r45[] = "r45"; -static const char __pyx_k_r46[] = "r46"; -static const char __pyx_k_r47[] = "r47"; -static const char __pyx_k_r48[] = "r48"; -static const char __pyx_k_r49[] = "r49"; -static const char __pyx_k_r50[] = "r50"; -static const char __pyx_k_r51[] = "r51"; -static const char __pyx_k_r52[] = "r52"; -static const char __pyx_k_r53[] = "r53"; -static const char __pyx_k_r54[] = "r54"; -static const char __pyx_k_r55[] = "r55"; -static const char __pyx_k_r56[] = "r56"; -static const char __pyx_k_r57[] = "r57"; -static const char __pyx_k_r58[] = "r58"; -static const char __pyx_k_r59[] = "r59"; -static const char __pyx_k_r60[] = "r60"; -static const char __pyx_k_r61[] = "r61"; -static const char __pyx_k_r62[] = "r62"; -static 
const char __pyx_k_r63[] = "r63"; -static const char __pyx_k_r64[] = "r64"; -static const char __pyx_k_r65[] = "r65"; -static const char __pyx_k_r66[] = "r66"; -static const char __pyx_k_r67[] = "r67"; -static const char __pyx_k_r68[] = "r68"; -static const char __pyx_k_r69[] = "r69"; -static const char __pyx_k_r70[] = "r70"; -static const char __pyx_k_r71[] = "r71"; -static const char __pyx_k_r72[] = "r72"; -static const char __pyx_k_r73[] = "r73"; -static const char __pyx_k_r74[] = "r74"; -static const char __pyx_k_r75[] = "r75"; -static const char __pyx_k_r76[] = "r76"; -static const char __pyx_k_r77[] = "r77"; -static const char __pyx_k_r78[] = "r78"; -static const char __pyx_k_r79[] = "r79"; -static const char __pyx_k_r80[] = "r80"; -static const char __pyx_k_r81[] = "r81"; -static const char __pyx_k_r82[] = "r82"; -static const char __pyx_k_r83[] = "r83"; -static const char __pyx_k_r84[] = "r84"; -static const char __pyx_k_r85[] = "r85"; -static const char __pyx_k_r86[] = "r86"; -static const char __pyx_k_r87[] = "r87"; -static const char __pyx_k_r88[] = "r88"; -static const char __pyx_k_r89[] = "r89"; -static const char __pyx_k_r90[] = "r90"; -static const char __pyx_k_r91[] = "r91"; -static const char __pyx_k_r92[] = "r92"; -static const char __pyx_k_r93[] = "r93"; -static const char __pyx_k_r94[] = "r94"; -static const char __pyx_k_r95[] = "r95"; -static const char __pyx_k_r96[] = "r96"; -static const char __pyx_k_r97[] = "r97"; -static const char __pyx_k_r98[] = "r98"; -static const char __pyx_k_r99[] = "r99"; -static const char __pyx_k_area[] = "area"; -static const char __pyx_k_init[] = "__init__"; -static const char __pyx_k_main[] = "__main__"; -static const char __pyx_k_name[] = "__name__"; -static const char __pyx_k_r100[] = "r100"; -static const char __pyx_k_r101[] = "r101"; -static const char __pyx_k_r102[] = "r102"; -static const char __pyx_k_r103[] = "r103"; -static const char __pyx_k_r104[] = "r104"; -static const char __pyx_k_r105[] = "r105"; -static const char __pyx_k_r106[] = "r106"; -static const char __pyx_k_r107[] = "r107"; -static const char __pyx_k_r108[] = "r108"; -static const char __pyx_k_r109[] = "r109"; -static const char __pyx_k_r110[] = "r110"; -static const char __pyx_k_r111[] = "r111"; -static const char __pyx_k_r112[] = "r112"; -static const char __pyx_k_r113[] = "r113"; -static const char __pyx_k_r114[] = "r114"; -static const char __pyx_k_r115[] = "r115"; -static const char __pyx_k_r116[] = "r116"; -static const char __pyx_k_r117[] = "r117"; -static const char __pyx_k_r118[] = "r118"; -static const char __pyx_k_r119[] = "r119"; -static const char __pyx_k_r120[] = "r120"; -static const char __pyx_k_r121[] = "r121"; -static const char __pyx_k_r122[] = "r122"; -static const char __pyx_k_r123[] = "r123"; -static const char __pyx_k_r124[] = "r124"; -static const char __pyx_k_r125[] = "r125"; -static const char __pyx_k_r126[] = "r126"; -static const char __pyx_k_r127[] = "r127"; -static const char __pyx_k_r128[] = "r128"; -static const char __pyx_k_r129[] = "r129"; -static const char __pyx_k_r130[] = "r130"; -static const char __pyx_k_r131[] = "r131"; -static const char __pyx_k_r132[] = "r132"; -static const char __pyx_k_self[] = "self"; -static const char __pyx_k_test[] = "__test__"; -static const char __pyx_k_cython[] = "cython"; -static const char __pyx_k_import[] = "__import__"; -static const char __pyx_k_lineTo[] = "_lineTo"; -static const char __pyx_k_module[] = "__module__"; -static const char __pyx_k_moveTo[] = "_moveTo"; -static const char 
__pyx_k_BasePen[] = "BasePen"; -static const char __pyx_k_endPath[] = "_endPath"; -static const char __pyx_k_momentX[] = "momentX"; -static const char __pyx_k_momentY[] = "momentY"; -static const char __pyx_k_prepare[] = "__prepare__"; -static const char __pyx_k_COMPILED[] = "COMPILED"; -static const char __pyx_k_glyphset[] = "glyphset"; -static const char __pyx_k_momentXX[] = "momentXX"; -static const char __pyx_k_momentXY[] = "momentXY"; -static const char __pyx_k_momentYY[] = "momentYY"; -static const char __pyx_k_qualname[] = "__qualname__"; -static const char __pyx_k_closePath[] = "_closePath"; -static const char __pyx_k_metaclass[] = "__metaclass__"; -static const char __pyx_k_MomentsPen[] = "MomentsPen"; -static const char __pyx_k_curveToOne[] = "_curveToOne"; -static const char __pyx_k_ImportError[] = "ImportError"; -static const char __pyx_k_qCurveToOne[] = "_qCurveToOne"; -static const char __pyx_k_printGreenPen[] = "printGreenPen"; -static const char __pyx_k_AttributeError[] = "AttributeError"; -static const char __pyx_k_fontTools_misc[] = "fontTools.misc"; -static const char __pyx_k_getCurrentPoint[] = "_getCurrentPoint"; -static const char __pyx_k_OpenContourError[] = "OpenContourError"; -static const char __pyx_k_MomentsPen___init[] = "MomentsPen.__init__"; -static const char __pyx_k_MomentsPen__lineTo[] = "MomentsPen._lineTo"; -static const char __pyx_k_MomentsPen__moveTo[] = "MomentsPen._moveTo"; -static const char __pyx_k_cline_in_traceback[] = "cline_in_traceback"; -static const char __pyx_k_MomentsPen__endPath[] = "MomentsPen._endPath"; -static const char __pyx_k_MomentsPen__closePath[] = "MomentsPen._closePath"; -static const char __pyx_k_MomentsPen__curveToOne[] = "MomentsPen._curveToOne"; -static const char __pyx_k_MomentsPen__startPoint[] = "_MomentsPen__startPoint"; -static const char __pyx_k_fontTools_misc_symfont[] = "fontTools.misc.symfont"; -static const char __pyx_k_fontTools_pens_basePen[] = "fontTools.pens.basePen"; -static const char __pyx_k_MomentsPen__qCurveToOne[] = "MomentsPen._qCurveToOne"; -static const char __pyx_k_fontTools_pens_momentsPen[] = "fontTools.pens.momentsPen"; -static const char __pyx_k_Green_theorem_is_not_defined_on[] = "Green theorem is not defined on open contours."; -static const char __pyx_k_Lib_fontTools_pens_momentsPen_py[] = "Lib/fontTools/pens/momentsPen.py"; -static PyObject *__pyx_n_s_AttributeError; -static PyObject *__pyx_n_s_BasePen; -static PyObject *__pyx_n_s_COMPILED; -static PyObject *__pyx_kp_u_Green_theorem_is_not_defined_on; -static PyObject *__pyx_n_s_ImportError; -static PyObject *__pyx_kp_s_Lib_fontTools_pens_momentsPen_py; -static PyObject *__pyx_n_s_MomentsPen; -static PyObject *__pyx_n_u_MomentsPen; -static PyObject *__pyx_n_s_MomentsPen___init; -static PyObject *__pyx_n_s_MomentsPen__closePath; -static PyObject *__pyx_n_s_MomentsPen__curveToOne; -static PyObject *__pyx_n_s_MomentsPen__endPath; -static PyObject *__pyx_n_s_MomentsPen__lineTo; -static PyObject *__pyx_n_s_MomentsPen__moveTo; -static PyObject *__pyx_n_s_MomentsPen__qCurveToOne; -static PyObject *__pyx_n_s_MomentsPen__startPoint; -static PyObject *__pyx_n_s_OpenContourError; -static PyObject *__pyx_n_s_all; -static PyObject *__pyx_n_s_area; -static PyObject *__pyx_n_u_area; -static PyObject *__pyx_n_s_cline_in_traceback; -static PyObject *__pyx_n_s_closePath; -static PyObject *__pyx_n_s_curveToOne; -static PyObject *__pyx_n_s_cython; -static PyObject *__pyx_n_s_doc; -static PyObject *__pyx_n_s_endPath; -static PyObject *__pyx_n_s_fontTools_misc; 
-static PyObject *__pyx_n_s_fontTools_misc_symfont; -static PyObject *__pyx_n_s_fontTools_pens_basePen; -static PyObject *__pyx_n_s_fontTools_pens_momentsPen; -static PyObject *__pyx_n_s_getCurrentPoint; -static PyObject *__pyx_n_s_glyphset; -static PyObject *__pyx_n_s_import; -static PyObject *__pyx_n_s_init; -static PyObject *__pyx_n_s_lineTo; -static PyObject *__pyx_n_s_main; -static PyObject *__pyx_n_u_main; -static PyObject *__pyx_n_s_metaclass; -static PyObject *__pyx_n_s_module; -static PyObject *__pyx_n_s_momentX; -static PyObject *__pyx_n_u_momentX; -static PyObject *__pyx_n_s_momentXX; -static PyObject *__pyx_n_u_momentXX; -static PyObject *__pyx_n_s_momentXY; -static PyObject *__pyx_n_u_momentXY; -static PyObject *__pyx_n_s_momentY; -static PyObject *__pyx_n_u_momentY; -static PyObject *__pyx_n_s_momentYY; -static PyObject *__pyx_n_u_momentYY; -static PyObject *__pyx_n_s_moveTo; -static PyObject *__pyx_n_s_name; -static PyObject *__pyx_n_s_p0; -static PyObject *__pyx_n_s_p1; -static PyObject *__pyx_n_s_p2; -static PyObject *__pyx_n_s_p3; -static PyObject *__pyx_n_s_prepare; -static PyObject *__pyx_n_s_printGreenPen; -static PyObject *__pyx_n_s_qCurveToOne; -static PyObject *__pyx_n_s_qualname; -static PyObject *__pyx_n_s_r0; -static PyObject *__pyx_n_s_r1; -static PyObject *__pyx_n_s_r10; -static PyObject *__pyx_n_s_r100; -static PyObject *__pyx_n_s_r101; -static PyObject *__pyx_n_s_r102; -static PyObject *__pyx_n_s_r103; -static PyObject *__pyx_n_s_r104; -static PyObject *__pyx_n_s_r105; -static PyObject *__pyx_n_s_r106; -static PyObject *__pyx_n_s_r107; -static PyObject *__pyx_n_s_r108; -static PyObject *__pyx_n_s_r109; -static PyObject *__pyx_n_s_r11; -static PyObject *__pyx_n_s_r110; -static PyObject *__pyx_n_s_r111; -static PyObject *__pyx_n_s_r112; -static PyObject *__pyx_n_s_r113; -static PyObject *__pyx_n_s_r114; -static PyObject *__pyx_n_s_r115; -static PyObject *__pyx_n_s_r116; -static PyObject *__pyx_n_s_r117; -static PyObject *__pyx_n_s_r118; -static PyObject *__pyx_n_s_r119; -static PyObject *__pyx_n_s_r12; -static PyObject *__pyx_n_s_r120; -static PyObject *__pyx_n_s_r121; -static PyObject *__pyx_n_s_r122; -static PyObject *__pyx_n_s_r123; -static PyObject *__pyx_n_s_r124; -static PyObject *__pyx_n_s_r125; -static PyObject *__pyx_n_s_r126; -static PyObject *__pyx_n_s_r127; -static PyObject *__pyx_n_s_r128; -static PyObject *__pyx_n_s_r129; -static PyObject *__pyx_n_s_r13; -static PyObject *__pyx_n_s_r130; -static PyObject *__pyx_n_s_r131; -static PyObject *__pyx_n_s_r132; -static PyObject *__pyx_n_s_r14; -static PyObject *__pyx_n_s_r15; -static PyObject *__pyx_n_s_r16; -static PyObject *__pyx_n_s_r17; -static PyObject *__pyx_n_s_r18; -static PyObject *__pyx_n_s_r19; -static PyObject *__pyx_n_s_r2; -static PyObject *__pyx_n_s_r20; -static PyObject *__pyx_n_s_r21; -static PyObject *__pyx_n_s_r22; -static PyObject *__pyx_n_s_r23; -static PyObject *__pyx_n_s_r24; -static PyObject *__pyx_n_s_r25; -static PyObject *__pyx_n_s_r26; -static PyObject *__pyx_n_s_r27; -static PyObject *__pyx_n_s_r28; -static PyObject *__pyx_n_s_r29; -static PyObject *__pyx_n_s_r3; -static PyObject *__pyx_n_s_r30; -static PyObject *__pyx_n_s_r31; -static PyObject *__pyx_n_s_r32; -static PyObject *__pyx_n_s_r33; -static PyObject *__pyx_n_s_r34; -static PyObject *__pyx_n_s_r35; -static PyObject *__pyx_n_s_r36; -static PyObject *__pyx_n_s_r37; -static PyObject *__pyx_n_s_r38; -static PyObject *__pyx_n_s_r39; -static PyObject *__pyx_n_s_r4; -static PyObject *__pyx_n_s_r40; -static PyObject 
*__pyx_n_s_r41; -static PyObject *__pyx_n_s_r42; -static PyObject *__pyx_n_s_r43; -static PyObject *__pyx_n_s_r44; -static PyObject *__pyx_n_s_r45; -static PyObject *__pyx_n_s_r46; -static PyObject *__pyx_n_s_r47; -static PyObject *__pyx_n_s_r48; -static PyObject *__pyx_n_s_r49; -static PyObject *__pyx_n_s_r5; -static PyObject *__pyx_n_s_r50; -static PyObject *__pyx_n_s_r51; -static PyObject *__pyx_n_s_r52; -static PyObject *__pyx_n_s_r53; -static PyObject *__pyx_n_s_r54; -static PyObject *__pyx_n_s_r55; -static PyObject *__pyx_n_s_r56; -static PyObject *__pyx_n_s_r57; -static PyObject *__pyx_n_s_r58; -static PyObject *__pyx_n_s_r59; -static PyObject *__pyx_n_s_r6; -static PyObject *__pyx_n_s_r60; -static PyObject *__pyx_n_s_r61; -static PyObject *__pyx_n_s_r62; -static PyObject *__pyx_n_s_r63; -static PyObject *__pyx_n_s_r64; -static PyObject *__pyx_n_s_r65; -static PyObject *__pyx_n_s_r66; -static PyObject *__pyx_n_s_r67; -static PyObject *__pyx_n_s_r68; -static PyObject *__pyx_n_s_r69; -static PyObject *__pyx_n_s_r7; -static PyObject *__pyx_n_s_r70; -static PyObject *__pyx_n_s_r71; -static PyObject *__pyx_n_s_r72; -static PyObject *__pyx_n_s_r73; -static PyObject *__pyx_n_s_r74; -static PyObject *__pyx_n_s_r75; -static PyObject *__pyx_n_s_r76; -static PyObject *__pyx_n_s_r77; -static PyObject *__pyx_n_s_r78; -static PyObject *__pyx_n_s_r79; -static PyObject *__pyx_n_s_r8; -static PyObject *__pyx_n_s_r80; -static PyObject *__pyx_n_s_r81; -static PyObject *__pyx_n_s_r82; -static PyObject *__pyx_n_s_r83; -static PyObject *__pyx_n_s_r84; -static PyObject *__pyx_n_s_r85; -static PyObject *__pyx_n_s_r86; -static PyObject *__pyx_n_s_r87; -static PyObject *__pyx_n_s_r88; -static PyObject *__pyx_n_s_r89; -static PyObject *__pyx_n_s_r9; -static PyObject *__pyx_n_s_r90; -static PyObject *__pyx_n_s_r91; -static PyObject *__pyx_n_s_r92; -static PyObject *__pyx_n_s_r93; -static PyObject *__pyx_n_s_r94; -static PyObject *__pyx_n_s_r95; -static PyObject *__pyx_n_s_r96; -static PyObject *__pyx_n_s_r97; -static PyObject *__pyx_n_s_r98; -static PyObject *__pyx_n_s_r99; -static PyObject *__pyx_n_s_self; -static PyObject *__pyx_n_s_test; -static PyObject *__pyx_n_s_x; -static PyObject *__pyx_n_s_x0; -static PyObject *__pyx_n_s_x1; -static PyObject *__pyx_n_s_x2; -static PyObject *__pyx_n_s_x3; -static PyObject *__pyx_n_s_y; -static PyObject *__pyx_n_s_y0; -static PyObject *__pyx_n_s_y1; -static PyObject *__pyx_n_s_y2; -static PyObject *__pyx_n_s_y3; -static PyObject *__pyx_pf_9fontTools_4pens_10momentsPen_10MomentsPen___init__(CYTHON_UNUSED PyObject *__pyx_self, PyObject *__pyx_v_self, PyObject *__pyx_v_glyphset); /* proto */ -static PyObject *__pyx_pf_9fontTools_4pens_10momentsPen_10MomentsPen_2_moveTo(CYTHON_UNUSED PyObject *__pyx_self, PyObject *__pyx_v_self, PyObject *__pyx_v_p0); /* proto */ -static PyObject *__pyx_pf_9fontTools_4pens_10momentsPen_10MomentsPen_4_closePath(CYTHON_UNUSED PyObject *__pyx_self, PyObject *__pyx_v_self); /* proto */ -static PyObject *__pyx_pf_9fontTools_4pens_10momentsPen_10MomentsPen_6_endPath(CYTHON_UNUSED PyObject *__pyx_self, PyObject *__pyx_v_self); /* proto */ -static PyObject *__pyx_pf_9fontTools_4pens_10momentsPen_10MomentsPen_8_lineTo(CYTHON_UNUSED PyObject *__pyx_self, PyObject *__pyx_v_self, PyObject *__pyx_v_p1); /* proto */ -static PyObject *__pyx_pf_9fontTools_4pens_10momentsPen_10MomentsPen_10_qCurveToOne(CYTHON_UNUSED PyObject *__pyx_self, PyObject *__pyx_v_self, PyObject *__pyx_v_p1, PyObject *__pyx_v_p2); /* proto */ -static PyObject 
*__pyx_pf_9fontTools_4pens_10momentsPen_10MomentsPen_12_curveToOne(CYTHON_UNUSED PyObject *__pyx_self, PyObject *__pyx_v_self, PyObject *__pyx_v_p1, PyObject *__pyx_v_p2, PyObject *__pyx_v_p3); /* proto */ -static PyObject *__pyx_int_0; -static PyObject *__pyx_int_1; -static PyObject *__pyx_int_2; -static PyObject *__pyx_tuple_; -static PyObject *__pyx_tuple__3; -static PyObject *__pyx_tuple__4; -static PyObject *__pyx_tuple__6; -static PyObject *__pyx_tuple__8; -static PyObject *__pyx_tuple__10; -static PyObject *__pyx_tuple__12; -static PyObject *__pyx_tuple__14; -static PyObject *__pyx_tuple__16; -static PyObject *__pyx_codeobj__2; -static PyObject *__pyx_codeobj__5; -static PyObject *__pyx_codeobj__7; -static PyObject *__pyx_codeobj__9; -static PyObject *__pyx_codeobj__11; -static PyObject *__pyx_codeobj__13; -static PyObject *__pyx_codeobj__15; -/* Late includes */ - -/* "fontTools/pens/momentsPen.py":18 - * - * class MomentsPen(BasePen): - * def __init__(self, glyphset=None): # <<<<<<<<<<<<<< - * BasePen.__init__(self, glyphset) - * - */ - -/* Python wrapper */ -static PyObject *__pyx_pw_9fontTools_4pens_10momentsPen_10MomentsPen_1__init__(PyObject *__pyx_self, PyObject *__pyx_args, PyObject *__pyx_kwds); /*proto*/ -static char __pyx_doc_9fontTools_4pens_10momentsPen_10MomentsPen___init__[] = "MomentsPen.__init__(self, glyphset=None)"; -static PyMethodDef __pyx_mdef_9fontTools_4pens_10momentsPen_10MomentsPen_1__init__ = {"__init__", (PyCFunction)(void*)(PyCFunctionWithKeywords)__pyx_pw_9fontTools_4pens_10momentsPen_10MomentsPen_1__init__, METH_VARARGS|METH_KEYWORDS, __pyx_doc_9fontTools_4pens_10momentsPen_10MomentsPen___init__}; -static PyObject *__pyx_pw_9fontTools_4pens_10momentsPen_10MomentsPen_1__init__(PyObject *__pyx_self, PyObject *__pyx_args, PyObject *__pyx_kwds) { - PyObject *__pyx_v_self = 0; - PyObject *__pyx_v_glyphset = 0; - int __pyx_lineno = 0; - const char *__pyx_filename = NULL; - int __pyx_clineno = 0; - PyObject *__pyx_r = 0; - __Pyx_RefNannyDeclarations - __Pyx_RefNannySetupContext("__init__ (wrapper)", 0); - { - static PyObject **__pyx_pyargnames[] = {&__pyx_n_s_self,&__pyx_n_s_glyphset,0}; - PyObject* values[2] = {0,0}; - values[1] = ((PyObject *)((PyObject *)Py_None)); - if (unlikely(__pyx_kwds)) { - Py_ssize_t kw_args; - const Py_ssize_t pos_args = PyTuple_GET_SIZE(__pyx_args); - switch (pos_args) { - case 2: values[1] = PyTuple_GET_ITEM(__pyx_args, 1); - CYTHON_FALLTHROUGH; - case 1: values[0] = PyTuple_GET_ITEM(__pyx_args, 0); - CYTHON_FALLTHROUGH; - case 0: break; - default: goto __pyx_L5_argtuple_error; - } - kw_args = PyDict_Size(__pyx_kwds); - switch (pos_args) { - case 0: - if (likely((values[0] = __Pyx_PyDict_GetItemStr(__pyx_kwds, __pyx_n_s_self)) != 0)) kw_args--; - else goto __pyx_L5_argtuple_error; - CYTHON_FALLTHROUGH; - case 1: - if (kw_args > 0) { - PyObject* value = __Pyx_PyDict_GetItemStr(__pyx_kwds, __pyx_n_s_glyphset); - if (value) { values[1] = value; kw_args--; } - } - } - if (unlikely(kw_args > 0)) { - if (unlikely(__Pyx_ParseOptionalKeywords(__pyx_kwds, __pyx_pyargnames, 0, values, pos_args, "__init__") < 0)) __PYX_ERR(0, 18, __pyx_L3_error) - } - } else { - switch (PyTuple_GET_SIZE(__pyx_args)) { - case 2: values[1] = PyTuple_GET_ITEM(__pyx_args, 1); - CYTHON_FALLTHROUGH; - case 1: values[0] = PyTuple_GET_ITEM(__pyx_args, 0); - break; - default: goto __pyx_L5_argtuple_error; - } - } - __pyx_v_self = values[0]; - __pyx_v_glyphset = values[1]; - } - goto __pyx_L4_argument_unpacking_done; - __pyx_L5_argtuple_error:; - 
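/* What follows is Cython's standard wrapper pattern, repeated for every
   method below: a `__pyx_pw_*` entry point registered as
   METH_VARARGS|METH_KEYWORDS (or METH_O for the methods that take only
   `self`) unpacks the arguments, reports bad call signatures via
   __Pyx_RaiseArgtupleInvalid, and then dispatches to the matching
   `__pyx_pf_*` implementation. Here it wraps
   MomentsPen.__init__(self, glyphset=None), with `glyphset` defaulting
   to Py_None. */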
__Pyx_RaiseArgtupleInvalid("__init__", 0, 1, 2, PyTuple_GET_SIZE(__pyx_args)); __PYX_ERR(0, 18, __pyx_L3_error) - __pyx_L3_error:; - __Pyx_AddTraceback("fontTools.pens.momentsPen.MomentsPen.__init__", __pyx_clineno, __pyx_lineno, __pyx_filename); - __Pyx_RefNannyFinishContext(); - return NULL; - __pyx_L4_argument_unpacking_done:; - __pyx_r = __pyx_pf_9fontTools_4pens_10momentsPen_10MomentsPen___init__(__pyx_self, __pyx_v_self, __pyx_v_glyphset); - - /* function exit code */ - __Pyx_RefNannyFinishContext(); - return __pyx_r; -} - -static PyObject *__pyx_pf_9fontTools_4pens_10momentsPen_10MomentsPen___init__(CYTHON_UNUSED PyObject *__pyx_self, PyObject *__pyx_v_self, PyObject *__pyx_v_glyphset) { - PyObject *__pyx_r = NULL; - __Pyx_RefNannyDeclarations - PyObject *__pyx_t_1 = NULL; - PyObject *__pyx_t_2 = NULL; - PyObject *__pyx_t_3 = NULL; - int __pyx_t_4; - PyObject *__pyx_t_5 = NULL; - int __pyx_lineno = 0; - const char *__pyx_filename = NULL; - int __pyx_clineno = 0; - __Pyx_RefNannySetupContext("__init__", 0); - - /* "fontTools/pens/momentsPen.py":19 - * class MomentsPen(BasePen): - * def __init__(self, glyphset=None): - * BasePen.__init__(self, glyphset) # <<<<<<<<<<<<<< - * - * self.area = 0 - */ - __Pyx_GetModuleGlobalName(__pyx_t_2, __pyx_n_s_BasePen); if (unlikely(!__pyx_t_2)) __PYX_ERR(0, 19, __pyx_L1_error) - __Pyx_GOTREF(__pyx_t_2); - __pyx_t_3 = __Pyx_PyObject_GetAttrStr(__pyx_t_2, __pyx_n_s_init); if (unlikely(!__pyx_t_3)) __PYX_ERR(0, 19, __pyx_L1_error) - __Pyx_GOTREF(__pyx_t_3); - __Pyx_DECREF(__pyx_t_2); __pyx_t_2 = 0; - __pyx_t_2 = NULL; - __pyx_t_4 = 0; - if (CYTHON_UNPACK_METHODS && unlikely(PyMethod_Check(__pyx_t_3))) { - __pyx_t_2 = PyMethod_GET_SELF(__pyx_t_3); - if (likely(__pyx_t_2)) { - PyObject* function = PyMethod_GET_FUNCTION(__pyx_t_3); - __Pyx_INCREF(__pyx_t_2); - __Pyx_INCREF(function); - __Pyx_DECREF_SET(__pyx_t_3, function); - __pyx_t_4 = 1; - } - } - #if CYTHON_FAST_PYCALL - if (PyFunction_Check(__pyx_t_3)) { - PyObject *__pyx_temp[3] = {__pyx_t_2, __pyx_v_self, __pyx_v_glyphset}; - __pyx_t_1 = __Pyx_PyFunction_FastCall(__pyx_t_3, __pyx_temp+1-__pyx_t_4, 2+__pyx_t_4); if (unlikely(!__pyx_t_1)) __PYX_ERR(0, 19, __pyx_L1_error) - __Pyx_XDECREF(__pyx_t_2); __pyx_t_2 = 0; - __Pyx_GOTREF(__pyx_t_1); - } else - #endif - #if CYTHON_FAST_PYCCALL - if (__Pyx_PyFastCFunction_Check(__pyx_t_3)) { - PyObject *__pyx_temp[3] = {__pyx_t_2, __pyx_v_self, __pyx_v_glyphset}; - __pyx_t_1 = __Pyx_PyCFunction_FastCall(__pyx_t_3, __pyx_temp+1-__pyx_t_4, 2+__pyx_t_4); if (unlikely(!__pyx_t_1)) __PYX_ERR(0, 19, __pyx_L1_error) - __Pyx_XDECREF(__pyx_t_2); __pyx_t_2 = 0; - __Pyx_GOTREF(__pyx_t_1); - } else - #endif - { - __pyx_t_5 = PyTuple_New(2+__pyx_t_4); if (unlikely(!__pyx_t_5)) __PYX_ERR(0, 19, __pyx_L1_error) - __Pyx_GOTREF(__pyx_t_5); - if (__pyx_t_2) { - __Pyx_GIVEREF(__pyx_t_2); PyTuple_SET_ITEM(__pyx_t_5, 0, __pyx_t_2); __pyx_t_2 = NULL; - } - __Pyx_INCREF(__pyx_v_self); - __Pyx_GIVEREF(__pyx_v_self); - PyTuple_SET_ITEM(__pyx_t_5, 0+__pyx_t_4, __pyx_v_self); - __Pyx_INCREF(__pyx_v_glyphset); - __Pyx_GIVEREF(__pyx_v_glyphset); - PyTuple_SET_ITEM(__pyx_t_5, 1+__pyx_t_4, __pyx_v_glyphset); - __pyx_t_1 = __Pyx_PyObject_Call(__pyx_t_3, __pyx_t_5, NULL); if (unlikely(!__pyx_t_1)) __PYX_ERR(0, 19, __pyx_L1_error) - __Pyx_GOTREF(__pyx_t_1); - __Pyx_DECREF(__pyx_t_5); __pyx_t_5 = 0; - } - __Pyx_DECREF(__pyx_t_3); __pyx_t_3 = 0; - __Pyx_DECREF(__pyx_t_1); __pyx_t_1 = 0; - - /* "fontTools/pens/momentsPen.py":21 - * BasePen.__init__(self, glyphset) - * - * self.area = 0 # 
<<<<<<<<<<<<<< - * self.momentX = 0 - * self.momentY = 0 - */ - if (__Pyx_PyObject_SetAttrStr(__pyx_v_self, __pyx_n_s_area, __pyx_int_0) < 0) __PYX_ERR(0, 21, __pyx_L1_error) - - /* "fontTools/pens/momentsPen.py":22 - * - * self.area = 0 - * self.momentX = 0 # <<<<<<<<<<<<<< - * self.momentY = 0 - * self.momentXX = 0 - */ - if (__Pyx_PyObject_SetAttrStr(__pyx_v_self, __pyx_n_s_momentX, __pyx_int_0) < 0) __PYX_ERR(0, 22, __pyx_L1_error) - - /* "fontTools/pens/momentsPen.py":23 - * self.area = 0 - * self.momentX = 0 - * self.momentY = 0 # <<<<<<<<<<<<<< - * self.momentXX = 0 - * self.momentXY = 0 - */ - if (__Pyx_PyObject_SetAttrStr(__pyx_v_self, __pyx_n_s_momentY, __pyx_int_0) < 0) __PYX_ERR(0, 23, __pyx_L1_error) - - /* "fontTools/pens/momentsPen.py":24 - * self.momentX = 0 - * self.momentY = 0 - * self.momentXX = 0 # <<<<<<<<<<<<<< - * self.momentXY = 0 - * self.momentYY = 0 - */ - if (__Pyx_PyObject_SetAttrStr(__pyx_v_self, __pyx_n_s_momentXX, __pyx_int_0) < 0) __PYX_ERR(0, 24, __pyx_L1_error) - - /* "fontTools/pens/momentsPen.py":25 - * self.momentY = 0 - * self.momentXX = 0 - * self.momentXY = 0 # <<<<<<<<<<<<<< - * self.momentYY = 0 - * - */ - if (__Pyx_PyObject_SetAttrStr(__pyx_v_self, __pyx_n_s_momentXY, __pyx_int_0) < 0) __PYX_ERR(0, 25, __pyx_L1_error) - - /* "fontTools/pens/momentsPen.py":26 - * self.momentXX = 0 - * self.momentXY = 0 - * self.momentYY = 0 # <<<<<<<<<<<<<< - * - * def _moveTo(self, p0): - */ - if (__Pyx_PyObject_SetAttrStr(__pyx_v_self, __pyx_n_s_momentYY, __pyx_int_0) < 0) __PYX_ERR(0, 26, __pyx_L1_error) - - /* "fontTools/pens/momentsPen.py":18 - * - * class MomentsPen(BasePen): - * def __init__(self, glyphset=None): # <<<<<<<<<<<<<< - * BasePen.__init__(self, glyphset) - * - */ - - /* function exit code */ - __pyx_r = Py_None; __Pyx_INCREF(Py_None); - goto __pyx_L0; - __pyx_L1_error:; - __Pyx_XDECREF(__pyx_t_1); - __Pyx_XDECREF(__pyx_t_2); - __Pyx_XDECREF(__pyx_t_3); - __Pyx_XDECREF(__pyx_t_5); - __Pyx_AddTraceback("fontTools.pens.momentsPen.MomentsPen.__init__", __pyx_clineno, __pyx_lineno, __pyx_filename); - __pyx_r = NULL; - __pyx_L0:; - __Pyx_XGIVEREF(__pyx_r); - __Pyx_RefNannyFinishContext(); - return __pyx_r; -} - -/* "fontTools/pens/momentsPen.py":28 - * self.momentYY = 0 - * - * def _moveTo(self, p0): # <<<<<<<<<<<<<< - * self.__startPoint = p0 - * - */ - -/* Python wrapper */ -static PyObject *__pyx_pw_9fontTools_4pens_10momentsPen_10MomentsPen_3_moveTo(PyObject *__pyx_self, PyObject *__pyx_args, PyObject *__pyx_kwds); /*proto*/ -static char __pyx_doc_9fontTools_4pens_10momentsPen_10MomentsPen_2_moveTo[] = "MomentsPen._moveTo(self, p0)"; -static PyMethodDef __pyx_mdef_9fontTools_4pens_10momentsPen_10MomentsPen_3_moveTo = {"_moveTo", (PyCFunction)(void*)(PyCFunctionWithKeywords)__pyx_pw_9fontTools_4pens_10momentsPen_10MomentsPen_3_moveTo, METH_VARARGS|METH_KEYWORDS, __pyx_doc_9fontTools_4pens_10momentsPen_10MomentsPen_2_moveTo}; -static PyObject *__pyx_pw_9fontTools_4pens_10momentsPen_10MomentsPen_3_moveTo(PyObject *__pyx_self, PyObject *__pyx_args, PyObject *__pyx_kwds) { - PyObject *__pyx_v_self = 0; - PyObject *__pyx_v_p0 = 0; - int __pyx_lineno = 0; - const char *__pyx_filename = NULL; - int __pyx_clineno = 0; - PyObject *__pyx_r = 0; - __Pyx_RefNannyDeclarations - __Pyx_RefNannySetupContext("_moveTo (wrapper)", 0); - { - static PyObject **__pyx_pyargnames[] = {&__pyx_n_s_self,&__pyx_n_s_p0,0}; - PyObject* values[2] = {0,0}; - if (unlikely(__pyx_kwds)) { - Py_ssize_t kw_args; - const Py_ssize_t pos_args = PyTuple_GET_SIZE(__pyx_args); - switch 
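/* For reference, the Python __init__ compiled above, reassembled from the
   embedded momentsPen.py source comments: a fresh MomentsPen starts with
   its area and all five moments at zero.

       class MomentsPen(BasePen):
           def __init__(self, glyphset=None):
               BasePen.__init__(self, glyphset)

               self.area = 0
               self.momentX = 0
               self.momentY = 0
               self.momentXX = 0
               self.momentXY = 0
               self.momentYY = 0
*/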
(pos_args) { - case 2: values[1] = PyTuple_GET_ITEM(__pyx_args, 1); - CYTHON_FALLTHROUGH; - case 1: values[0] = PyTuple_GET_ITEM(__pyx_args, 0); - CYTHON_FALLTHROUGH; - case 0: break; - default: goto __pyx_L5_argtuple_error; - } - kw_args = PyDict_Size(__pyx_kwds); - switch (pos_args) { - case 0: - if (likely((values[0] = __Pyx_PyDict_GetItemStr(__pyx_kwds, __pyx_n_s_self)) != 0)) kw_args--; - else goto __pyx_L5_argtuple_error; - CYTHON_FALLTHROUGH; - case 1: - if (likely((values[1] = __Pyx_PyDict_GetItemStr(__pyx_kwds, __pyx_n_s_p0)) != 0)) kw_args--; - else { - __Pyx_RaiseArgtupleInvalid("_moveTo", 1, 2, 2, 1); __PYX_ERR(0, 28, __pyx_L3_error) - } - } - if (unlikely(kw_args > 0)) { - if (unlikely(__Pyx_ParseOptionalKeywords(__pyx_kwds, __pyx_pyargnames, 0, values, pos_args, "_moveTo") < 0)) __PYX_ERR(0, 28, __pyx_L3_error) - } - } else if (PyTuple_GET_SIZE(__pyx_args) != 2) { - goto __pyx_L5_argtuple_error; - } else { - values[0] = PyTuple_GET_ITEM(__pyx_args, 0); - values[1] = PyTuple_GET_ITEM(__pyx_args, 1); - } - __pyx_v_self = values[0]; - __pyx_v_p0 = values[1]; - } - goto __pyx_L4_argument_unpacking_done; - __pyx_L5_argtuple_error:; - __Pyx_RaiseArgtupleInvalid("_moveTo", 1, 2, 2, PyTuple_GET_SIZE(__pyx_args)); __PYX_ERR(0, 28, __pyx_L3_error) - __pyx_L3_error:; - __Pyx_AddTraceback("fontTools.pens.momentsPen.MomentsPen._moveTo", __pyx_clineno, __pyx_lineno, __pyx_filename); - __Pyx_RefNannyFinishContext(); - return NULL; - __pyx_L4_argument_unpacking_done:; - __pyx_r = __pyx_pf_9fontTools_4pens_10momentsPen_10MomentsPen_2_moveTo(__pyx_self, __pyx_v_self, __pyx_v_p0); - - /* function exit code */ - __Pyx_RefNannyFinishContext(); - return __pyx_r; -} - -static PyObject *__pyx_pf_9fontTools_4pens_10momentsPen_10MomentsPen_2_moveTo(CYTHON_UNUSED PyObject *__pyx_self, PyObject *__pyx_v_self, PyObject *__pyx_v_p0) { - PyObject *__pyx_r = NULL; - __Pyx_RefNannyDeclarations - int __pyx_lineno = 0; - const char *__pyx_filename = NULL; - int __pyx_clineno = 0; - __Pyx_RefNannySetupContext("_moveTo", 0); - - /* "fontTools/pens/momentsPen.py":29 - * - * def _moveTo(self, p0): - * self.__startPoint = p0 # <<<<<<<<<<<<<< - * - * def _closePath(self): - */ - if (__Pyx_PyObject_SetAttrStr(__pyx_v_self, __pyx_n_s_MomentsPen__startPoint, __pyx_v_p0) < 0) __PYX_ERR(0, 29, __pyx_L1_error) - - /* "fontTools/pens/momentsPen.py":28 - * self.momentYY = 0 - * - * def _moveTo(self, p0): # <<<<<<<<<<<<<< - * self.__startPoint = p0 - * - */ - - /* function exit code */ - __pyx_r = Py_None; __Pyx_INCREF(Py_None); - goto __pyx_L0; - __pyx_L1_error:; - __Pyx_AddTraceback("fontTools.pens.momentsPen.MomentsPen._moveTo", __pyx_clineno, __pyx_lineno, __pyx_filename); - __pyx_r = NULL; - __pyx_L0:; - __Pyx_XGIVEREF(__pyx_r); - __Pyx_RefNannyFinishContext(); - return __pyx_r; -} - -/* "fontTools/pens/momentsPen.py":31 - * self.__startPoint = p0 - * - * def _closePath(self): # <<<<<<<<<<<<<< - * p0 = self._getCurrentPoint() - * if p0 != self.__startPoint: - */ - -/* Python wrapper */ -static PyObject *__pyx_pw_9fontTools_4pens_10momentsPen_10MomentsPen_5_closePath(PyObject *__pyx_self, PyObject *__pyx_v_self); /*proto*/ -static char __pyx_doc_9fontTools_4pens_10momentsPen_10MomentsPen_4_closePath[] = "MomentsPen._closePath(self)"; -static PyMethodDef __pyx_mdef_9fontTools_4pens_10momentsPen_10MomentsPen_5_closePath = {"_closePath", (PyCFunction)__pyx_pw_9fontTools_4pens_10momentsPen_10MomentsPen_5_closePath, METH_O, __pyx_doc_9fontTools_4pens_10momentsPen_10MomentsPen_4_closePath}; -static PyObject 
*__pyx_pw_9fontTools_4pens_10momentsPen_10MomentsPen_5_closePath(PyObject *__pyx_self, PyObject *__pyx_v_self) { - PyObject *__pyx_r = 0; - __Pyx_RefNannyDeclarations - __Pyx_RefNannySetupContext("_closePath (wrapper)", 0); - __pyx_r = __pyx_pf_9fontTools_4pens_10momentsPen_10MomentsPen_4_closePath(__pyx_self, ((PyObject *)__pyx_v_self)); - - /* function exit code */ - __Pyx_RefNannyFinishContext(); - return __pyx_r; -} - -static PyObject *__pyx_pf_9fontTools_4pens_10momentsPen_10MomentsPen_4_closePath(CYTHON_UNUSED PyObject *__pyx_self, PyObject *__pyx_v_self) { - PyObject *__pyx_v_p0 = NULL; - PyObject *__pyx_r = NULL; - __Pyx_RefNannyDeclarations - PyObject *__pyx_t_1 = NULL; - PyObject *__pyx_t_2 = NULL; - PyObject *__pyx_t_3 = NULL; - int __pyx_t_4; - PyObject *__pyx_t_5 = NULL; - int __pyx_lineno = 0; - const char *__pyx_filename = NULL; - int __pyx_clineno = 0; - __Pyx_RefNannySetupContext("_closePath", 0); - - /* "fontTools/pens/momentsPen.py":32 - * - * def _closePath(self): - * p0 = self._getCurrentPoint() # <<<<<<<<<<<<<< - * if p0 != self.__startPoint: - * self._lineTo(self.__startPoint) - */ - __pyx_t_2 = __Pyx_PyObject_GetAttrStr(__pyx_v_self, __pyx_n_s_getCurrentPoint); if (unlikely(!__pyx_t_2)) __PYX_ERR(0, 32, __pyx_L1_error) - __Pyx_GOTREF(__pyx_t_2); - __pyx_t_3 = NULL; - if (CYTHON_UNPACK_METHODS && likely(PyMethod_Check(__pyx_t_2))) { - __pyx_t_3 = PyMethod_GET_SELF(__pyx_t_2); - if (likely(__pyx_t_3)) { - PyObject* function = PyMethod_GET_FUNCTION(__pyx_t_2); - __Pyx_INCREF(__pyx_t_3); - __Pyx_INCREF(function); - __Pyx_DECREF_SET(__pyx_t_2, function); - } - } - __pyx_t_1 = (__pyx_t_3) ? __Pyx_PyObject_CallOneArg(__pyx_t_2, __pyx_t_3) : __Pyx_PyObject_CallNoArg(__pyx_t_2); - __Pyx_XDECREF(__pyx_t_3); __pyx_t_3 = 0; - if (unlikely(!__pyx_t_1)) __PYX_ERR(0, 32, __pyx_L1_error) - __Pyx_GOTREF(__pyx_t_1); - __Pyx_DECREF(__pyx_t_2); __pyx_t_2 = 0; - __pyx_v_p0 = __pyx_t_1; - __pyx_t_1 = 0; - - /* "fontTools/pens/momentsPen.py":33 - * def _closePath(self): - * p0 = self._getCurrentPoint() - * if p0 != self.__startPoint: # <<<<<<<<<<<<<< - * self._lineTo(self.__startPoint) - * - */ - __pyx_t_1 = __Pyx_PyObject_GetAttrStr(__pyx_v_self, __pyx_n_s_MomentsPen__startPoint); if (unlikely(!__pyx_t_1)) __PYX_ERR(0, 33, __pyx_L1_error) - __Pyx_GOTREF(__pyx_t_1); - __pyx_t_2 = PyObject_RichCompare(__pyx_v_p0, __pyx_t_1, Py_NE); __Pyx_XGOTREF(__pyx_t_2); if (unlikely(!__pyx_t_2)) __PYX_ERR(0, 33, __pyx_L1_error) - __Pyx_DECREF(__pyx_t_1); __pyx_t_1 = 0; - __pyx_t_4 = __Pyx_PyObject_IsTrue(__pyx_t_2); if (unlikely(__pyx_t_4 < 0)) __PYX_ERR(0, 33, __pyx_L1_error) - __Pyx_DECREF(__pyx_t_2); __pyx_t_2 = 0; - if (__pyx_t_4) { - - /* "fontTools/pens/momentsPen.py":34 - * p0 = self._getCurrentPoint() - * if p0 != self.__startPoint: - * self._lineTo(self.__startPoint) # <<<<<<<<<<<<<< - * - * def _endPath(self): - */ - __pyx_t_1 = __Pyx_PyObject_GetAttrStr(__pyx_v_self, __pyx_n_s_lineTo); if (unlikely(!__pyx_t_1)) __PYX_ERR(0, 34, __pyx_L1_error) - __Pyx_GOTREF(__pyx_t_1); - __pyx_t_3 = __Pyx_PyObject_GetAttrStr(__pyx_v_self, __pyx_n_s_MomentsPen__startPoint); if (unlikely(!__pyx_t_3)) __PYX_ERR(0, 34, __pyx_L1_error) - __Pyx_GOTREF(__pyx_t_3); - __pyx_t_5 = NULL; - if (CYTHON_UNPACK_METHODS && likely(PyMethod_Check(__pyx_t_1))) { - __pyx_t_5 = PyMethod_GET_SELF(__pyx_t_1); - if (likely(__pyx_t_5)) { - PyObject* function = PyMethod_GET_FUNCTION(__pyx_t_1); - __Pyx_INCREF(__pyx_t_5); - __Pyx_INCREF(function); - __Pyx_DECREF_SET(__pyx_t_1, function); - } - } - __pyx_t_2 = (__pyx_t_5) ? 
__Pyx_PyObject_Call2Args(__pyx_t_1, __pyx_t_5, __pyx_t_3) : __Pyx_PyObject_CallOneArg(__pyx_t_1, __pyx_t_3); - __Pyx_XDECREF(__pyx_t_5); __pyx_t_5 = 0; - __Pyx_DECREF(__pyx_t_3); __pyx_t_3 = 0; - if (unlikely(!__pyx_t_2)) __PYX_ERR(0, 34, __pyx_L1_error) - __Pyx_GOTREF(__pyx_t_2); - __Pyx_DECREF(__pyx_t_1); __pyx_t_1 = 0; - __Pyx_DECREF(__pyx_t_2); __pyx_t_2 = 0; - - /* "fontTools/pens/momentsPen.py":33 - * def _closePath(self): - * p0 = self._getCurrentPoint() - * if p0 != self.__startPoint: # <<<<<<<<<<<<<< - * self._lineTo(self.__startPoint) - * - */ - } - - /* "fontTools/pens/momentsPen.py":31 - * self.__startPoint = p0 - * - * def _closePath(self): # <<<<<<<<<<<<<< - * p0 = self._getCurrentPoint() - * if p0 != self.__startPoint: - */ - - /* function exit code */ - __pyx_r = Py_None; __Pyx_INCREF(Py_None); - goto __pyx_L0; - __pyx_L1_error:; - __Pyx_XDECREF(__pyx_t_1); - __Pyx_XDECREF(__pyx_t_2); - __Pyx_XDECREF(__pyx_t_3); - __Pyx_XDECREF(__pyx_t_5); - __Pyx_AddTraceback("fontTools.pens.momentsPen.MomentsPen._closePath", __pyx_clineno, __pyx_lineno, __pyx_filename); - __pyx_r = NULL; - __pyx_L0:; - __Pyx_XDECREF(__pyx_v_p0); - __Pyx_XGIVEREF(__pyx_r); - __Pyx_RefNannyFinishContext(); - return __pyx_r; -} - -/* "fontTools/pens/momentsPen.py":36 - * self._lineTo(self.__startPoint) - * - * def _endPath(self): # <<<<<<<<<<<<<< - * p0 = self._getCurrentPoint() - * if p0 != self.__startPoint: - */ - -/* Python wrapper */ -static PyObject *__pyx_pw_9fontTools_4pens_10momentsPen_10MomentsPen_7_endPath(PyObject *__pyx_self, PyObject *__pyx_v_self); /*proto*/ -static char __pyx_doc_9fontTools_4pens_10momentsPen_10MomentsPen_6_endPath[] = "MomentsPen._endPath(self)"; -static PyMethodDef __pyx_mdef_9fontTools_4pens_10momentsPen_10MomentsPen_7_endPath = {"_endPath", (PyCFunction)__pyx_pw_9fontTools_4pens_10momentsPen_10MomentsPen_7_endPath, METH_O, __pyx_doc_9fontTools_4pens_10momentsPen_10MomentsPen_6_endPath}; -static PyObject *__pyx_pw_9fontTools_4pens_10momentsPen_10MomentsPen_7_endPath(PyObject *__pyx_self, PyObject *__pyx_v_self) { - PyObject *__pyx_r = 0; - __Pyx_RefNannyDeclarations - __Pyx_RefNannySetupContext("_endPath (wrapper)", 0); - __pyx_r = __pyx_pf_9fontTools_4pens_10momentsPen_10MomentsPen_6_endPath(__pyx_self, ((PyObject *)__pyx_v_self)); - - /* function exit code */ - __Pyx_RefNannyFinishContext(); - return __pyx_r; -} - -static PyObject *__pyx_pf_9fontTools_4pens_10momentsPen_10MomentsPen_6_endPath(CYTHON_UNUSED PyObject *__pyx_self, PyObject *__pyx_v_self) { - PyObject *__pyx_v_p0 = NULL; - PyObject *__pyx_r = NULL; - __Pyx_RefNannyDeclarations - PyObject *__pyx_t_1 = NULL; - PyObject *__pyx_t_2 = NULL; - PyObject *__pyx_t_3 = NULL; - int __pyx_t_4; - int __pyx_lineno = 0; - const char *__pyx_filename = NULL; - int __pyx_clineno = 0; - __Pyx_RefNannySetupContext("_endPath", 0); - - /* "fontTools/pens/momentsPen.py":37 - * - * def _endPath(self): - * p0 = self._getCurrentPoint() # <<<<<<<<<<<<<< - * if p0 != self.__startPoint: - * # Green theorem is not defined on open contours. 
- */ - __pyx_t_2 = __Pyx_PyObject_GetAttrStr(__pyx_v_self, __pyx_n_s_getCurrentPoint); if (unlikely(!__pyx_t_2)) __PYX_ERR(0, 37, __pyx_L1_error) - __Pyx_GOTREF(__pyx_t_2); - __pyx_t_3 = NULL; - if (CYTHON_UNPACK_METHODS && likely(PyMethod_Check(__pyx_t_2))) { - __pyx_t_3 = PyMethod_GET_SELF(__pyx_t_2); - if (likely(__pyx_t_3)) { - PyObject* function = PyMethod_GET_FUNCTION(__pyx_t_2); - __Pyx_INCREF(__pyx_t_3); - __Pyx_INCREF(function); - __Pyx_DECREF_SET(__pyx_t_2, function); - } - } - __pyx_t_1 = (__pyx_t_3) ? __Pyx_PyObject_CallOneArg(__pyx_t_2, __pyx_t_3) : __Pyx_PyObject_CallNoArg(__pyx_t_2); - __Pyx_XDECREF(__pyx_t_3); __pyx_t_3 = 0; - if (unlikely(!__pyx_t_1)) __PYX_ERR(0, 37, __pyx_L1_error) - __Pyx_GOTREF(__pyx_t_1); - __Pyx_DECREF(__pyx_t_2); __pyx_t_2 = 0; - __pyx_v_p0 = __pyx_t_1; - __pyx_t_1 = 0; - - /* "fontTools/pens/momentsPen.py":38 - * def _endPath(self): - * p0 = self._getCurrentPoint() - * if p0 != self.__startPoint: # <<<<<<<<<<<<<< - * # Green theorem is not defined on open contours. - * raise OpenContourError("Green theorem is not defined on open contours.") - */ - __pyx_t_1 = __Pyx_PyObject_GetAttrStr(__pyx_v_self, __pyx_n_s_MomentsPen__startPoint); if (unlikely(!__pyx_t_1)) __PYX_ERR(0, 38, __pyx_L1_error) - __Pyx_GOTREF(__pyx_t_1); - __pyx_t_2 = PyObject_RichCompare(__pyx_v_p0, __pyx_t_1, Py_NE); __Pyx_XGOTREF(__pyx_t_2); if (unlikely(!__pyx_t_2)) __PYX_ERR(0, 38, __pyx_L1_error) - __Pyx_DECREF(__pyx_t_1); __pyx_t_1 = 0; - __pyx_t_4 = __Pyx_PyObject_IsTrue(__pyx_t_2); if (unlikely(__pyx_t_4 < 0)) __PYX_ERR(0, 38, __pyx_L1_error) - __Pyx_DECREF(__pyx_t_2); __pyx_t_2 = 0; - if (unlikely(__pyx_t_4)) { - - /* "fontTools/pens/momentsPen.py":40 - * if p0 != self.__startPoint: - * # Green theorem is not defined on open contours. - * raise OpenContourError("Green theorem is not defined on open contours.") # <<<<<<<<<<<<<< - * - * @cython.locals(r0=cython.double) - */ - __Pyx_GetModuleGlobalName(__pyx_t_1, __pyx_n_s_OpenContourError); if (unlikely(!__pyx_t_1)) __PYX_ERR(0, 40, __pyx_L1_error) - __Pyx_GOTREF(__pyx_t_1); - __pyx_t_3 = NULL; - if (CYTHON_UNPACK_METHODS && unlikely(PyMethod_Check(__pyx_t_1))) { - __pyx_t_3 = PyMethod_GET_SELF(__pyx_t_1); - if (likely(__pyx_t_3)) { - PyObject* function = PyMethod_GET_FUNCTION(__pyx_t_1); - __Pyx_INCREF(__pyx_t_3); - __Pyx_INCREF(function); - __Pyx_DECREF_SET(__pyx_t_1, function); - } - } - __pyx_t_2 = (__pyx_t_3) ? __Pyx_PyObject_Call2Args(__pyx_t_1, __pyx_t_3, __pyx_kp_u_Green_theorem_is_not_defined_on) : __Pyx_PyObject_CallOneArg(__pyx_t_1, __pyx_kp_u_Green_theorem_is_not_defined_on); - __Pyx_XDECREF(__pyx_t_3); __pyx_t_3 = 0; - if (unlikely(!__pyx_t_2)) __PYX_ERR(0, 40, __pyx_L1_error) - __Pyx_GOTREF(__pyx_t_2); - __Pyx_DECREF(__pyx_t_1); __pyx_t_1 = 0; - __Pyx_Raise(__pyx_t_2, 0, 0, 0); - __Pyx_DECREF(__pyx_t_2); __pyx_t_2 = 0; - __PYX_ERR(0, 40, __pyx_L1_error) - - /* "fontTools/pens/momentsPen.py":38 - * def _endPath(self): - * p0 = self._getCurrentPoint() - * if p0 != self.__startPoint: # <<<<<<<<<<<<<< - * # Green theorem is not defined on open contours. 
- * raise OpenContourError("Green theorem is not defined on open contours.") - */ - } - - /* "fontTools/pens/momentsPen.py":36 - * self._lineTo(self.__startPoint) - * - * def _endPath(self): # <<<<<<<<<<<<<< - * p0 = self._getCurrentPoint() - * if p0 != self.__startPoint: - */ - - /* function exit code */ - __pyx_r = Py_None; __Pyx_INCREF(Py_None); - goto __pyx_L0; - __pyx_L1_error:; - __Pyx_XDECREF(__pyx_t_1); - __Pyx_XDECREF(__pyx_t_2); - __Pyx_XDECREF(__pyx_t_3); - __Pyx_AddTraceback("fontTools.pens.momentsPen.MomentsPen._endPath", __pyx_clineno, __pyx_lineno, __pyx_filename); - __pyx_r = NULL; - __pyx_L0:; - __Pyx_XDECREF(__pyx_v_p0); - __Pyx_XGIVEREF(__pyx_r); - __Pyx_RefNannyFinishContext(); - return __pyx_r; -} - -/* "fontTools/pens/momentsPen.py":57 - * @cython.locals(x0=cython.double, y0=cython.double) - * @cython.locals(x1=cython.double, y1=cython.double) - * def _lineTo(self, p1): # <<<<<<<<<<<<<< - * x0, y0 = self._getCurrentPoint() - * x1, y1 = p1 - */ - -/* Python wrapper */ -static PyObject *__pyx_pw_9fontTools_4pens_10momentsPen_10MomentsPen_9_lineTo(PyObject *__pyx_self, PyObject *__pyx_args, PyObject *__pyx_kwds); /*proto*/ -static char __pyx_doc_9fontTools_4pens_10momentsPen_10MomentsPen_8_lineTo[] = "MomentsPen._lineTo(self, p1)"; -static PyMethodDef __pyx_mdef_9fontTools_4pens_10momentsPen_10MomentsPen_9_lineTo = {"_lineTo", (PyCFunction)(void*)(PyCFunctionWithKeywords)__pyx_pw_9fontTools_4pens_10momentsPen_10MomentsPen_9_lineTo, METH_VARARGS|METH_KEYWORDS, __pyx_doc_9fontTools_4pens_10momentsPen_10MomentsPen_8_lineTo}; -static PyObject *__pyx_pw_9fontTools_4pens_10momentsPen_10MomentsPen_9_lineTo(PyObject *__pyx_self, PyObject *__pyx_args, PyObject *__pyx_kwds) { - PyObject *__pyx_v_self = 0; - PyObject *__pyx_v_p1 = 0; - int __pyx_lineno = 0; - const char *__pyx_filename = NULL; - int __pyx_clineno = 0; - PyObject *__pyx_r = 0; - __Pyx_RefNannyDeclarations - __Pyx_RefNannySetupContext("_lineTo (wrapper)", 0); - { - static PyObject **__pyx_pyargnames[] = {&__pyx_n_s_self,&__pyx_n_s_p1,0}; - PyObject* values[2] = {0,0}; - if (unlikely(__pyx_kwds)) { - Py_ssize_t kw_args; - const Py_ssize_t pos_args = PyTuple_GET_SIZE(__pyx_args); - switch (pos_args) { - case 2: values[1] = PyTuple_GET_ITEM(__pyx_args, 1); - CYTHON_FALLTHROUGH; - case 1: values[0] = PyTuple_GET_ITEM(__pyx_args, 0); - CYTHON_FALLTHROUGH; - case 0: break; - default: goto __pyx_L5_argtuple_error; - } - kw_args = PyDict_Size(__pyx_kwds); - switch (pos_args) { - case 0: - if (likely((values[0] = __Pyx_PyDict_GetItemStr(__pyx_kwds, __pyx_n_s_self)) != 0)) kw_args--; - else goto __pyx_L5_argtuple_error; - CYTHON_FALLTHROUGH; - case 1: - if (likely((values[1] = __Pyx_PyDict_GetItemStr(__pyx_kwds, __pyx_n_s_p1)) != 0)) kw_args--; - else { - __Pyx_RaiseArgtupleInvalid("_lineTo", 1, 2, 2, 1); __PYX_ERR(0, 57, __pyx_L3_error) - } - } - if (unlikely(kw_args > 0)) { - if (unlikely(__Pyx_ParseOptionalKeywords(__pyx_kwds, __pyx_pyargnames, 0, values, pos_args, "_lineTo") < 0)) __PYX_ERR(0, 57, __pyx_L3_error) - } - } else if (PyTuple_GET_SIZE(__pyx_args) != 2) { - goto __pyx_L5_argtuple_error; - } else { - values[0] = PyTuple_GET_ITEM(__pyx_args, 0); - values[1] = PyTuple_GET_ITEM(__pyx_args, 1); - } - __pyx_v_self = values[0]; - __pyx_v_p1 = values[1]; - } - goto __pyx_L4_argument_unpacking_done; - __pyx_L5_argtuple_error:; - __Pyx_RaiseArgtupleInvalid("_lineTo", 1, 2, 2, PyTuple_GET_SIZE(__pyx_args)); __PYX_ERR(0, 57, __pyx_L3_error) - __pyx_L3_error:; - 
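/* The three bookkeeping methods compiled above, reassembled from the
   embedded momentsPen.py source comments: _moveTo records where the
   contour started, _closePath closes a not-yet-closed contour with an
   implicit line back to the start, and _endPath refuses open contours
   outright, because the Green-theorem integrals below are only valid on
   closed curves.

       def _moveTo(self, p0):
           self.__startPoint = p0

       def _closePath(self):
           p0 = self._getCurrentPoint()
           if p0 != self.__startPoint:
               self._lineTo(self.__startPoint)

       def _endPath(self):
           p0 = self._getCurrentPoint()
           if p0 != self.__startPoint:
               # Green theorem is not defined on open contours.
               raise OpenContourError("Green theorem is not defined on open contours.")
*/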
__Pyx_AddTraceback("fontTools.pens.momentsPen.MomentsPen._lineTo", __pyx_clineno, __pyx_lineno, __pyx_filename); - __Pyx_RefNannyFinishContext(); - return NULL; - __pyx_L4_argument_unpacking_done:; - __pyx_r = __pyx_pf_9fontTools_4pens_10momentsPen_10MomentsPen_8_lineTo(__pyx_self, __pyx_v_self, __pyx_v_p1); - - /* function exit code */ - __Pyx_RefNannyFinishContext(); - return __pyx_r; -} - -static PyObject *__pyx_pf_9fontTools_4pens_10momentsPen_10MomentsPen_8_lineTo(CYTHON_UNUSED PyObject *__pyx_self, PyObject *__pyx_v_self, PyObject *__pyx_v_p1) { - double __pyx_v_x1; - double __pyx_v_y1; - double __pyx_v_x0; - double __pyx_v_y0; - double __pyx_v_r12; - double __pyx_v_r11; - double __pyx_v_r10; - double __pyx_v_r9; - double __pyx_v_r8; - double __pyx_v_r7; - double __pyx_v_r6; - double __pyx_v_r5; - double __pyx_v_r4; - double __pyx_v_r3; - double __pyx_v_r2; - double __pyx_v_r1; - double __pyx_v_r0; - PyObject *__pyx_r = NULL; - __Pyx_RefNannyDeclarations - PyObject *__pyx_t_1 = NULL; - PyObject *__pyx_t_2 = NULL; - PyObject *__pyx_t_3 = NULL; - PyObject *__pyx_t_4 = NULL; - PyObject *(*__pyx_t_5)(PyObject *); - double __pyx_t_6; - double __pyx_t_7; - int __pyx_lineno = 0; - const char *__pyx_filename = NULL; - int __pyx_clineno = 0; - __Pyx_RefNannySetupContext("_lineTo", 0); - - /* "fontTools/pens/momentsPen.py":58 - * @cython.locals(x1=cython.double, y1=cython.double) - * def _lineTo(self, p1): - * x0, y0 = self._getCurrentPoint() # <<<<<<<<<<<<<< - * x1, y1 = p1 - * - */ - __pyx_t_2 = __Pyx_PyObject_GetAttrStr(__pyx_v_self, __pyx_n_s_getCurrentPoint); if (unlikely(!__pyx_t_2)) __PYX_ERR(0, 58, __pyx_L1_error) - __Pyx_GOTREF(__pyx_t_2); - __pyx_t_3 = NULL; - if (CYTHON_UNPACK_METHODS && likely(PyMethod_Check(__pyx_t_2))) { - __pyx_t_3 = PyMethod_GET_SELF(__pyx_t_2); - if (likely(__pyx_t_3)) { - PyObject* function = PyMethod_GET_FUNCTION(__pyx_t_2); - __Pyx_INCREF(__pyx_t_3); - __Pyx_INCREF(function); - __Pyx_DECREF_SET(__pyx_t_2, function); - } - } - __pyx_t_1 = (__pyx_t_3) ? 
__Pyx_PyObject_CallOneArg(__pyx_t_2, __pyx_t_3) : __Pyx_PyObject_CallNoArg(__pyx_t_2); - __Pyx_XDECREF(__pyx_t_3); __pyx_t_3 = 0; - if (unlikely(!__pyx_t_1)) __PYX_ERR(0, 58, __pyx_L1_error) - __Pyx_GOTREF(__pyx_t_1); - __Pyx_DECREF(__pyx_t_2); __pyx_t_2 = 0; - if ((likely(PyTuple_CheckExact(__pyx_t_1))) || (PyList_CheckExact(__pyx_t_1))) { - PyObject* sequence = __pyx_t_1; - Py_ssize_t size = __Pyx_PySequence_SIZE(sequence); - if (unlikely(size != 2)) { - if (size > 2) __Pyx_RaiseTooManyValuesError(2); - else if (size >= 0) __Pyx_RaiseNeedMoreValuesError(size); - __PYX_ERR(0, 58, __pyx_L1_error) - } - #if CYTHON_ASSUME_SAFE_MACROS && !CYTHON_AVOID_BORROWED_REFS - if (likely(PyTuple_CheckExact(sequence))) { - __pyx_t_2 = PyTuple_GET_ITEM(sequence, 0); - __pyx_t_3 = PyTuple_GET_ITEM(sequence, 1); - } else { - __pyx_t_2 = PyList_GET_ITEM(sequence, 0); - __pyx_t_3 = PyList_GET_ITEM(sequence, 1); - } - __Pyx_INCREF(__pyx_t_2); - __Pyx_INCREF(__pyx_t_3); - #else - __pyx_t_2 = PySequence_ITEM(sequence, 0); if (unlikely(!__pyx_t_2)) __PYX_ERR(0, 58, __pyx_L1_error) - __Pyx_GOTREF(__pyx_t_2); - __pyx_t_3 = PySequence_ITEM(sequence, 1); if (unlikely(!__pyx_t_3)) __PYX_ERR(0, 58, __pyx_L1_error) - __Pyx_GOTREF(__pyx_t_3); - #endif - __Pyx_DECREF(__pyx_t_1); __pyx_t_1 = 0; - } else { - Py_ssize_t index = -1; - __pyx_t_4 = PyObject_GetIter(__pyx_t_1); if (unlikely(!__pyx_t_4)) __PYX_ERR(0, 58, __pyx_L1_error) - __Pyx_GOTREF(__pyx_t_4); - __Pyx_DECREF(__pyx_t_1); __pyx_t_1 = 0; - __pyx_t_5 = Py_TYPE(__pyx_t_4)->tp_iternext; - index = 0; __pyx_t_2 = __pyx_t_5(__pyx_t_4); if (unlikely(!__pyx_t_2)) goto __pyx_L3_unpacking_failed; - __Pyx_GOTREF(__pyx_t_2); - index = 1; __pyx_t_3 = __pyx_t_5(__pyx_t_4); if (unlikely(!__pyx_t_3)) goto __pyx_L3_unpacking_failed; - __Pyx_GOTREF(__pyx_t_3); - if (__Pyx_IternextUnpackEndCheck(__pyx_t_5(__pyx_t_4), 2) < 0) __PYX_ERR(0, 58, __pyx_L1_error) - __pyx_t_5 = NULL; - __Pyx_DECREF(__pyx_t_4); __pyx_t_4 = 0; - goto __pyx_L4_unpacking_done; - __pyx_L3_unpacking_failed:; - __Pyx_DECREF(__pyx_t_4); __pyx_t_4 = 0; - __pyx_t_5 = NULL; - if (__Pyx_IterFinish() == 0) __Pyx_RaiseNeedMoreValuesError(index); - __PYX_ERR(0, 58, __pyx_L1_error) - __pyx_L4_unpacking_done:; - } - __pyx_t_6 = __pyx_PyFloat_AsDouble(__pyx_t_2); if (unlikely((__pyx_t_6 == (double)-1) && PyErr_Occurred())) __PYX_ERR(0, 58, __pyx_L1_error) - __Pyx_DECREF(__pyx_t_2); __pyx_t_2 = 0; - __pyx_t_7 = __pyx_PyFloat_AsDouble(__pyx_t_3); if (unlikely((__pyx_t_7 == (double)-1) && PyErr_Occurred())) __PYX_ERR(0, 58, __pyx_L1_error) - __Pyx_DECREF(__pyx_t_3); __pyx_t_3 = 0; - __pyx_v_x0 = __pyx_t_6; - __pyx_v_y0 = __pyx_t_7; - - /* "fontTools/pens/momentsPen.py":59 - * def _lineTo(self, p1): - * x0, y0 = self._getCurrentPoint() - * x1, y1 = p1 # <<<<<<<<<<<<<< - * - * r0 = x1 * y0 - */ - if ((likely(PyTuple_CheckExact(__pyx_v_p1))) || (PyList_CheckExact(__pyx_v_p1))) { - PyObject* sequence = __pyx_v_p1; - Py_ssize_t size = __Pyx_PySequence_SIZE(sequence); - if (unlikely(size != 2)) { - if (size > 2) __Pyx_RaiseTooManyValuesError(2); - else if (size >= 0) __Pyx_RaiseNeedMoreValuesError(size); - __PYX_ERR(0, 59, __pyx_L1_error) - } - #if CYTHON_ASSUME_SAFE_MACROS && !CYTHON_AVOID_BORROWED_REFS - if (likely(PyTuple_CheckExact(sequence))) { - __pyx_t_1 = PyTuple_GET_ITEM(sequence, 0); - __pyx_t_3 = PyTuple_GET_ITEM(sequence, 1); - } else { - __pyx_t_1 = PyList_GET_ITEM(sequence, 0); - __pyx_t_3 = PyList_GET_ITEM(sequence, 1); - } - __Pyx_INCREF(__pyx_t_1); - __Pyx_INCREF(__pyx_t_3); - #else - __pyx_t_1 = 
PySequence_ITEM(sequence, 0); if (unlikely(!__pyx_t_1)) __PYX_ERR(0, 59, __pyx_L1_error) - __Pyx_GOTREF(__pyx_t_1); - __pyx_t_3 = PySequence_ITEM(sequence, 1); if (unlikely(!__pyx_t_3)) __PYX_ERR(0, 59, __pyx_L1_error) - __Pyx_GOTREF(__pyx_t_3); - #endif - } else { - Py_ssize_t index = -1; - __pyx_t_2 = PyObject_GetIter(__pyx_v_p1); if (unlikely(!__pyx_t_2)) __PYX_ERR(0, 59, __pyx_L1_error) - __Pyx_GOTREF(__pyx_t_2); - __pyx_t_5 = Py_TYPE(__pyx_t_2)->tp_iternext; - index = 0; __pyx_t_1 = __pyx_t_5(__pyx_t_2); if (unlikely(!__pyx_t_1)) goto __pyx_L5_unpacking_failed; - __Pyx_GOTREF(__pyx_t_1); - index = 1; __pyx_t_3 = __pyx_t_5(__pyx_t_2); if (unlikely(!__pyx_t_3)) goto __pyx_L5_unpacking_failed; - __Pyx_GOTREF(__pyx_t_3); - if (__Pyx_IternextUnpackEndCheck(__pyx_t_5(__pyx_t_2), 2) < 0) __PYX_ERR(0, 59, __pyx_L1_error) - __pyx_t_5 = NULL; - __Pyx_DECREF(__pyx_t_2); __pyx_t_2 = 0; - goto __pyx_L6_unpacking_done; - __pyx_L5_unpacking_failed:; - __Pyx_DECREF(__pyx_t_2); __pyx_t_2 = 0; - __pyx_t_5 = NULL; - if (__Pyx_IterFinish() == 0) __Pyx_RaiseNeedMoreValuesError(index); - __PYX_ERR(0, 59, __pyx_L1_error) - __pyx_L6_unpacking_done:; - } - __pyx_t_7 = __pyx_PyFloat_AsDouble(__pyx_t_1); if (unlikely((__pyx_t_7 == (double)-1) && PyErr_Occurred())) __PYX_ERR(0, 59, __pyx_L1_error) - __Pyx_DECREF(__pyx_t_1); __pyx_t_1 = 0; - __pyx_t_6 = __pyx_PyFloat_AsDouble(__pyx_t_3); if (unlikely((__pyx_t_6 == (double)-1) && PyErr_Occurred())) __PYX_ERR(0, 59, __pyx_L1_error) - __Pyx_DECREF(__pyx_t_3); __pyx_t_3 = 0; - __pyx_v_x1 = __pyx_t_7; - __pyx_v_y1 = __pyx_t_6; - - /* "fontTools/pens/momentsPen.py":61 - * x1, y1 = p1 - * - * r0 = x1 * y0 # <<<<<<<<<<<<<< - * r1 = x1 * y1 - * r2 = x1**2 - */ - __pyx_v_r0 = (__pyx_v_x1 * __pyx_v_y0); - - /* "fontTools/pens/momentsPen.py":62 - * - * r0 = x1 * y0 - * r1 = x1 * y1 # <<<<<<<<<<<<<< - * r2 = x1**2 - * r3 = r2 * y1 - */ - __pyx_v_r1 = (__pyx_v_x1 * __pyx_v_y1); - - /* "fontTools/pens/momentsPen.py":63 - * r0 = x1 * y0 - * r1 = x1 * y1 - * r2 = x1**2 # <<<<<<<<<<<<<< - * r3 = r2 * y1 - * r4 = y0 - y1 - */ - __pyx_v_r2 = pow(__pyx_v_x1, 2.0); - - /* "fontTools/pens/momentsPen.py":64 - * r1 = x1 * y1 - * r2 = x1**2 - * r3 = r2 * y1 # <<<<<<<<<<<<<< - * r4 = y0 - y1 - * r5 = r4 * x0 - */ - __pyx_v_r3 = (__pyx_v_r2 * __pyx_v_y1); - - /* "fontTools/pens/momentsPen.py":65 - * r2 = x1**2 - * r3 = r2 * y1 - * r4 = y0 - y1 # <<<<<<<<<<<<<< - * r5 = r4 * x0 - * r6 = x0**2 - */ - __pyx_v_r4 = (__pyx_v_y0 - __pyx_v_y1); - - /* "fontTools/pens/momentsPen.py":66 - * r3 = r2 * y1 - * r4 = y0 - y1 - * r5 = r4 * x0 # <<<<<<<<<<<<<< - * r6 = x0**2 - * r7 = 2 * y0 - */ - __pyx_v_r5 = (__pyx_v_r4 * __pyx_v_x0); - - /* "fontTools/pens/momentsPen.py":67 - * r4 = y0 - y1 - * r5 = r4 * x0 - * r6 = x0**2 # <<<<<<<<<<<<<< - * r7 = 2 * y0 - * r8 = y0**2 - */ - __pyx_v_r6 = pow(__pyx_v_x0, 2.0); - - /* "fontTools/pens/momentsPen.py":68 - * r5 = r4 * x0 - * r6 = x0**2 - * r7 = 2 * y0 # <<<<<<<<<<<<<< - * r8 = y0**2 - * r9 = y1**2 - */ - __pyx_v_r7 = (2.0 * __pyx_v_y0); - - /* "fontTools/pens/momentsPen.py":69 - * r6 = x0**2 - * r7 = 2 * y0 - * r8 = y0**2 # <<<<<<<<<<<<<< - * r9 = y1**2 - * r10 = x1**3 - */ - __pyx_v_r8 = pow(__pyx_v_y0, 2.0); - - /* "fontTools/pens/momentsPen.py":70 - * r7 = 2 * y0 - * r8 = y0**2 - * r9 = y1**2 # <<<<<<<<<<<<<< - * r10 = x1**3 - * r11 = y0**3 - */ - __pyx_v_r9 = pow(__pyx_v_y1, 2.0); - - /* "fontTools/pens/momentsPen.py":71 - * r8 = y0**2 - * r9 = y1**2 - * r10 = x1**3 # <<<<<<<<<<<<<< - * r11 = y0**3 - * r12 = y1**3 - */ - __pyx_v_r10 = pow(__pyx_v_x1, 
3.0); - - /* "fontTools/pens/momentsPen.py":72 - * r9 = y1**2 - * r10 = x1**3 - * r11 = y0**3 # <<<<<<<<<<<<<< - * r12 = y1**3 - * - */ - __pyx_v_r11 = pow(__pyx_v_y0, 3.0); - - /* "fontTools/pens/momentsPen.py":73 - * r10 = x1**3 - * r11 = y0**3 - * r12 = y1**3 # <<<<<<<<<<<<<< - * - * self.area += -r0 / 2 - r1 / 2 + x0 * (y0 + y1) / 2 - */ - __pyx_v_r12 = pow(__pyx_v_y1, 3.0); - - /* "fontTools/pens/momentsPen.py":75 - * r12 = y1**3 - * - * self.area += -r0 / 2 - r1 / 2 + x0 * (y0 + y1) / 2 # <<<<<<<<<<<<<< - * self.momentX += -r2 * y0 / 6 - r3 / 3 - r5 * x1 / 6 + r6 * (r7 + y1) / 6 - * self.momentY += ( - */ - __pyx_t_3 = __Pyx_PyObject_GetAttrStr(__pyx_v_self, __pyx_n_s_area); if (unlikely(!__pyx_t_3)) __PYX_ERR(0, 75, __pyx_L1_error) - __Pyx_GOTREF(__pyx_t_3); - __pyx_t_1 = PyFloat_FromDouble(((((-__pyx_v_r0) / 2.0) - (__pyx_v_r1 / 2.0)) + ((__pyx_v_x0 * (__pyx_v_y0 + __pyx_v_y1)) / 2.0))); if (unlikely(!__pyx_t_1)) __PYX_ERR(0, 75, __pyx_L1_error) - __Pyx_GOTREF(__pyx_t_1); - __pyx_t_2 = PyNumber_InPlaceAdd(__pyx_t_3, __pyx_t_1); if (unlikely(!__pyx_t_2)) __PYX_ERR(0, 75, __pyx_L1_error) - __Pyx_GOTREF(__pyx_t_2); - __Pyx_DECREF(__pyx_t_3); __pyx_t_3 = 0; - __Pyx_DECREF(__pyx_t_1); __pyx_t_1 = 0; - if (__Pyx_PyObject_SetAttrStr(__pyx_v_self, __pyx_n_s_area, __pyx_t_2) < 0) __PYX_ERR(0, 75, __pyx_L1_error) - __Pyx_DECREF(__pyx_t_2); __pyx_t_2 = 0; - - /* "fontTools/pens/momentsPen.py":76 - * - * self.area += -r0 / 2 - r1 / 2 + x0 * (y0 + y1) / 2 - * self.momentX += -r2 * y0 / 6 - r3 / 3 - r5 * x1 / 6 + r6 * (r7 + y1) / 6 # <<<<<<<<<<<<<< - * self.momentY += ( - * -r0 * y1 / 6 - r8 * x1 / 6 - r9 * x1 / 6 + x0 * (r8 + r9 + y0 * y1) / 6 - */ - __pyx_t_2 = __Pyx_PyObject_GetAttrStr(__pyx_v_self, __pyx_n_s_momentX); if (unlikely(!__pyx_t_2)) __PYX_ERR(0, 76, __pyx_L1_error) - __Pyx_GOTREF(__pyx_t_2); - __pyx_t_1 = PyFloat_FromDouble(((((((-__pyx_v_r2) * __pyx_v_y0) / 6.0) - (__pyx_v_r3 / 3.0)) - ((__pyx_v_r5 * __pyx_v_x1) / 6.0)) + ((__pyx_v_r6 * (__pyx_v_r7 + __pyx_v_y1)) / 6.0))); if (unlikely(!__pyx_t_1)) __PYX_ERR(0, 76, __pyx_L1_error) - __Pyx_GOTREF(__pyx_t_1); - __pyx_t_3 = PyNumber_InPlaceAdd(__pyx_t_2, __pyx_t_1); if (unlikely(!__pyx_t_3)) __PYX_ERR(0, 76, __pyx_L1_error) - __Pyx_GOTREF(__pyx_t_3); - __Pyx_DECREF(__pyx_t_2); __pyx_t_2 = 0; - __Pyx_DECREF(__pyx_t_1); __pyx_t_1 = 0; - if (__Pyx_PyObject_SetAttrStr(__pyx_v_self, __pyx_n_s_momentX, __pyx_t_3) < 0) __PYX_ERR(0, 76, __pyx_L1_error) - __Pyx_DECREF(__pyx_t_3); __pyx_t_3 = 0; - - /* "fontTools/pens/momentsPen.py":77 - * self.area += -r0 / 2 - r1 / 2 + x0 * (y0 + y1) / 2 - * self.momentX += -r2 * y0 / 6 - r3 / 3 - r5 * x1 / 6 + r6 * (r7 + y1) / 6 - * self.momentY += ( # <<<<<<<<<<<<<< - * -r0 * y1 / 6 - r8 * x1 / 6 - r9 * x1 / 6 + x0 * (r8 + r9 + y0 * y1) / 6 - * ) - */ - __pyx_t_3 = __Pyx_PyObject_GetAttrStr(__pyx_v_self, __pyx_n_s_momentY); if (unlikely(!__pyx_t_3)) __PYX_ERR(0, 77, __pyx_L1_error) - __Pyx_GOTREF(__pyx_t_3); - - /* "fontTools/pens/momentsPen.py":78 - * self.momentX += -r2 * y0 / 6 - r3 / 3 - r5 * x1 / 6 + r6 * (r7 + y1) / 6 - * self.momentY += ( - * -r0 * y1 / 6 - r8 * x1 / 6 - r9 * x1 / 6 + x0 * (r8 + r9 + y0 * y1) / 6 # <<<<<<<<<<<<<< - * ) - * self.momentXX += ( - */ - __pyx_t_1 = PyFloat_FromDouble(((((((-__pyx_v_r0) * __pyx_v_y1) / 6.0) - ((__pyx_v_r8 * __pyx_v_x1) / 6.0)) - ((__pyx_v_r9 * __pyx_v_x1) / 6.0)) + ((__pyx_v_x0 * ((__pyx_v_r8 + __pyx_v_r9) + (__pyx_v_y0 * __pyx_v_y1))) / 6.0))); if (unlikely(!__pyx_t_1)) __PYX_ERR(0, 78, __pyx_L1_error) - __Pyx_GOTREF(__pyx_t_1); - - /* 
"fontTools/pens/momentsPen.py":77 - * self.area += -r0 / 2 - r1 / 2 + x0 * (y0 + y1) / 2 - * self.momentX += -r2 * y0 / 6 - r3 / 3 - r5 * x1 / 6 + r6 * (r7 + y1) / 6 - * self.momentY += ( # <<<<<<<<<<<<<< - * -r0 * y1 / 6 - r8 * x1 / 6 - r9 * x1 / 6 + x0 * (r8 + r9 + y0 * y1) / 6 - * ) - */ - __pyx_t_2 = PyNumber_InPlaceAdd(__pyx_t_3, __pyx_t_1); if (unlikely(!__pyx_t_2)) __PYX_ERR(0, 77, __pyx_L1_error) - __Pyx_GOTREF(__pyx_t_2); - __Pyx_DECREF(__pyx_t_3); __pyx_t_3 = 0; - __Pyx_DECREF(__pyx_t_1); __pyx_t_1 = 0; - if (__Pyx_PyObject_SetAttrStr(__pyx_v_self, __pyx_n_s_momentY, __pyx_t_2) < 0) __PYX_ERR(0, 77, __pyx_L1_error) - __Pyx_DECREF(__pyx_t_2); __pyx_t_2 = 0; - - /* "fontTools/pens/momentsPen.py":80 - * -r0 * y1 / 6 - r8 * x1 / 6 - r9 * x1 / 6 + x0 * (r8 + r9 + y0 * y1) / 6 - * ) - * self.momentXX += ( # <<<<<<<<<<<<<< - * -r10 * y0 / 12 - * - r10 * y1 / 4 - */ - __pyx_t_2 = __Pyx_PyObject_GetAttrStr(__pyx_v_self, __pyx_n_s_momentXX); if (unlikely(!__pyx_t_2)) __PYX_ERR(0, 80, __pyx_L1_error) - __Pyx_GOTREF(__pyx_t_2); - - /* "fontTools/pens/momentsPen.py":85 - * - r2 * r5 / 12 - * - r4 * r6 * x1 / 12 - * + x0**3 * (3 * y0 + y1) / 12 # <<<<<<<<<<<<<< - * ) - * self.momentXY += ( - */ - __pyx_t_1 = PyFloat_FromDouble((((((((-__pyx_v_r10) * __pyx_v_y0) / 12.0) - ((__pyx_v_r10 * __pyx_v_y1) / 4.0)) - ((__pyx_v_r2 * __pyx_v_r5) / 12.0)) - (((__pyx_v_r4 * __pyx_v_r6) * __pyx_v_x1) / 12.0)) + ((pow(__pyx_v_x0, 3.0) * ((3.0 * __pyx_v_y0) + __pyx_v_y1)) / 12.0))); if (unlikely(!__pyx_t_1)) __PYX_ERR(0, 85, __pyx_L1_error) - __Pyx_GOTREF(__pyx_t_1); - - /* "fontTools/pens/momentsPen.py":80 - * -r0 * y1 / 6 - r8 * x1 / 6 - r9 * x1 / 6 + x0 * (r8 + r9 + y0 * y1) / 6 - * ) - * self.momentXX += ( # <<<<<<<<<<<<<< - * -r10 * y0 / 12 - * - r10 * y1 / 4 - */ - __pyx_t_3 = PyNumber_InPlaceAdd(__pyx_t_2, __pyx_t_1); if (unlikely(!__pyx_t_3)) __PYX_ERR(0, 80, __pyx_L1_error) - __Pyx_GOTREF(__pyx_t_3); - __Pyx_DECREF(__pyx_t_2); __pyx_t_2 = 0; - __Pyx_DECREF(__pyx_t_1); __pyx_t_1 = 0; - if (__Pyx_PyObject_SetAttrStr(__pyx_v_self, __pyx_n_s_momentXX, __pyx_t_3) < 0) __PYX_ERR(0, 80, __pyx_L1_error) - __Pyx_DECREF(__pyx_t_3); __pyx_t_3 = 0; - - /* "fontTools/pens/momentsPen.py":87 - * + x0**3 * (3 * y0 + y1) / 12 - * ) - * self.momentXY += ( # <<<<<<<<<<<<<< - * -r2 * r8 / 24 - * - r2 * r9 / 8 - */ - __pyx_t_3 = __Pyx_PyObject_GetAttrStr(__pyx_v_self, __pyx_n_s_momentXY); if (unlikely(!__pyx_t_3)) __PYX_ERR(0, 87, __pyx_L1_error) - __Pyx_GOTREF(__pyx_t_3); - - /* "fontTools/pens/momentsPen.py":92 - * - r3 * r7 / 24 - * + r6 * (r7 * y1 + 3 * r8 + r9) / 24 - * - x0 * x1 * (r8 - r9) / 12 # <<<<<<<<<<<<<< - * ) - * self.momentYY += ( - */ - __pyx_t_1 = PyFloat_FromDouble((((((((-__pyx_v_r2) * __pyx_v_r8) / 24.0) - ((__pyx_v_r2 * __pyx_v_r9) / 8.0)) - ((__pyx_v_r3 * __pyx_v_r7) / 24.0)) + ((__pyx_v_r6 * (((__pyx_v_r7 * __pyx_v_y1) + (3.0 * __pyx_v_r8)) + __pyx_v_r9)) / 24.0)) - (((__pyx_v_x0 * __pyx_v_x1) * (__pyx_v_r8 - __pyx_v_r9)) / 12.0))); if (unlikely(!__pyx_t_1)) __PYX_ERR(0, 92, __pyx_L1_error) - __Pyx_GOTREF(__pyx_t_1); - - /* "fontTools/pens/momentsPen.py":87 - * + x0**3 * (3 * y0 + y1) / 12 - * ) - * self.momentXY += ( # <<<<<<<<<<<<<< - * -r2 * r8 / 24 - * - r2 * r9 / 8 - */ - __pyx_t_2 = PyNumber_InPlaceAdd(__pyx_t_3, __pyx_t_1); if (unlikely(!__pyx_t_2)) __PYX_ERR(0, 87, __pyx_L1_error) - __Pyx_GOTREF(__pyx_t_2); - __Pyx_DECREF(__pyx_t_3); __pyx_t_3 = 0; - __Pyx_DECREF(__pyx_t_1); __pyx_t_1 = 0; - if (__Pyx_PyObject_SetAttrStr(__pyx_v_self, __pyx_n_s_momentXY, __pyx_t_2) < 0) __PYX_ERR(0, 87, 
__pyx_L1_error) - __Pyx_DECREF(__pyx_t_2); __pyx_t_2 = 0; - - /* "fontTools/pens/momentsPen.py":94 - * - x0 * x1 * (r8 - r9) / 12 - * ) - * self.momentYY += ( # <<<<<<<<<<<<<< - * -r0 * r9 / 12 - * - r1 * r8 / 12 - */ - __pyx_t_2 = __Pyx_PyObject_GetAttrStr(__pyx_v_self, __pyx_n_s_momentYY); if (unlikely(!__pyx_t_2)) __PYX_ERR(0, 94, __pyx_L1_error) - __Pyx_GOTREF(__pyx_t_2); - - /* "fontTools/pens/momentsPen.py":99 - * - r11 * x1 / 12 - * - r12 * x1 / 12 - * + x0 * (r11 + r12 + r8 * y1 + r9 * y0) / 12 # <<<<<<<<<<<<<< - * ) - * - */ - __pyx_t_1 = PyFloat_FromDouble((((((((-__pyx_v_r0) * __pyx_v_r9) / 12.0) - ((__pyx_v_r1 * __pyx_v_r8) / 12.0)) - ((__pyx_v_r11 * __pyx_v_x1) / 12.0)) - ((__pyx_v_r12 * __pyx_v_x1) / 12.0)) + ((__pyx_v_x0 * (((__pyx_v_r11 + __pyx_v_r12) + (__pyx_v_r8 * __pyx_v_y1)) + (__pyx_v_r9 * __pyx_v_y0))) / 12.0))); if (unlikely(!__pyx_t_1)) __PYX_ERR(0, 99, __pyx_L1_error) - __Pyx_GOTREF(__pyx_t_1); - - /* "fontTools/pens/momentsPen.py":94 - * - x0 * x1 * (r8 - r9) / 12 - * ) - * self.momentYY += ( # <<<<<<<<<<<<<< - * -r0 * r9 / 12 - * - r1 * r8 / 12 - */ - __pyx_t_3 = PyNumber_InPlaceAdd(__pyx_t_2, __pyx_t_1); if (unlikely(!__pyx_t_3)) __PYX_ERR(0, 94, __pyx_L1_error) - __Pyx_GOTREF(__pyx_t_3); - __Pyx_DECREF(__pyx_t_2); __pyx_t_2 = 0; - __Pyx_DECREF(__pyx_t_1); __pyx_t_1 = 0; - if (__Pyx_PyObject_SetAttrStr(__pyx_v_self, __pyx_n_s_momentYY, __pyx_t_3) < 0) __PYX_ERR(0, 94, __pyx_L1_error) - __Pyx_DECREF(__pyx_t_3); __pyx_t_3 = 0; - - /* "fontTools/pens/momentsPen.py":57 - * @cython.locals(x0=cython.double, y0=cython.double) - * @cython.locals(x1=cython.double, y1=cython.double) - * def _lineTo(self, p1): # <<<<<<<<<<<<<< - * x0, y0 = self._getCurrentPoint() - * x1, y1 = p1 - */ - - /* function exit code */ - __pyx_r = Py_None; __Pyx_INCREF(Py_None); - goto __pyx_L0; - __pyx_L1_error:; - __Pyx_XDECREF(__pyx_t_1); - __Pyx_XDECREF(__pyx_t_2); - __Pyx_XDECREF(__pyx_t_3); - __Pyx_XDECREF(__pyx_t_4); - __Pyx_AddTraceback("fontTools.pens.momentsPen.MomentsPen._lineTo", __pyx_clineno, __pyx_lineno, __pyx_filename); - __pyx_r = NULL; - __pyx_L0:; - __Pyx_XGIVEREF(__pyx_r); - __Pyx_RefNannyFinishContext(); - return __pyx_r; -} - -/* "fontTools/pens/momentsPen.py":159 - * @cython.locals(x1=cython.double, y1=cython.double) - * @cython.locals(x2=cython.double, y2=cython.double) - * def _qCurveToOne(self, p1, p2): # <<<<<<<<<<<<<< - * x0, y0 = self._getCurrentPoint() - * x1, y1 = p1 - */ - -/* Python wrapper */ -static PyObject *__pyx_pw_9fontTools_4pens_10momentsPen_10MomentsPen_11_qCurveToOne(PyObject *__pyx_self, PyObject *__pyx_args, PyObject *__pyx_kwds); /*proto*/ -static char __pyx_doc_9fontTools_4pens_10momentsPen_10MomentsPen_10_qCurveToOne[] = "MomentsPen._qCurveToOne(self, p1, p2)"; -static PyMethodDef __pyx_mdef_9fontTools_4pens_10momentsPen_10MomentsPen_11_qCurveToOne = {"_qCurveToOne", (PyCFunction)(void*)(PyCFunctionWithKeywords)__pyx_pw_9fontTools_4pens_10momentsPen_10MomentsPen_11_qCurveToOne, METH_VARARGS|METH_KEYWORDS, __pyx_doc_9fontTools_4pens_10momentsPen_10MomentsPen_10_qCurveToOne}; -static PyObject *__pyx_pw_9fontTools_4pens_10momentsPen_10MomentsPen_11_qCurveToOne(PyObject *__pyx_self, PyObject *__pyx_args, PyObject *__pyx_kwds) { - PyObject *__pyx_v_self = 0; - PyObject *__pyx_v_p1 = 0; - PyObject *__pyx_v_p2 = 0; - int __pyx_lineno = 0; - const char *__pyx_filename = NULL; - int __pyx_clineno = 0; - PyObject *__pyx_r = 0; - __Pyx_RefNannyDeclarations - __Pyx_RefNannySetupContext("_qCurveToOne (wrapper)", 0); - { - static PyObject **__pyx_pyargnames[] = 
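/* The straight-segment update compiled above, reassembled from the
   embedded momentsPen.py source comments. The rN variables are common
   subexpressions, declared as C doubles via @cython.locals; note that the
   area term simplifies to the familiar trapezoid/shoelace increment
   (x0 - x1) * (y0 + y1) / 2.

       def _lineTo(self, p1):
           x0, y0 = self._getCurrentPoint()
           x1, y1 = p1

           r0 = x1 * y0
           r1 = x1 * y1
           r2 = x1**2
           r3 = r2 * y1
           r4 = y0 - y1
           r5 = r4 * x0
           r6 = x0**2
           r7 = 2 * y0
           r8 = y0**2
           r9 = y1**2
           r10 = x1**3
           r11 = y0**3
           r12 = y1**3

           self.area += -r0 / 2 - r1 / 2 + x0 * (y0 + y1) / 2
           self.momentX += -r2 * y0 / 6 - r3 / 3 - r5 * x1 / 6 + r6 * (r7 + y1) / 6
           self.momentY += (
               -r0 * y1 / 6 - r8 * x1 / 6 - r9 * x1 / 6 + x0 * (r8 + r9 + y0 * y1) / 6
           )
           self.momentXX += (
               -r10 * y0 / 12
               - r10 * y1 / 4
               - r2 * r5 / 12
               - r4 * r6 * x1 / 12
               + x0**3 * (3 * y0 + y1) / 12
           )
           self.momentXY += (
               -r2 * r8 / 24
               - r2 * r9 / 8
               - r3 * r7 / 24
               + r6 * (r7 * y1 + 3 * r8 + r9) / 24
               - x0 * x1 * (r8 - r9) / 12
           )
           self.momentYY += (
               -r0 * r9 / 12
               - r1 * r8 / 12
               - r11 * x1 / 12
               - r12 * x1 / 12
               + x0 * (r11 + r12 + r8 * y1 + r9 * y0) / 12
           )
*/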
{&__pyx_n_s_self,&__pyx_n_s_p1,&__pyx_n_s_p2,0}; - PyObject* values[3] = {0,0,0}; - if (unlikely(__pyx_kwds)) { - Py_ssize_t kw_args; - const Py_ssize_t pos_args = PyTuple_GET_SIZE(__pyx_args); - switch (pos_args) { - case 3: values[2] = PyTuple_GET_ITEM(__pyx_args, 2); - CYTHON_FALLTHROUGH; - case 2: values[1] = PyTuple_GET_ITEM(__pyx_args, 1); - CYTHON_FALLTHROUGH; - case 1: values[0] = PyTuple_GET_ITEM(__pyx_args, 0); - CYTHON_FALLTHROUGH; - case 0: break; - default: goto __pyx_L5_argtuple_error; - } - kw_args = PyDict_Size(__pyx_kwds); - switch (pos_args) { - case 0: - if (likely((values[0] = __Pyx_PyDict_GetItemStr(__pyx_kwds, __pyx_n_s_self)) != 0)) kw_args--; - else goto __pyx_L5_argtuple_error; - CYTHON_FALLTHROUGH; - case 1: - if (likely((values[1] = __Pyx_PyDict_GetItemStr(__pyx_kwds, __pyx_n_s_p1)) != 0)) kw_args--; - else { - __Pyx_RaiseArgtupleInvalid("_qCurveToOne", 1, 3, 3, 1); __PYX_ERR(0, 159, __pyx_L3_error) - } - CYTHON_FALLTHROUGH; - case 2: - if (likely((values[2] = __Pyx_PyDict_GetItemStr(__pyx_kwds, __pyx_n_s_p2)) != 0)) kw_args--; - else { - __Pyx_RaiseArgtupleInvalid("_qCurveToOne", 1, 3, 3, 2); __PYX_ERR(0, 159, __pyx_L3_error) - } - } - if (unlikely(kw_args > 0)) { - if (unlikely(__Pyx_ParseOptionalKeywords(__pyx_kwds, __pyx_pyargnames, 0, values, pos_args, "_qCurveToOne") < 0)) __PYX_ERR(0, 159, __pyx_L3_error) - } - } else if (PyTuple_GET_SIZE(__pyx_args) != 3) { - goto __pyx_L5_argtuple_error; - } else { - values[0] = PyTuple_GET_ITEM(__pyx_args, 0); - values[1] = PyTuple_GET_ITEM(__pyx_args, 1); - values[2] = PyTuple_GET_ITEM(__pyx_args, 2); - } - __pyx_v_self = values[0]; - __pyx_v_p1 = values[1]; - __pyx_v_p2 = values[2]; - } - goto __pyx_L4_argument_unpacking_done; - __pyx_L5_argtuple_error:; - __Pyx_RaiseArgtupleInvalid("_qCurveToOne", 1, 3, 3, PyTuple_GET_SIZE(__pyx_args)); __PYX_ERR(0, 159, __pyx_L3_error) - __pyx_L3_error:; - __Pyx_AddTraceback("fontTools.pens.momentsPen.MomentsPen._qCurveToOne", __pyx_clineno, __pyx_lineno, __pyx_filename); - __Pyx_RefNannyFinishContext(); - return NULL; - __pyx_L4_argument_unpacking_done:; - __pyx_r = __pyx_pf_9fontTools_4pens_10momentsPen_10MomentsPen_10_qCurveToOne(__pyx_self, __pyx_v_self, __pyx_v_p1, __pyx_v_p2); - - /* function exit code */ - __Pyx_RefNannyFinishContext(); - return __pyx_r; -} - -static PyObject *__pyx_pf_9fontTools_4pens_10momentsPen_10MomentsPen_10_qCurveToOne(CYTHON_UNUSED PyObject *__pyx_self, PyObject *__pyx_v_self, PyObject *__pyx_v_p1, PyObject *__pyx_v_p2) { - double __pyx_v_x2; - double __pyx_v_y2; - double __pyx_v_x1; - double __pyx_v_y1; - double __pyx_v_x0; - double __pyx_v_y0; - double __pyx_v_r53; - double __pyx_v_r52; - double __pyx_v_r51; - double __pyx_v_r50; - double __pyx_v_r49; - double __pyx_v_r48; - double __pyx_v_r47; - double __pyx_v_r46; - double __pyx_v_r45; - double __pyx_v_r44; - double __pyx_v_r43; - double __pyx_v_r42; - double __pyx_v_r41; - double __pyx_v_r40; - double __pyx_v_r39; - double __pyx_v_r38; - double __pyx_v_r37; - double __pyx_v_r36; - double __pyx_v_r35; - double __pyx_v_r34; - double __pyx_v_r33; - double __pyx_v_r32; - double __pyx_v_r31; - double __pyx_v_r30; - double __pyx_v_r29; - double __pyx_v_r28; - double __pyx_v_r27; - double __pyx_v_r26; - double __pyx_v_r25; - double __pyx_v_r24; - double __pyx_v_r23; - double __pyx_v_r22; - double __pyx_v_r21; - double __pyx_v_r20; - double __pyx_v_r19; - double __pyx_v_r18; - double __pyx_v_r17; - double __pyx_v_r16; - double __pyx_v_r15; - double __pyx_v_r14; - double __pyx_v_r13; - double __pyx_v_r12; 
- double __pyx_v_r11; - double __pyx_v_r10; - double __pyx_v_r9; - double __pyx_v_r8; - double __pyx_v_r7; - double __pyx_v_r6; - double __pyx_v_r5; - double __pyx_v_r4; - double __pyx_v_r3; - double __pyx_v_r2; - double __pyx_v_r1; - double __pyx_v_r0; - PyObject *__pyx_r = NULL; - __Pyx_RefNannyDeclarations - PyObject *__pyx_t_1 = NULL; - PyObject *__pyx_t_2 = NULL; - PyObject *__pyx_t_3 = NULL; - PyObject *__pyx_t_4 = NULL; - PyObject *(*__pyx_t_5)(PyObject *); - double __pyx_t_6; - double __pyx_t_7; - int __pyx_lineno = 0; - const char *__pyx_filename = NULL; - int __pyx_clineno = 0; - __Pyx_RefNannySetupContext("_qCurveToOne", 0); - - /* "fontTools/pens/momentsPen.py":160 - * @cython.locals(x2=cython.double, y2=cython.double) - * def _qCurveToOne(self, p1, p2): - * x0, y0 = self._getCurrentPoint() # <<<<<<<<<<<<<< - * x1, y1 = p1 - * x2, y2 = p2 - */ - __pyx_t_2 = __Pyx_PyObject_GetAttrStr(__pyx_v_self, __pyx_n_s_getCurrentPoint); if (unlikely(!__pyx_t_2)) __PYX_ERR(0, 160, __pyx_L1_error) - __Pyx_GOTREF(__pyx_t_2); - __pyx_t_3 = NULL; - if (CYTHON_UNPACK_METHODS && likely(PyMethod_Check(__pyx_t_2))) { - __pyx_t_3 = PyMethod_GET_SELF(__pyx_t_2); - if (likely(__pyx_t_3)) { - PyObject* function = PyMethod_GET_FUNCTION(__pyx_t_2); - __Pyx_INCREF(__pyx_t_3); - __Pyx_INCREF(function); - __Pyx_DECREF_SET(__pyx_t_2, function); - } - } - __pyx_t_1 = (__pyx_t_3) ? __Pyx_PyObject_CallOneArg(__pyx_t_2, __pyx_t_3) : __Pyx_PyObject_CallNoArg(__pyx_t_2); - __Pyx_XDECREF(__pyx_t_3); __pyx_t_3 = 0; - if (unlikely(!__pyx_t_1)) __PYX_ERR(0, 160, __pyx_L1_error) - __Pyx_GOTREF(__pyx_t_1); - __Pyx_DECREF(__pyx_t_2); __pyx_t_2 = 0; - if ((likely(PyTuple_CheckExact(__pyx_t_1))) || (PyList_CheckExact(__pyx_t_1))) { - PyObject* sequence = __pyx_t_1; - Py_ssize_t size = __Pyx_PySequence_SIZE(sequence); - if (unlikely(size != 2)) { - if (size > 2) __Pyx_RaiseTooManyValuesError(2); - else if (size >= 0) __Pyx_RaiseNeedMoreValuesError(size); - __PYX_ERR(0, 160, __pyx_L1_error) - } - #if CYTHON_ASSUME_SAFE_MACROS && !CYTHON_AVOID_BORROWED_REFS - if (likely(PyTuple_CheckExact(sequence))) { - __pyx_t_2 = PyTuple_GET_ITEM(sequence, 0); - __pyx_t_3 = PyTuple_GET_ITEM(sequence, 1); - } else { - __pyx_t_2 = PyList_GET_ITEM(sequence, 0); - __pyx_t_3 = PyList_GET_ITEM(sequence, 1); - } - __Pyx_INCREF(__pyx_t_2); - __Pyx_INCREF(__pyx_t_3); - #else - __pyx_t_2 = PySequence_ITEM(sequence, 0); if (unlikely(!__pyx_t_2)) __PYX_ERR(0, 160, __pyx_L1_error) - __Pyx_GOTREF(__pyx_t_2); - __pyx_t_3 = PySequence_ITEM(sequence, 1); if (unlikely(!__pyx_t_3)) __PYX_ERR(0, 160, __pyx_L1_error) - __Pyx_GOTREF(__pyx_t_3); - #endif - __Pyx_DECREF(__pyx_t_1); __pyx_t_1 = 0; - } else { - Py_ssize_t index = -1; - __pyx_t_4 = PyObject_GetIter(__pyx_t_1); if (unlikely(!__pyx_t_4)) __PYX_ERR(0, 160, __pyx_L1_error) - __Pyx_GOTREF(__pyx_t_4); - __Pyx_DECREF(__pyx_t_1); __pyx_t_1 = 0; - __pyx_t_5 = Py_TYPE(__pyx_t_4)->tp_iternext; - index = 0; __pyx_t_2 = __pyx_t_5(__pyx_t_4); if (unlikely(!__pyx_t_2)) goto __pyx_L3_unpacking_failed; - __Pyx_GOTREF(__pyx_t_2); - index = 1; __pyx_t_3 = __pyx_t_5(__pyx_t_4); if (unlikely(!__pyx_t_3)) goto __pyx_L3_unpacking_failed; - __Pyx_GOTREF(__pyx_t_3); - if (__Pyx_IternextUnpackEndCheck(__pyx_t_5(__pyx_t_4), 2) < 0) __PYX_ERR(0, 160, __pyx_L1_error) - __pyx_t_5 = NULL; - __Pyx_DECREF(__pyx_t_4); __pyx_t_4 = 0; - goto __pyx_L4_unpacking_done; - __pyx_L3_unpacking_failed:; - __Pyx_DECREF(__pyx_t_4); __pyx_t_4 = 0; - __pyx_t_5 = NULL; - if (__Pyx_IterFinish() == 0) __Pyx_RaiseNeedMoreValuesError(index); - 
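/* _qCurveToOne handles a single quadratic segment the same way: unpack
   the current point plus the off-curve point p1 and on-curve point p2,
   then accumulate closed-form integrals through the r0..r53 temporaries
   declared above. The interned names fontTools.misc.symfont and
   printGreenPen earlier in the file suggest these closed forms were
   derived symbolically and the Python source generated from them, though
   that is an inference from the name table rather than something this
   file states. */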
__PYX_ERR(0, 160, __pyx_L1_error) - __pyx_L4_unpacking_done:; - } - __pyx_t_6 = __pyx_PyFloat_AsDouble(__pyx_t_2); if (unlikely((__pyx_t_6 == (double)-1) && PyErr_Occurred())) __PYX_ERR(0, 160, __pyx_L1_error) - __Pyx_DECREF(__pyx_t_2); __pyx_t_2 = 0; - __pyx_t_7 = __pyx_PyFloat_AsDouble(__pyx_t_3); if (unlikely((__pyx_t_7 == (double)-1) && PyErr_Occurred())) __PYX_ERR(0, 160, __pyx_L1_error) - __Pyx_DECREF(__pyx_t_3); __pyx_t_3 = 0; - __pyx_v_x0 = __pyx_t_6; - __pyx_v_y0 = __pyx_t_7; - - /* "fontTools/pens/momentsPen.py":161 - * def _qCurveToOne(self, p1, p2): - * x0, y0 = self._getCurrentPoint() - * x1, y1 = p1 # <<<<<<<<<<<<<< - * x2, y2 = p2 - * - */ - if ((likely(PyTuple_CheckExact(__pyx_v_p1))) || (PyList_CheckExact(__pyx_v_p1))) { - PyObject* sequence = __pyx_v_p1; - Py_ssize_t size = __Pyx_PySequence_SIZE(sequence); - if (unlikely(size != 2)) { - if (size > 2) __Pyx_RaiseTooManyValuesError(2); - else if (size >= 0) __Pyx_RaiseNeedMoreValuesError(size); - __PYX_ERR(0, 161, __pyx_L1_error) - } - #if CYTHON_ASSUME_SAFE_MACROS && !CYTHON_AVOID_BORROWED_REFS - if (likely(PyTuple_CheckExact(sequence))) { - __pyx_t_1 = PyTuple_GET_ITEM(sequence, 0); - __pyx_t_3 = PyTuple_GET_ITEM(sequence, 1); - } else { - __pyx_t_1 = PyList_GET_ITEM(sequence, 0); - __pyx_t_3 = PyList_GET_ITEM(sequence, 1); - } - __Pyx_INCREF(__pyx_t_1); - __Pyx_INCREF(__pyx_t_3); - #else - __pyx_t_1 = PySequence_ITEM(sequence, 0); if (unlikely(!__pyx_t_1)) __PYX_ERR(0, 161, __pyx_L1_error) - __Pyx_GOTREF(__pyx_t_1); - __pyx_t_3 = PySequence_ITEM(sequence, 1); if (unlikely(!__pyx_t_3)) __PYX_ERR(0, 161, __pyx_L1_error) - __Pyx_GOTREF(__pyx_t_3); - #endif - } else { - Py_ssize_t index = -1; - __pyx_t_2 = PyObject_GetIter(__pyx_v_p1); if (unlikely(!__pyx_t_2)) __PYX_ERR(0, 161, __pyx_L1_error) - __Pyx_GOTREF(__pyx_t_2); - __pyx_t_5 = Py_TYPE(__pyx_t_2)->tp_iternext; - index = 0; __pyx_t_1 = __pyx_t_5(__pyx_t_2); if (unlikely(!__pyx_t_1)) goto __pyx_L5_unpacking_failed; - __Pyx_GOTREF(__pyx_t_1); - index = 1; __pyx_t_3 = __pyx_t_5(__pyx_t_2); if (unlikely(!__pyx_t_3)) goto __pyx_L5_unpacking_failed; - __Pyx_GOTREF(__pyx_t_3); - if (__Pyx_IternextUnpackEndCheck(__pyx_t_5(__pyx_t_2), 2) < 0) __PYX_ERR(0, 161, __pyx_L1_error) - __pyx_t_5 = NULL; - __Pyx_DECREF(__pyx_t_2); __pyx_t_2 = 0; - goto __pyx_L6_unpacking_done; - __pyx_L5_unpacking_failed:; - __Pyx_DECREF(__pyx_t_2); __pyx_t_2 = 0; - __pyx_t_5 = NULL; - if (__Pyx_IterFinish() == 0) __Pyx_RaiseNeedMoreValuesError(index); - __PYX_ERR(0, 161, __pyx_L1_error) - __pyx_L6_unpacking_done:; - } - __pyx_t_7 = __pyx_PyFloat_AsDouble(__pyx_t_1); if (unlikely((__pyx_t_7 == (double)-1) && PyErr_Occurred())) __PYX_ERR(0, 161, __pyx_L1_error) - __Pyx_DECREF(__pyx_t_1); __pyx_t_1 = 0; - __pyx_t_6 = __pyx_PyFloat_AsDouble(__pyx_t_3); if (unlikely((__pyx_t_6 == (double)-1) && PyErr_Occurred())) __PYX_ERR(0, 161, __pyx_L1_error) - __Pyx_DECREF(__pyx_t_3); __pyx_t_3 = 0; - __pyx_v_x1 = __pyx_t_7; - __pyx_v_y1 = __pyx_t_6; - - /* "fontTools/pens/momentsPen.py":162 - * x0, y0 = self._getCurrentPoint() - * x1, y1 = p1 - * x2, y2 = p2 # <<<<<<<<<<<<<< - * - * r0 = 2 * y1 - */ - if ((likely(PyTuple_CheckExact(__pyx_v_p2))) || (PyList_CheckExact(__pyx_v_p2))) { - PyObject* sequence = __pyx_v_p2; - Py_ssize_t size = __Pyx_PySequence_SIZE(sequence); - if (unlikely(size != 2)) { - if (size > 2) __Pyx_RaiseTooManyValuesError(2); - else if (size >= 0) __Pyx_RaiseNeedMoreValuesError(size); - __PYX_ERR(0, 162, __pyx_L1_error) - } - #if CYTHON_ASSUME_SAFE_MACROS && !CYTHON_AVOID_BORROWED_REFS - if 
(likely(PyTuple_CheckExact(sequence))) { - __pyx_t_3 = PyTuple_GET_ITEM(sequence, 0); - __pyx_t_1 = PyTuple_GET_ITEM(sequence, 1); - } else { - __pyx_t_3 = PyList_GET_ITEM(sequence, 0); - __pyx_t_1 = PyList_GET_ITEM(sequence, 1); - } - __Pyx_INCREF(__pyx_t_3); - __Pyx_INCREF(__pyx_t_1); - #else - __pyx_t_3 = PySequence_ITEM(sequence, 0); if (unlikely(!__pyx_t_3)) __PYX_ERR(0, 162, __pyx_L1_error) - __Pyx_GOTREF(__pyx_t_3); - __pyx_t_1 = PySequence_ITEM(sequence, 1); if (unlikely(!__pyx_t_1)) __PYX_ERR(0, 162, __pyx_L1_error) - __Pyx_GOTREF(__pyx_t_1); - #endif - } else { - Py_ssize_t index = -1; - __pyx_t_2 = PyObject_GetIter(__pyx_v_p2); if (unlikely(!__pyx_t_2)) __PYX_ERR(0, 162, __pyx_L1_error) - __Pyx_GOTREF(__pyx_t_2); - __pyx_t_5 = Py_TYPE(__pyx_t_2)->tp_iternext; - index = 0; __pyx_t_3 = __pyx_t_5(__pyx_t_2); if (unlikely(!__pyx_t_3)) goto __pyx_L7_unpacking_failed; - __Pyx_GOTREF(__pyx_t_3); - index = 1; __pyx_t_1 = __pyx_t_5(__pyx_t_2); if (unlikely(!__pyx_t_1)) goto __pyx_L7_unpacking_failed; - __Pyx_GOTREF(__pyx_t_1); - if (__Pyx_IternextUnpackEndCheck(__pyx_t_5(__pyx_t_2), 2) < 0) __PYX_ERR(0, 162, __pyx_L1_error) - __pyx_t_5 = NULL; - __Pyx_DECREF(__pyx_t_2); __pyx_t_2 = 0; - goto __pyx_L8_unpacking_done; - __pyx_L7_unpacking_failed:; - __Pyx_DECREF(__pyx_t_2); __pyx_t_2 = 0; - __pyx_t_5 = NULL; - if (__Pyx_IterFinish() == 0) __Pyx_RaiseNeedMoreValuesError(index); - __PYX_ERR(0, 162, __pyx_L1_error) - __pyx_L8_unpacking_done:; - } - __pyx_t_6 = __pyx_PyFloat_AsDouble(__pyx_t_3); if (unlikely((__pyx_t_6 == (double)-1) && PyErr_Occurred())) __PYX_ERR(0, 162, __pyx_L1_error) - __Pyx_DECREF(__pyx_t_3); __pyx_t_3 = 0; - __pyx_t_7 = __pyx_PyFloat_AsDouble(__pyx_t_1); if (unlikely((__pyx_t_7 == (double)-1) && PyErr_Occurred())) __PYX_ERR(0, 162, __pyx_L1_error) - __Pyx_DECREF(__pyx_t_1); __pyx_t_1 = 0; - __pyx_v_x2 = __pyx_t_6; - __pyx_v_y2 = __pyx_t_7; - - /* "fontTools/pens/momentsPen.py":164 - * x2, y2 = p2 - * - * r0 = 2 * y1 # <<<<<<<<<<<<<< - * r1 = r0 * x2 - * r2 = x2 * y2 - */ - __pyx_v_r0 = (2.0 * __pyx_v_y1); - - /* "fontTools/pens/momentsPen.py":165 - * - * r0 = 2 * y1 - * r1 = r0 * x2 # <<<<<<<<<<<<<< - * r2 = x2 * y2 - * r3 = 3 * r2 - */ - __pyx_v_r1 = (__pyx_v_r0 * __pyx_v_x2); - - /* "fontTools/pens/momentsPen.py":166 - * r0 = 2 * y1 - * r1 = r0 * x2 - * r2 = x2 * y2 # <<<<<<<<<<<<<< - * r3 = 3 * r2 - * r4 = 2 * x1 - */ - __pyx_v_r2 = (__pyx_v_x2 * __pyx_v_y2); - - /* "fontTools/pens/momentsPen.py":167 - * r1 = r0 * x2 - * r2 = x2 * y2 - * r3 = 3 * r2 # <<<<<<<<<<<<<< - * r4 = 2 * x1 - * r5 = 3 * y0 - */ - __pyx_v_r3 = (3.0 * __pyx_v_r2); - - /* "fontTools/pens/momentsPen.py":168 - * r2 = x2 * y2 - * r3 = 3 * r2 - * r4 = 2 * x1 # <<<<<<<<<<<<<< - * r5 = 3 * y0 - * r6 = x1**2 - */ - __pyx_v_r4 = (2.0 * __pyx_v_x1); - - /* "fontTools/pens/momentsPen.py":169 - * r3 = 3 * r2 - * r4 = 2 * x1 - * r5 = 3 * y0 # <<<<<<<<<<<<<< - * r6 = x1**2 - * r7 = x2**2 - */ - __pyx_v_r5 = (3.0 * __pyx_v_y0); - - /* "fontTools/pens/momentsPen.py":170 - * r4 = 2 * x1 - * r5 = 3 * y0 - * r6 = x1**2 # <<<<<<<<<<<<<< - * r7 = x2**2 - * r8 = 4 * y1 - */ - __pyx_v_r6 = pow(__pyx_v_x1, 2.0); - - /* "fontTools/pens/momentsPen.py":171 - * r5 = 3 * y0 - * r6 = x1**2 - * r7 = x2**2 # <<<<<<<<<<<<<< - * r8 = 4 * y1 - * r9 = 10 * y2 - */ - __pyx_v_r7 = pow(__pyx_v_x2, 2.0); - - /* "fontTools/pens/momentsPen.py":172 - * r6 = x1**2 - * r7 = x2**2 - * r8 = 4 * y1 # <<<<<<<<<<<<<< - * r9 = 10 * y2 - * r10 = 2 * y2 - */ - __pyx_v_r8 = (4.0 * __pyx_v_y1); - - /* "fontTools/pens/momentsPen.py":173 - * r7 = 
x2**2 - * r8 = 4 * y1 - * r9 = 10 * y2 # <<<<<<<<<<<<<< - * r10 = 2 * y2 - * r11 = r4 * x2 - */ - __pyx_v_r9 = (10.0 * __pyx_v_y2); - - /* "fontTools/pens/momentsPen.py":174 - * r8 = 4 * y1 - * r9 = 10 * y2 - * r10 = 2 * y2 # <<<<<<<<<<<<<< - * r11 = r4 * x2 - * r12 = x0**2 - */ - __pyx_v_r10 = (2.0 * __pyx_v_y2); - - /* "fontTools/pens/momentsPen.py":175 - * r9 = 10 * y2 - * r10 = 2 * y2 - * r11 = r4 * x2 # <<<<<<<<<<<<<< - * r12 = x0**2 - * r13 = 10 * y0 - */ - __pyx_v_r11 = (__pyx_v_r4 * __pyx_v_x2); - - /* "fontTools/pens/momentsPen.py":176 - * r10 = 2 * y2 - * r11 = r4 * x2 - * r12 = x0**2 # <<<<<<<<<<<<<< - * r13 = 10 * y0 - * r14 = r4 * y2 - */ - __pyx_v_r12 = pow(__pyx_v_x0, 2.0); - - /* "fontTools/pens/momentsPen.py":177 - * r11 = r4 * x2 - * r12 = x0**2 - * r13 = 10 * y0 # <<<<<<<<<<<<<< - * r14 = r4 * y2 - * r15 = x2 * y0 - */ - __pyx_v_r13 = (10.0 * __pyx_v_y0); - - /* "fontTools/pens/momentsPen.py":178 - * r12 = x0**2 - * r13 = 10 * y0 - * r14 = r4 * y2 # <<<<<<<<<<<<<< - * r15 = x2 * y0 - * r16 = 4 * x1 - */ - __pyx_v_r14 = (__pyx_v_r4 * __pyx_v_y2); - - /* "fontTools/pens/momentsPen.py":179 - * r13 = 10 * y0 - * r14 = r4 * y2 - * r15 = x2 * y0 # <<<<<<<<<<<<<< - * r16 = 4 * x1 - * r17 = r0 * x1 + r2 - */ - __pyx_v_r15 = (__pyx_v_x2 * __pyx_v_y0); - - /* "fontTools/pens/momentsPen.py":180 - * r14 = r4 * y2 - * r15 = x2 * y0 - * r16 = 4 * x1 # <<<<<<<<<<<<<< - * r17 = r0 * x1 + r2 - * r18 = r2 * r8 - */ - __pyx_v_r16 = (4.0 * __pyx_v_x1); - - /* "fontTools/pens/momentsPen.py":181 - * r15 = x2 * y0 - * r16 = 4 * x1 - * r17 = r0 * x1 + r2 # <<<<<<<<<<<<<< - * r18 = r2 * r8 - * r19 = y1**2 - */ - __pyx_v_r17 = ((__pyx_v_r0 * __pyx_v_x1) + __pyx_v_r2); - - /* "fontTools/pens/momentsPen.py":182 - * r16 = 4 * x1 - * r17 = r0 * x1 + r2 - * r18 = r2 * r8 # <<<<<<<<<<<<<< - * r19 = y1**2 - * r20 = 2 * r19 - */ - __pyx_v_r18 = (__pyx_v_r2 * __pyx_v_r8); - - /* "fontTools/pens/momentsPen.py":183 - * r17 = r0 * x1 + r2 - * r18 = r2 * r8 - * r19 = y1**2 # <<<<<<<<<<<<<< - * r20 = 2 * r19 - * r21 = y2**2 - */ - __pyx_v_r19 = pow(__pyx_v_y1, 2.0); - - /* "fontTools/pens/momentsPen.py":184 - * r18 = r2 * r8 - * r19 = y1**2 - * r20 = 2 * r19 # <<<<<<<<<<<<<< - * r21 = y2**2 - * r22 = r21 * x2 - */ - __pyx_v_r20 = (2.0 * __pyx_v_r19); - - /* "fontTools/pens/momentsPen.py":185 - * r19 = y1**2 - * r20 = 2 * r19 - * r21 = y2**2 # <<<<<<<<<<<<<< - * r22 = r21 * x2 - * r23 = 5 * r22 - */ - __pyx_v_r21 = pow(__pyx_v_y2, 2.0); - - /* "fontTools/pens/momentsPen.py":186 - * r20 = 2 * r19 - * r21 = y2**2 - * r22 = r21 * x2 # <<<<<<<<<<<<<< - * r23 = 5 * r22 - * r24 = y0**2 - */ - __pyx_v_r22 = (__pyx_v_r21 * __pyx_v_x2); - - /* "fontTools/pens/momentsPen.py":187 - * r21 = y2**2 - * r22 = r21 * x2 - * r23 = 5 * r22 # <<<<<<<<<<<<<< - * r24 = y0**2 - * r25 = y0 * y2 - */ - __pyx_v_r23 = (5.0 * __pyx_v_r22); - - /* "fontTools/pens/momentsPen.py":188 - * r22 = r21 * x2 - * r23 = 5 * r22 - * r24 = y0**2 # <<<<<<<<<<<<<< - * r25 = y0 * y2 - * r26 = 5 * r24 - */ - __pyx_v_r24 = pow(__pyx_v_y0, 2.0); - - /* "fontTools/pens/momentsPen.py":189 - * r23 = 5 * r22 - * r24 = y0**2 - * r25 = y0 * y2 # <<<<<<<<<<<<<< - * r26 = 5 * r24 - * r27 = x1**3 - */ - __pyx_v_r25 = (__pyx_v_y0 * __pyx_v_y2); - - /* "fontTools/pens/momentsPen.py":190 - * r24 = y0**2 - * r25 = y0 * y2 - * r26 = 5 * r24 # <<<<<<<<<<<<<< - * r27 = x1**3 - * r28 = x2**3 - */ - __pyx_v_r26 = (5.0 * __pyx_v_r24); - - /* "fontTools/pens/momentsPen.py":191 - * r25 = y0 * y2 - * r26 = 5 * r24 - * r27 = x1**3 # <<<<<<<<<<<<<< - * r28 = x2**3 - * r29 = 30 * 
y1 - */ - __pyx_v_r27 = pow(__pyx_v_x1, 3.0); - - /* "fontTools/pens/momentsPen.py":192 - * r26 = 5 * r24 - * r27 = x1**3 - * r28 = x2**3 # <<<<<<<<<<<<<< - * r29 = 30 * y1 - * r30 = 6 * y1 - */ - __pyx_v_r28 = pow(__pyx_v_x2, 3.0); - - /* "fontTools/pens/momentsPen.py":193 - * r27 = x1**3 - * r28 = x2**3 - * r29 = 30 * y1 # <<<<<<<<<<<<<< - * r30 = 6 * y1 - * r31 = 10 * r7 * x1 - */ - __pyx_v_r29 = (30.0 * __pyx_v_y1); - - /* "fontTools/pens/momentsPen.py":194 - * r28 = x2**3 - * r29 = 30 * y1 - * r30 = 6 * y1 # <<<<<<<<<<<<<< - * r31 = 10 * r7 * x1 - * r32 = 5 * y2 - */ - __pyx_v_r30 = (6.0 * __pyx_v_y1); - - /* "fontTools/pens/momentsPen.py":195 - * r29 = 30 * y1 - * r30 = 6 * y1 - * r31 = 10 * r7 * x1 # <<<<<<<<<<<<<< - * r32 = 5 * y2 - * r33 = 12 * r6 - */ - __pyx_v_r31 = ((10.0 * __pyx_v_r7) * __pyx_v_x1); - - /* "fontTools/pens/momentsPen.py":196 - * r30 = 6 * y1 - * r31 = 10 * r7 * x1 - * r32 = 5 * y2 # <<<<<<<<<<<<<< - * r33 = 12 * r6 - * r34 = 30 * x1 - */ - __pyx_v_r32 = (5.0 * __pyx_v_y2); - - /* "fontTools/pens/momentsPen.py":197 - * r31 = 10 * r7 * x1 - * r32 = 5 * y2 - * r33 = 12 * r6 # <<<<<<<<<<<<<< - * r34 = 30 * x1 - * r35 = x1 * y1 - */ - __pyx_v_r33 = (12.0 * __pyx_v_r6); - - /* "fontTools/pens/momentsPen.py":198 - * r32 = 5 * y2 - * r33 = 12 * r6 - * r34 = 30 * x1 # <<<<<<<<<<<<<< - * r35 = x1 * y1 - * r36 = r3 + 20 * r35 - */ - __pyx_v_r34 = (30.0 * __pyx_v_x1); - - /* "fontTools/pens/momentsPen.py":199 - * r33 = 12 * r6 - * r34 = 30 * x1 - * r35 = x1 * y1 # <<<<<<<<<<<<<< - * r36 = r3 + 20 * r35 - * r37 = 12 * x1 - */ - __pyx_v_r35 = (__pyx_v_x1 * __pyx_v_y1); - - /* "fontTools/pens/momentsPen.py":200 - * r34 = 30 * x1 - * r35 = x1 * y1 - * r36 = r3 + 20 * r35 # <<<<<<<<<<<<<< - * r37 = 12 * x1 - * r38 = 20 * r6 - */ - __pyx_v_r36 = (__pyx_v_r3 + (20.0 * __pyx_v_r35)); - - /* "fontTools/pens/momentsPen.py":201 - * r35 = x1 * y1 - * r36 = r3 + 20 * r35 - * r37 = 12 * x1 # <<<<<<<<<<<<<< - * r38 = 20 * r6 - * r39 = 8 * r6 * y1 - */ - __pyx_v_r37 = (12.0 * __pyx_v_x1); - - /* "fontTools/pens/momentsPen.py":202 - * r36 = r3 + 20 * r35 - * r37 = 12 * x1 - * r38 = 20 * r6 # <<<<<<<<<<<<<< - * r39 = 8 * r6 * y1 - * r40 = r32 * r7 - */ - __pyx_v_r38 = (20.0 * __pyx_v_r6); - - /* "fontTools/pens/momentsPen.py":203 - * r37 = 12 * x1 - * r38 = 20 * r6 - * r39 = 8 * r6 * y1 # <<<<<<<<<<<<<< - * r40 = r32 * r7 - * r41 = 60 * y1 - */ - __pyx_v_r39 = ((8.0 * __pyx_v_r6) * __pyx_v_y1); - - /* "fontTools/pens/momentsPen.py":204 - * r38 = 20 * r6 - * r39 = 8 * r6 * y1 - * r40 = r32 * r7 # <<<<<<<<<<<<<< - * r41 = 60 * y1 - * r42 = 20 * r19 - */ - __pyx_v_r40 = (__pyx_v_r32 * __pyx_v_r7); - - /* "fontTools/pens/momentsPen.py":205 - * r39 = 8 * r6 * y1 - * r40 = r32 * r7 - * r41 = 60 * y1 # <<<<<<<<<<<<<< - * r42 = 20 * r19 - * r43 = 4 * r19 - */ - __pyx_v_r41 = (60.0 * __pyx_v_y1); - - /* "fontTools/pens/momentsPen.py":206 - * r40 = r32 * r7 - * r41 = 60 * y1 - * r42 = 20 * r19 # <<<<<<<<<<<<<< - * r43 = 4 * r19 - * r44 = 15 * r21 - */ - __pyx_v_r42 = (20.0 * __pyx_v_r19); - - /* "fontTools/pens/momentsPen.py":207 - * r41 = 60 * y1 - * r42 = 20 * r19 - * r43 = 4 * r19 # <<<<<<<<<<<<<< - * r44 = 15 * r21 - * r45 = 12 * x2 - */ - __pyx_v_r43 = (4.0 * __pyx_v_r19); - - /* "fontTools/pens/momentsPen.py":208 - * r42 = 20 * r19 - * r43 = 4 * r19 - * r44 = 15 * r21 # <<<<<<<<<<<<<< - * r45 = 12 * x2 - * r46 = 12 * y2 - */ - __pyx_v_r44 = (15.0 * __pyx_v_r21); - - /* "fontTools/pens/momentsPen.py":209 - * r43 = 4 * r19 - * r44 = 15 * r21 - * r45 = 12 * x2 # <<<<<<<<<<<<<< - * r46 = 12 * y2 - 
* r47 = 6 * x1 - */ - __pyx_v_r45 = (12.0 * __pyx_v_x2); - - /* "fontTools/pens/momentsPen.py":210 - * r44 = 15 * r21 - * r45 = 12 * x2 - * r46 = 12 * y2 # <<<<<<<<<<<<<< - * r47 = 6 * x1 - * r48 = 8 * r19 * x1 + r23 - */ - __pyx_v_r46 = (12.0 * __pyx_v_y2); - - /* "fontTools/pens/momentsPen.py":211 - * r45 = 12 * x2 - * r46 = 12 * y2 - * r47 = 6 * x1 # <<<<<<<<<<<<<< - * r48 = 8 * r19 * x1 + r23 - * r49 = 8 * y1**3 - */ - __pyx_v_r47 = (6.0 * __pyx_v_x1); - - /* "fontTools/pens/momentsPen.py":212 - * r46 = 12 * y2 - * r47 = 6 * x1 - * r48 = 8 * r19 * x1 + r23 # <<<<<<<<<<<<<< - * r49 = 8 * y1**3 - * r50 = y2**3 - */ - __pyx_v_r48 = (((8.0 * __pyx_v_r19) * __pyx_v_x1) + __pyx_v_r23); - - /* "fontTools/pens/momentsPen.py":213 - * r47 = 6 * x1 - * r48 = 8 * r19 * x1 + r23 - * r49 = 8 * y1**3 # <<<<<<<<<<<<<< - * r50 = y2**3 - * r51 = y0**3 - */ - __pyx_v_r49 = (8.0 * pow(__pyx_v_y1, 3.0)); - - /* "fontTools/pens/momentsPen.py":214 - * r48 = 8 * r19 * x1 + r23 - * r49 = 8 * y1**3 - * r50 = y2**3 # <<<<<<<<<<<<<< - * r51 = y0**3 - * r52 = 10 * y1 - */ - __pyx_v_r50 = pow(__pyx_v_y2, 3.0); - - /* "fontTools/pens/momentsPen.py":215 - * r49 = 8 * y1**3 - * r50 = y2**3 - * r51 = y0**3 # <<<<<<<<<<<<<< - * r52 = 10 * y1 - * r53 = 12 * y1 - */ - __pyx_v_r51 = pow(__pyx_v_y0, 3.0); - - /* "fontTools/pens/momentsPen.py":216 - * r50 = y2**3 - * r51 = y0**3 - * r52 = 10 * y1 # <<<<<<<<<<<<<< - * r53 = 12 * y1 - * - */ - __pyx_v_r52 = (10.0 * __pyx_v_y1); - - /* "fontTools/pens/momentsPen.py":217 - * r51 = y0**3 - * r52 = 10 * y1 - * r53 = 12 * y1 # <<<<<<<<<<<<<< - * - * self.area += ( - */ - __pyx_v_r53 = (12.0 * __pyx_v_y1); - - /* "fontTools/pens/momentsPen.py":219 - * r53 = 12 * y1 - * - * self.area += ( # <<<<<<<<<<<<<< - * -r1 / 6 - * - r3 / 6 - */ - __pyx_t_1 = __Pyx_PyObject_GetAttrStr(__pyx_v_self, __pyx_n_s_area); if (unlikely(!__pyx_t_1)) __PYX_ERR(0, 219, __pyx_L1_error) - __Pyx_GOTREF(__pyx_t_1); - - /* "fontTools/pens/momentsPen.py":224 - * + x0 * (r0 + r5 + y2) / 6 - * + x1 * y2 / 3 - * - y0 * (r4 + x2) / 6 # <<<<<<<<<<<<<< - * ) - * self.momentX += ( - */ - __pyx_t_3 = PyFloat_FromDouble(((((((-__pyx_v_r1) / 6.0) - (__pyx_v_r3 / 6.0)) + ((__pyx_v_x0 * ((__pyx_v_r0 + __pyx_v_r5) + __pyx_v_y2)) / 6.0)) + ((__pyx_v_x1 * __pyx_v_y2) / 3.0)) - ((__pyx_v_y0 * (__pyx_v_r4 + __pyx_v_x2)) / 6.0))); if (unlikely(!__pyx_t_3)) __PYX_ERR(0, 224, __pyx_L1_error) - __Pyx_GOTREF(__pyx_t_3); - - /* "fontTools/pens/momentsPen.py":219 - * r53 = 12 * y1 - * - * self.area += ( # <<<<<<<<<<<<<< - * -r1 / 6 - * - r3 / 6 - */ - __pyx_t_2 = PyNumber_InPlaceAdd(__pyx_t_1, __pyx_t_3); if (unlikely(!__pyx_t_2)) __PYX_ERR(0, 219, __pyx_L1_error) - __Pyx_GOTREF(__pyx_t_2); - __Pyx_DECREF(__pyx_t_1); __pyx_t_1 = 0; - __Pyx_DECREF(__pyx_t_3); __pyx_t_3 = 0; - if (__Pyx_PyObject_SetAttrStr(__pyx_v_self, __pyx_n_s_area, __pyx_t_2) < 0) __PYX_ERR(0, 219, __pyx_L1_error) - __Pyx_DECREF(__pyx_t_2); __pyx_t_2 = 0; - - /* "fontTools/pens/momentsPen.py":226 - * - y0 * (r4 + x2) / 6 - * ) - * self.momentX += ( # <<<<<<<<<<<<<< - * -r11 * (-r10 + y1) / 30 - * + r12 * (r13 + r8 + y2) / 30 - */ - __pyx_t_2 = __Pyx_PyObject_GetAttrStr(__pyx_v_self, __pyx_n_s_momentX); if (unlikely(!__pyx_t_2)) __PYX_ERR(0, 226, __pyx_L1_error) - __Pyx_GOTREF(__pyx_t_2); - - /* "fontTools/pens/momentsPen.py":233 - * - r7 * r9 / 30 - * + x0 * (r14 - r15 - r16 * y0 + r17) / 30 - * - y0 * (r11 + 2 * r6 + r7) / 30 # <<<<<<<<<<<<<< - * ) - * self.momentY += ( - */ - __pyx_t_3 = PyFloat_FromDouble((((((((((-__pyx_v_r11) * ((-__pyx_v_r10) + __pyx_v_y1)) / 
30.0) + ((__pyx_v_r12 * ((__pyx_v_r13 + __pyx_v_r8) + __pyx_v_y2)) / 30.0)) + ((__pyx_v_r6 * __pyx_v_y2) / 15.0)) - ((__pyx_v_r7 * __pyx_v_r8) / 30.0)) - ((__pyx_v_r7 * __pyx_v_r9) / 30.0)) + ((__pyx_v_x0 * (((__pyx_v_r14 - __pyx_v_r15) - (__pyx_v_r16 * __pyx_v_y0)) + __pyx_v_r17)) / 30.0)) - ((__pyx_v_y0 * ((__pyx_v_r11 + (2.0 * __pyx_v_r6)) + __pyx_v_r7)) / 30.0))); if (unlikely(!__pyx_t_3)) __PYX_ERR(0, 233, __pyx_L1_error) - __Pyx_GOTREF(__pyx_t_3); - - /* "fontTools/pens/momentsPen.py":226 - * - y0 * (r4 + x2) / 6 - * ) - * self.momentX += ( # <<<<<<<<<<<<<< - * -r11 * (-r10 + y1) / 30 - * + r12 * (r13 + r8 + y2) / 30 - */ - __pyx_t_1 = PyNumber_InPlaceAdd(__pyx_t_2, __pyx_t_3); if (unlikely(!__pyx_t_1)) __PYX_ERR(0, 226, __pyx_L1_error) - __Pyx_GOTREF(__pyx_t_1); - __Pyx_DECREF(__pyx_t_2); __pyx_t_2 = 0; - __Pyx_DECREF(__pyx_t_3); __pyx_t_3 = 0; - if (__Pyx_PyObject_SetAttrStr(__pyx_v_self, __pyx_n_s_momentX, __pyx_t_1) < 0) __PYX_ERR(0, 226, __pyx_L1_error) - __Pyx_DECREF(__pyx_t_1); __pyx_t_1 = 0; - - /* "fontTools/pens/momentsPen.py":235 - * - y0 * (r11 + 2 * r6 + r7) / 30 - * ) - * self.momentY += ( # <<<<<<<<<<<<<< - * -r18 / 30 - * - r20 * x2 / 30 - */ - __pyx_t_1 = __Pyx_PyObject_GetAttrStr(__pyx_v_self, __pyx_n_s_momentY); if (unlikely(!__pyx_t_1)) __PYX_ERR(0, 235, __pyx_L1_error) - __Pyx_GOTREF(__pyx_t_1); - - /* "fontTools/pens/momentsPen.py":242 - * + x0 * (r0 * y2 + r20 + r21 + r25 + r26 + r8 * y0) / 30 - * + x1 * y2 * (r10 + y1) / 15 - * - y0 * (r1 + r17) / 30 # <<<<<<<<<<<<<< - * ) - * self.momentXX += ( - */ - __pyx_t_3 = PyFloat_FromDouble(((((((((-__pyx_v_r18) / 30.0) - ((__pyx_v_r20 * __pyx_v_x2) / 30.0)) - (__pyx_v_r23 / 30.0)) - ((__pyx_v_r24 * (__pyx_v_r16 + __pyx_v_x2)) / 30.0)) + ((__pyx_v_x0 * ((((((__pyx_v_r0 * __pyx_v_y2) + __pyx_v_r20) + __pyx_v_r21) + __pyx_v_r25) + __pyx_v_r26) + (__pyx_v_r8 * __pyx_v_y0))) / 30.0)) + (((__pyx_v_x1 * __pyx_v_y2) * (__pyx_v_r10 + __pyx_v_y1)) / 15.0)) - ((__pyx_v_y0 * (__pyx_v_r1 + __pyx_v_r17)) / 30.0))); if (unlikely(!__pyx_t_3)) __PYX_ERR(0, 242, __pyx_L1_error) - __Pyx_GOTREF(__pyx_t_3); - - /* "fontTools/pens/momentsPen.py":235 - * - y0 * (r11 + 2 * r6 + r7) / 30 - * ) - * self.momentY += ( # <<<<<<<<<<<<<< - * -r18 / 30 - * - r20 * x2 / 30 - */ - __pyx_t_2 = PyNumber_InPlaceAdd(__pyx_t_1, __pyx_t_3); if (unlikely(!__pyx_t_2)) __PYX_ERR(0, 235, __pyx_L1_error) - __Pyx_GOTREF(__pyx_t_2); - __Pyx_DECREF(__pyx_t_1); __pyx_t_1 = 0; - __Pyx_DECREF(__pyx_t_3); __pyx_t_3 = 0; - if (__Pyx_PyObject_SetAttrStr(__pyx_v_self, __pyx_n_s_momentY, __pyx_t_2) < 0) __PYX_ERR(0, 235, __pyx_L1_error) - __Pyx_DECREF(__pyx_t_2); __pyx_t_2 = 0; - - /* "fontTools/pens/momentsPen.py":244 - * - y0 * (r1 + r17) / 30 - * ) - * self.momentXX += ( # <<<<<<<<<<<<<< - * r12 * (r1 - 5 * r15 - r34 * y0 + r36 + r9 * x1) / 420 - * + 2 * r27 * y2 / 105 - */ - __pyx_t_2 = __Pyx_PyObject_GetAttrStr(__pyx_v_self, __pyx_n_s_momentXX); if (unlikely(!__pyx_t_2)) __PYX_ERR(0, 244, __pyx_L1_error) - __Pyx_GOTREF(__pyx_t_2); - - /* "fontTools/pens/momentsPen.py":264 - * ) - * / 420 - * - y0 * (8 * r27 + 5 * r28 + r31 + r33 * x2) / 420 # <<<<<<<<<<<<<< - * ) - * self.momentXY += ( - */ - __pyx_t_3 = PyFloat_FromDouble(((((((((((__pyx_v_r12 * ((((__pyx_v_r1 - (5.0 * __pyx_v_r15)) - (__pyx_v_r34 * __pyx_v_y0)) + __pyx_v_r36) + (__pyx_v_r9 * __pyx_v_x1))) / 420.0) + (((2.0 * __pyx_v_r27) * __pyx_v_y2) / 105.0)) - ((__pyx_v_r28 * __pyx_v_r29) / 420.0)) - ((__pyx_v_r28 * __pyx_v_y2) / 4.0)) - ((__pyx_v_r31 * (__pyx_v_r0 - (3.0 * __pyx_v_y2))) / 420.0)) - 
(((__pyx_v_r6 * __pyx_v_x2) * (__pyx_v_r0 - __pyx_v_r32)) / 105.0)) + ((pow(__pyx_v_x0, 3.0) * ((__pyx_v_r30 + (21.0 * __pyx_v_y0)) + __pyx_v_y2)) / 84.0)) - ((__pyx_v_x0 * ((((((((__pyx_v_r0 * __pyx_v_r7) + (__pyx_v_r15 * __pyx_v_r37)) - (__pyx_v_r2 * __pyx_v_r37)) - (__pyx_v_r33 * __pyx_v_y2)) + (__pyx_v_r38 * __pyx_v_y0)) - __pyx_v_r39) - __pyx_v_r40) + (__pyx_v_r5 * __pyx_v_r7))) / 420.0)) - ((__pyx_v_y0 * ((((8.0 * __pyx_v_r27) + (5.0 * __pyx_v_r28)) + __pyx_v_r31) + (__pyx_v_r33 * __pyx_v_x2))) / 420.0))); if (unlikely(!__pyx_t_3)) __PYX_ERR(0, 264, __pyx_L1_error) - __Pyx_GOTREF(__pyx_t_3); - - /* "fontTools/pens/momentsPen.py":244 - * - y0 * (r1 + r17) / 30 - * ) - * self.momentXX += ( # <<<<<<<<<<<<<< - * r12 * (r1 - 5 * r15 - r34 * y0 + r36 + r9 * x1) / 420 - * + 2 * r27 * y2 / 105 - */ - __pyx_t_1 = PyNumber_InPlaceAdd(__pyx_t_2, __pyx_t_3); if (unlikely(!__pyx_t_1)) __PYX_ERR(0, 244, __pyx_L1_error) - __Pyx_GOTREF(__pyx_t_1); - __Pyx_DECREF(__pyx_t_2); __pyx_t_2 = 0; - __Pyx_DECREF(__pyx_t_3); __pyx_t_3 = 0; - if (__Pyx_PyObject_SetAttrStr(__pyx_v_self, __pyx_n_s_momentXX, __pyx_t_1) < 0) __PYX_ERR(0, 244, __pyx_L1_error) - __Pyx_DECREF(__pyx_t_1); __pyx_t_1 = 0; - - /* "fontTools/pens/momentsPen.py":266 - * - y0 * (8 * r27 + 5 * r28 + r31 + r33 * x2) / 420 - * ) - * self.momentXY += ( # <<<<<<<<<<<<<< - * r12 * (r13 * y2 + 3 * r21 + 105 * r24 + r41 * y0 + r42 + r46 * y1) / 840 - * - r16 * x2 * (r43 - r44) / 840 - */ - __pyx_t_1 = __Pyx_PyObject_GetAttrStr(__pyx_v_self, __pyx_n_s_momentXY); if (unlikely(!__pyx_t_1)) __PYX_ERR(0, 266, __pyx_L1_error) - __Pyx_GOTREF(__pyx_t_1); - - /* "fontTools/pens/momentsPen.py":286 - * ) - * / 420 - * - y0 * (r16 * r2 + r30 * r7 + r35 * r45 + r39 + r40) / 420 # <<<<<<<<<<<<<< - * ) - * self.momentYY += ( - */ - __pyx_t_3 = PyFloat_FromDouble(((((((((((__pyx_v_r12 * ((((((__pyx_v_r13 * __pyx_v_y2) + (3.0 * __pyx_v_r21)) + (105.0 * __pyx_v_r24)) + (__pyx_v_r41 * __pyx_v_y0)) + __pyx_v_r42) + (__pyx_v_r46 * __pyx_v_y1))) / 840.0) - (((__pyx_v_r16 * __pyx_v_x2) * (__pyx_v_r43 - __pyx_v_r44)) / 840.0)) - ((__pyx_v_r21 * __pyx_v_r7) / 8.0)) - ((__pyx_v_r24 * ((__pyx_v_r38 + (__pyx_v_r45 * __pyx_v_x1)) + (3.0 * __pyx_v_r7))) / 840.0)) - (((__pyx_v_r41 * __pyx_v_r7) * __pyx_v_y2) / 840.0)) - ((__pyx_v_r42 * __pyx_v_r7) / 840.0)) + (((__pyx_v_r6 * __pyx_v_y2) * (__pyx_v_r32 + __pyx_v_r8)) / 210.0)) + ((__pyx_v_x0 * (((((((((-__pyx_v_r15) * __pyx_v_r8) + (__pyx_v_r16 * __pyx_v_r25)) + __pyx_v_r18) + (__pyx_v_r21 * __pyx_v_r47)) - (__pyx_v_r24 * __pyx_v_r34)) - (__pyx_v_r26 * __pyx_v_x2)) + (__pyx_v_r35 * __pyx_v_r46)) + __pyx_v_r48)) / 420.0)) - ((__pyx_v_y0 * (((((__pyx_v_r16 * __pyx_v_r2) + (__pyx_v_r30 * __pyx_v_r7)) + (__pyx_v_r35 * __pyx_v_r45)) + __pyx_v_r39) + __pyx_v_r40)) / 420.0))); if (unlikely(!__pyx_t_3)) __PYX_ERR(0, 286, __pyx_L1_error) - __Pyx_GOTREF(__pyx_t_3); - - /* "fontTools/pens/momentsPen.py":266 - * - y0 * (8 * r27 + 5 * r28 + r31 + r33 * x2) / 420 - * ) - * self.momentXY += ( # <<<<<<<<<<<<<< - * r12 * (r13 * y2 + 3 * r21 + 105 * r24 + r41 * y0 + r42 + r46 * y1) / 840 - * - r16 * x2 * (r43 - r44) / 840 - */ - __pyx_t_2 = PyNumber_InPlaceAdd(__pyx_t_1, __pyx_t_3); if (unlikely(!__pyx_t_2)) __PYX_ERR(0, 266, __pyx_L1_error) - __Pyx_GOTREF(__pyx_t_2); - __Pyx_DECREF(__pyx_t_1); __pyx_t_1 = 0; - __Pyx_DECREF(__pyx_t_3); __pyx_t_3 = 0; - if (__Pyx_PyObject_SetAttrStr(__pyx_v_self, __pyx_n_s_momentXY, __pyx_t_2) < 0) __PYX_ERR(0, 266, __pyx_L1_error) - __Pyx_DECREF(__pyx_t_2); __pyx_t_2 = 0; - - /* "fontTools/pens/momentsPen.py":288 
- * - y0 * (r16 * r2 + r30 * r7 + r35 * r45 + r39 + r40) / 420 - * ) - * self.momentYY += ( # <<<<<<<<<<<<<< - * -r2 * r42 / 420 - * - r22 * r29 / 420 - */ - __pyx_t_2 = __Pyx_PyObject_GetAttrStr(__pyx_v_self, __pyx_n_s_momentYY); if (unlikely(!__pyx_t_2)) __PYX_ERR(0, 288, __pyx_L1_error) - __Pyx_GOTREF(__pyx_t_2); - - /* "fontTools/pens/momentsPen.py":310 - * / 420 - * + x1 * y2 * (r43 + r44 + r9 * y1) / 210 - * - y0 * (r19 * r45 + r2 * r53 - r21 * r4 + r48) / 420 # <<<<<<<<<<<<<< - * ) - * - */ - __pyx_t_3 = PyFloat_FromDouble((((((((((((-__pyx_v_r2) * __pyx_v_r42) / 420.0) - ((__pyx_v_r22 * __pyx_v_r29) / 420.0)) - ((__pyx_v_r24 * ((__pyx_v_r14 + __pyx_v_r36) + (__pyx_v_r52 * __pyx_v_x2))) / 420.0)) - ((__pyx_v_r49 * __pyx_v_x2) / 420.0)) - ((__pyx_v_r50 * __pyx_v_x2) / 12.0)) - ((__pyx_v_r51 * (__pyx_v_r47 + __pyx_v_x2)) / 84.0)) + ((__pyx_v_x0 * ((((((((((__pyx_v_r19 * __pyx_v_r46) + (__pyx_v_r21 * __pyx_v_r5)) + (__pyx_v_r21 * __pyx_v_r52)) + (__pyx_v_r24 * __pyx_v_r29)) + (__pyx_v_r25 * __pyx_v_r53)) + (__pyx_v_r26 * __pyx_v_y2)) + (__pyx_v_r42 * __pyx_v_y0)) + __pyx_v_r49) + (5.0 * __pyx_v_r50)) + (35.0 * __pyx_v_r51))) / 420.0)) + (((__pyx_v_x1 * __pyx_v_y2) * ((__pyx_v_r43 + __pyx_v_r44) + (__pyx_v_r9 * __pyx_v_y1))) / 210.0)) - ((__pyx_v_y0 * ((((__pyx_v_r19 * __pyx_v_r45) + (__pyx_v_r2 * __pyx_v_r53)) - (__pyx_v_r21 * __pyx_v_r4)) + __pyx_v_r48)) / 420.0))); if (unlikely(!__pyx_t_3)) __PYX_ERR(0, 310, __pyx_L1_error) - __Pyx_GOTREF(__pyx_t_3); - - /* "fontTools/pens/momentsPen.py":288 - * - y0 * (r16 * r2 + r30 * r7 + r35 * r45 + r39 + r40) / 420 - * ) - * self.momentYY += ( # <<<<<<<<<<<<<< - * -r2 * r42 / 420 - * - r22 * r29 / 420 - */ - __pyx_t_1 = PyNumber_InPlaceAdd(__pyx_t_2, __pyx_t_3); if (unlikely(!__pyx_t_1)) __PYX_ERR(0, 288, __pyx_L1_error) - __Pyx_GOTREF(__pyx_t_1); - __Pyx_DECREF(__pyx_t_2); __pyx_t_2 = 0; - __Pyx_DECREF(__pyx_t_3); __pyx_t_3 = 0; - if (__Pyx_PyObject_SetAttrStr(__pyx_v_self, __pyx_n_s_momentYY, __pyx_t_1) < 0) __PYX_ERR(0, 288, __pyx_L1_error) - __Pyx_DECREF(__pyx_t_1); __pyx_t_1 = 0; - - /* "fontTools/pens/momentsPen.py":159 - * @cython.locals(x1=cython.double, y1=cython.double) - * @cython.locals(x2=cython.double, y2=cython.double) - * def _qCurveToOne(self, p1, p2): # <<<<<<<<<<<<<< - * x0, y0 = self._getCurrentPoint() - * x1, y1 = p1 - */ - - /* function exit code */ - __pyx_r = Py_None; __Pyx_INCREF(Py_None); - goto __pyx_L0; - __pyx_L1_error:; - __Pyx_XDECREF(__pyx_t_1); - __Pyx_XDECREF(__pyx_t_2); - __Pyx_XDECREF(__pyx_t_3); - __Pyx_XDECREF(__pyx_t_4); - __Pyx_AddTraceback("fontTools.pens.momentsPen.MomentsPen._qCurveToOne", __pyx_clineno, __pyx_lineno, __pyx_filename); - __pyx_r = NULL; - __pyx_L0:; - __Pyx_XGIVEREF(__pyx_r); - __Pyx_RefNannyFinishContext(); - return __pyx_r; -} - -/* "fontTools/pens/momentsPen.py":450 - * @cython.locals(x2=cython.double, y2=cython.double) - * @cython.locals(x3=cython.double, y3=cython.double) - * def _curveToOne(self, p1, p2, p3): # <<<<<<<<<<<<<< - * x0, y0 = self._getCurrentPoint() - * x1, y1 = p1 - */ - -/* Python wrapper */ -static PyObject *__pyx_pw_9fontTools_4pens_10momentsPen_10MomentsPen_13_curveToOne(PyObject *__pyx_self, PyObject *__pyx_args, PyObject *__pyx_kwds); /*proto*/ -static char __pyx_doc_9fontTools_4pens_10momentsPen_10MomentsPen_12_curveToOne[] = "MomentsPen._curveToOne(self, p1, p2, p3)"; -static PyMethodDef __pyx_mdef_9fontTools_4pens_10momentsPen_10MomentsPen_13_curveToOne = {"_curveToOne", 
(PyCFunction)(void*)(PyCFunctionWithKeywords)__pyx_pw_9fontTools_4pens_10momentsPen_10MomentsPen_13_curveToOne, METH_VARARGS|METH_KEYWORDS, __pyx_doc_9fontTools_4pens_10momentsPen_10MomentsPen_12_curveToOne}; -static PyObject *__pyx_pw_9fontTools_4pens_10momentsPen_10MomentsPen_13_curveToOne(PyObject *__pyx_self, PyObject *__pyx_args, PyObject *__pyx_kwds) { - PyObject *__pyx_v_self = 0; - PyObject *__pyx_v_p1 = 0; - PyObject *__pyx_v_p2 = 0; - PyObject *__pyx_v_p3 = 0; - int __pyx_lineno = 0; - const char *__pyx_filename = NULL; - int __pyx_clineno = 0; - PyObject *__pyx_r = 0; - __Pyx_RefNannyDeclarations - __Pyx_RefNannySetupContext("_curveToOne (wrapper)", 0); - { - static PyObject **__pyx_pyargnames[] = {&__pyx_n_s_self,&__pyx_n_s_p1,&__pyx_n_s_p2,&__pyx_n_s_p3,0}; - PyObject* values[4] = {0,0,0,0}; - if (unlikely(__pyx_kwds)) { - Py_ssize_t kw_args; - const Py_ssize_t pos_args = PyTuple_GET_SIZE(__pyx_args); - switch (pos_args) { - case 4: values[3] = PyTuple_GET_ITEM(__pyx_args, 3); - CYTHON_FALLTHROUGH; - case 3: values[2] = PyTuple_GET_ITEM(__pyx_args, 2); - CYTHON_FALLTHROUGH; - case 2: values[1] = PyTuple_GET_ITEM(__pyx_args, 1); - CYTHON_FALLTHROUGH; - case 1: values[0] = PyTuple_GET_ITEM(__pyx_args, 0); - CYTHON_FALLTHROUGH; - case 0: break; - default: goto __pyx_L5_argtuple_error; - } - kw_args = PyDict_Size(__pyx_kwds); - switch (pos_args) { - case 0: - if (likely((values[0] = __Pyx_PyDict_GetItemStr(__pyx_kwds, __pyx_n_s_self)) != 0)) kw_args--; - else goto __pyx_L5_argtuple_error; - CYTHON_FALLTHROUGH; - case 1: - if (likely((values[1] = __Pyx_PyDict_GetItemStr(__pyx_kwds, __pyx_n_s_p1)) != 0)) kw_args--; - else { - __Pyx_RaiseArgtupleInvalid("_curveToOne", 1, 4, 4, 1); __PYX_ERR(0, 450, __pyx_L3_error) - } - CYTHON_FALLTHROUGH; - case 2: - if (likely((values[2] = __Pyx_PyDict_GetItemStr(__pyx_kwds, __pyx_n_s_p2)) != 0)) kw_args--; - else { - __Pyx_RaiseArgtupleInvalid("_curveToOne", 1, 4, 4, 2); __PYX_ERR(0, 450, __pyx_L3_error) - } - CYTHON_FALLTHROUGH; - case 3: - if (likely((values[3] = __Pyx_PyDict_GetItemStr(__pyx_kwds, __pyx_n_s_p3)) != 0)) kw_args--; - else { - __Pyx_RaiseArgtupleInvalid("_curveToOne", 1, 4, 4, 3); __PYX_ERR(0, 450, __pyx_L3_error) - } - } - if (unlikely(kw_args > 0)) { - if (unlikely(__Pyx_ParseOptionalKeywords(__pyx_kwds, __pyx_pyargnames, 0, values, pos_args, "_curveToOne") < 0)) __PYX_ERR(0, 450, __pyx_L3_error) - } - } else if (PyTuple_GET_SIZE(__pyx_args) != 4) { - goto __pyx_L5_argtuple_error; - } else { - values[0] = PyTuple_GET_ITEM(__pyx_args, 0); - values[1] = PyTuple_GET_ITEM(__pyx_args, 1); - values[2] = PyTuple_GET_ITEM(__pyx_args, 2); - values[3] = PyTuple_GET_ITEM(__pyx_args, 3); - } - __pyx_v_self = values[0]; - __pyx_v_p1 = values[1]; - __pyx_v_p2 = values[2]; - __pyx_v_p3 = values[3]; - } - goto __pyx_L4_argument_unpacking_done; - __pyx_L5_argtuple_error:; - __Pyx_RaiseArgtupleInvalid("_curveToOne", 1, 4, 4, PyTuple_GET_SIZE(__pyx_args)); __PYX_ERR(0, 450, __pyx_L3_error) - __pyx_L3_error:; - __Pyx_AddTraceback("fontTools.pens.momentsPen.MomentsPen._curveToOne", __pyx_clineno, __pyx_lineno, __pyx_filename); - __Pyx_RefNannyFinishContext(); - return NULL; - __pyx_L4_argument_unpacking_done:; - __pyx_r = __pyx_pf_9fontTools_4pens_10momentsPen_10MomentsPen_12_curveToOne(__pyx_self, __pyx_v_self, __pyx_v_p1, __pyx_v_p2, __pyx_v_p3); - - /* function exit code */ - __Pyx_RefNannyFinishContext(); - return __pyx_r; -} - -static PyObject *__pyx_pf_9fontTools_4pens_10momentsPen_10MomentsPen_12_curveToOne(CYTHON_UNUSED PyObject *__pyx_self, 
PyObject *__pyx_v_self, PyObject *__pyx_v_p1, PyObject *__pyx_v_p2, PyObject *__pyx_v_p3) { - double __pyx_v_x3; - double __pyx_v_y3; - double __pyx_v_x2; - double __pyx_v_y2; - double __pyx_v_x1; - double __pyx_v_y1; - double __pyx_v_x0; - double __pyx_v_y0; - double __pyx_v_r132; - double __pyx_v_r131; - double __pyx_v_r130; - double __pyx_v_r129; - double __pyx_v_r128; - double __pyx_v_r127; - double __pyx_v_r126; - double __pyx_v_r125; - double __pyx_v_r124; - double __pyx_v_r123; - double __pyx_v_r122; - double __pyx_v_r121; - double __pyx_v_r120; - double __pyx_v_r119; - double __pyx_v_r118; - double __pyx_v_r117; - double __pyx_v_r116; - double __pyx_v_r115; - double __pyx_v_r114; - double __pyx_v_r113; - double __pyx_v_r112; - double __pyx_v_r111; - double __pyx_v_r110; - double __pyx_v_r109; - double __pyx_v_r108; - double __pyx_v_r107; - double __pyx_v_r106; - double __pyx_v_r105; - double __pyx_v_r104; - double __pyx_v_r103; - double __pyx_v_r102; - double __pyx_v_r101; - double __pyx_v_r100; - double __pyx_v_r99; - double __pyx_v_r98; - double __pyx_v_r97; - double __pyx_v_r96; - double __pyx_v_r95; - double __pyx_v_r94; - double __pyx_v_r93; - double __pyx_v_r92; - double __pyx_v_r91; - double __pyx_v_r90; - double __pyx_v_r89; - double __pyx_v_r88; - double __pyx_v_r87; - double __pyx_v_r86; - double __pyx_v_r85; - double __pyx_v_r84; - double __pyx_v_r83; - double __pyx_v_r82; - double __pyx_v_r81; - double __pyx_v_r80; - double __pyx_v_r79; - double __pyx_v_r78; - double __pyx_v_r77; - double __pyx_v_r76; - double __pyx_v_r75; - double __pyx_v_r74; - double __pyx_v_r73; - double __pyx_v_r72; - double __pyx_v_r71; - double __pyx_v_r70; - double __pyx_v_r69; - double __pyx_v_r68; - double __pyx_v_r67; - double __pyx_v_r66; - double __pyx_v_r65; - double __pyx_v_r64; - double __pyx_v_r63; - double __pyx_v_r62; - double __pyx_v_r61; - double __pyx_v_r60; - double __pyx_v_r59; - double __pyx_v_r58; - double __pyx_v_r57; - double __pyx_v_r56; - double __pyx_v_r55; - double __pyx_v_r54; - double __pyx_v_r53; - double __pyx_v_r52; - double __pyx_v_r51; - double __pyx_v_r50; - double __pyx_v_r49; - double __pyx_v_r48; - double __pyx_v_r47; - double __pyx_v_r46; - double __pyx_v_r45; - double __pyx_v_r44; - double __pyx_v_r43; - double __pyx_v_r42; - double __pyx_v_r41; - double __pyx_v_r40; - double __pyx_v_r39; - double __pyx_v_r38; - double __pyx_v_r37; - double __pyx_v_r36; - double __pyx_v_r35; - double __pyx_v_r34; - double __pyx_v_r33; - double __pyx_v_r32; - double __pyx_v_r31; - double __pyx_v_r30; - double __pyx_v_r29; - double __pyx_v_r28; - double __pyx_v_r27; - double __pyx_v_r26; - double __pyx_v_r25; - double __pyx_v_r24; - double __pyx_v_r23; - double __pyx_v_r22; - double __pyx_v_r21; - double __pyx_v_r20; - double __pyx_v_r19; - double __pyx_v_r18; - double __pyx_v_r17; - double __pyx_v_r16; - double __pyx_v_r15; - double __pyx_v_r14; - double __pyx_v_r13; - double __pyx_v_r12; - double __pyx_v_r11; - double __pyx_v_r10; - double __pyx_v_r9; - double __pyx_v_r8; - double __pyx_v_r7; - double __pyx_v_r6; - double __pyx_v_r5; - double __pyx_v_r4; - double __pyx_v_r3; - double __pyx_v_r2; - double __pyx_v_r1; - double __pyx_v_r0; - PyObject *__pyx_r = NULL; - __Pyx_RefNannyDeclarations - PyObject *__pyx_t_1 = NULL; - PyObject *__pyx_t_2 = NULL; - PyObject *__pyx_t_3 = NULL; - PyObject *__pyx_t_4 = NULL; - PyObject *(*__pyx_t_5)(PyObject *); - double __pyx_t_6; - double __pyx_t_7; - int __pyx_lineno = 0; - const char *__pyx_filename = NULL; - int __pyx_clineno = 0; - 
__Pyx_RefNannySetupContext("_curveToOne", 0); - - /* "fontTools/pens/momentsPen.py":451 - * @cython.locals(x3=cython.double, y3=cython.double) - * def _curveToOne(self, p1, p2, p3): - * x0, y0 = self._getCurrentPoint() # <<<<<<<<<<<<<< - * x1, y1 = p1 - * x2, y2 = p2 - */ - __pyx_t_2 = __Pyx_PyObject_GetAttrStr(__pyx_v_self, __pyx_n_s_getCurrentPoint); if (unlikely(!__pyx_t_2)) __PYX_ERR(0, 451, __pyx_L1_error) - __Pyx_GOTREF(__pyx_t_2); - __pyx_t_3 = NULL; - if (CYTHON_UNPACK_METHODS && likely(PyMethod_Check(__pyx_t_2))) { - __pyx_t_3 = PyMethod_GET_SELF(__pyx_t_2); - if (likely(__pyx_t_3)) { - PyObject* function = PyMethod_GET_FUNCTION(__pyx_t_2); - __Pyx_INCREF(__pyx_t_3); - __Pyx_INCREF(function); - __Pyx_DECREF_SET(__pyx_t_2, function); - } - } - __pyx_t_1 = (__pyx_t_3) ? __Pyx_PyObject_CallOneArg(__pyx_t_2, __pyx_t_3) : __Pyx_PyObject_CallNoArg(__pyx_t_2); - __Pyx_XDECREF(__pyx_t_3); __pyx_t_3 = 0; - if (unlikely(!__pyx_t_1)) __PYX_ERR(0, 451, __pyx_L1_error) - __Pyx_GOTREF(__pyx_t_1); - __Pyx_DECREF(__pyx_t_2); __pyx_t_2 = 0; - if ((likely(PyTuple_CheckExact(__pyx_t_1))) || (PyList_CheckExact(__pyx_t_1))) { - PyObject* sequence = __pyx_t_1; - Py_ssize_t size = __Pyx_PySequence_SIZE(sequence); - if (unlikely(size != 2)) { - if (size > 2) __Pyx_RaiseTooManyValuesError(2); - else if (size >= 0) __Pyx_RaiseNeedMoreValuesError(size); - __PYX_ERR(0, 451, __pyx_L1_error) - } - #if CYTHON_ASSUME_SAFE_MACROS && !CYTHON_AVOID_BORROWED_REFS - if (likely(PyTuple_CheckExact(sequence))) { - __pyx_t_2 = PyTuple_GET_ITEM(sequence, 0); - __pyx_t_3 = PyTuple_GET_ITEM(sequence, 1); - } else { - __pyx_t_2 = PyList_GET_ITEM(sequence, 0); - __pyx_t_3 = PyList_GET_ITEM(sequence, 1); - } - __Pyx_INCREF(__pyx_t_2); - __Pyx_INCREF(__pyx_t_3); - #else - __pyx_t_2 = PySequence_ITEM(sequence, 0); if (unlikely(!__pyx_t_2)) __PYX_ERR(0, 451, __pyx_L1_error) - __Pyx_GOTREF(__pyx_t_2); - __pyx_t_3 = PySequence_ITEM(sequence, 1); if (unlikely(!__pyx_t_3)) __PYX_ERR(0, 451, __pyx_L1_error) - __Pyx_GOTREF(__pyx_t_3); - #endif - __Pyx_DECREF(__pyx_t_1); __pyx_t_1 = 0; - } else { - Py_ssize_t index = -1; - __pyx_t_4 = PyObject_GetIter(__pyx_t_1); if (unlikely(!__pyx_t_4)) __PYX_ERR(0, 451, __pyx_L1_error) - __Pyx_GOTREF(__pyx_t_4); - __Pyx_DECREF(__pyx_t_1); __pyx_t_1 = 0; - __pyx_t_5 = Py_TYPE(__pyx_t_4)->tp_iternext; - index = 0; __pyx_t_2 = __pyx_t_5(__pyx_t_4); if (unlikely(!__pyx_t_2)) goto __pyx_L3_unpacking_failed; - __Pyx_GOTREF(__pyx_t_2); - index = 1; __pyx_t_3 = __pyx_t_5(__pyx_t_4); if (unlikely(!__pyx_t_3)) goto __pyx_L3_unpacking_failed; - __Pyx_GOTREF(__pyx_t_3); - if (__Pyx_IternextUnpackEndCheck(__pyx_t_5(__pyx_t_4), 2) < 0) __PYX_ERR(0, 451, __pyx_L1_error) - __pyx_t_5 = NULL; - __Pyx_DECREF(__pyx_t_4); __pyx_t_4 = 0; - goto __pyx_L4_unpacking_done; - __pyx_L3_unpacking_failed:; - __Pyx_DECREF(__pyx_t_4); __pyx_t_4 = 0; - __pyx_t_5 = NULL; - if (__Pyx_IterFinish() == 0) __Pyx_RaiseNeedMoreValuesError(index); - __PYX_ERR(0, 451, __pyx_L1_error) - __pyx_L4_unpacking_done:; - } - __pyx_t_6 = __pyx_PyFloat_AsDouble(__pyx_t_2); if (unlikely((__pyx_t_6 == (double)-1) && PyErr_Occurred())) __PYX_ERR(0, 451, __pyx_L1_error) - __Pyx_DECREF(__pyx_t_2); __pyx_t_2 = 0; - __pyx_t_7 = __pyx_PyFloat_AsDouble(__pyx_t_3); if (unlikely((__pyx_t_7 == (double)-1) && PyErr_Occurred())) __PYX_ERR(0, 451, __pyx_L1_error) - __Pyx_DECREF(__pyx_t_3); __pyx_t_3 = 0; - __pyx_v_x0 = __pyx_t_6; - __pyx_v_y0 = __pyx_t_7; - - /* "fontTools/pens/momentsPen.py":452 - * def _curveToOne(self, p1, p2, p3): - * x0, y0 = 
self._getCurrentPoint() - * x1, y1 = p1 # <<<<<<<<<<<<<< - * x2, y2 = p2 - * x3, y3 = p3 - */ - if ((likely(PyTuple_CheckExact(__pyx_v_p1))) || (PyList_CheckExact(__pyx_v_p1))) { - PyObject* sequence = __pyx_v_p1; - Py_ssize_t size = __Pyx_PySequence_SIZE(sequence); - if (unlikely(size != 2)) { - if (size > 2) __Pyx_RaiseTooManyValuesError(2); - else if (size >= 0) __Pyx_RaiseNeedMoreValuesError(size); - __PYX_ERR(0, 452, __pyx_L1_error) - } - #if CYTHON_ASSUME_SAFE_MACROS && !CYTHON_AVOID_BORROWED_REFS - if (likely(PyTuple_CheckExact(sequence))) { - __pyx_t_1 = PyTuple_GET_ITEM(sequence, 0); - __pyx_t_3 = PyTuple_GET_ITEM(sequence, 1); - } else { - __pyx_t_1 = PyList_GET_ITEM(sequence, 0); - __pyx_t_3 = PyList_GET_ITEM(sequence, 1); - } - __Pyx_INCREF(__pyx_t_1); - __Pyx_INCREF(__pyx_t_3); - #else - __pyx_t_1 = PySequence_ITEM(sequence, 0); if (unlikely(!__pyx_t_1)) __PYX_ERR(0, 452, __pyx_L1_error) - __Pyx_GOTREF(__pyx_t_1); - __pyx_t_3 = PySequence_ITEM(sequence, 1); if (unlikely(!__pyx_t_3)) __PYX_ERR(0, 452, __pyx_L1_error) - __Pyx_GOTREF(__pyx_t_3); - #endif - } else { - Py_ssize_t index = -1; - __pyx_t_2 = PyObject_GetIter(__pyx_v_p1); if (unlikely(!__pyx_t_2)) __PYX_ERR(0, 452, __pyx_L1_error) - __Pyx_GOTREF(__pyx_t_2); - __pyx_t_5 = Py_TYPE(__pyx_t_2)->tp_iternext; - index = 0; __pyx_t_1 = __pyx_t_5(__pyx_t_2); if (unlikely(!__pyx_t_1)) goto __pyx_L5_unpacking_failed; - __Pyx_GOTREF(__pyx_t_1); - index = 1; __pyx_t_3 = __pyx_t_5(__pyx_t_2); if (unlikely(!__pyx_t_3)) goto __pyx_L5_unpacking_failed; - __Pyx_GOTREF(__pyx_t_3); - if (__Pyx_IternextUnpackEndCheck(__pyx_t_5(__pyx_t_2), 2) < 0) __PYX_ERR(0, 452, __pyx_L1_error) - __pyx_t_5 = NULL; - __Pyx_DECREF(__pyx_t_2); __pyx_t_2 = 0; - goto __pyx_L6_unpacking_done; - __pyx_L5_unpacking_failed:; - __Pyx_DECREF(__pyx_t_2); __pyx_t_2 = 0; - __pyx_t_5 = NULL; - if (__Pyx_IterFinish() == 0) __Pyx_RaiseNeedMoreValuesError(index); - __PYX_ERR(0, 452, __pyx_L1_error) - __pyx_L6_unpacking_done:; - } - __pyx_t_7 = __pyx_PyFloat_AsDouble(__pyx_t_1); if (unlikely((__pyx_t_7 == (double)-1) && PyErr_Occurred())) __PYX_ERR(0, 452, __pyx_L1_error) - __Pyx_DECREF(__pyx_t_1); __pyx_t_1 = 0; - __pyx_t_6 = __pyx_PyFloat_AsDouble(__pyx_t_3); if (unlikely((__pyx_t_6 == (double)-1) && PyErr_Occurred())) __PYX_ERR(0, 452, __pyx_L1_error) - __Pyx_DECREF(__pyx_t_3); __pyx_t_3 = 0; - __pyx_v_x1 = __pyx_t_7; - __pyx_v_y1 = __pyx_t_6; - - /* "fontTools/pens/momentsPen.py":453 - * x0, y0 = self._getCurrentPoint() - * x1, y1 = p1 - * x2, y2 = p2 # <<<<<<<<<<<<<< - * x3, y3 = p3 - * - */ - if ((likely(PyTuple_CheckExact(__pyx_v_p2))) || (PyList_CheckExact(__pyx_v_p2))) { - PyObject* sequence = __pyx_v_p2; - Py_ssize_t size = __Pyx_PySequence_SIZE(sequence); - if (unlikely(size != 2)) { - if (size > 2) __Pyx_RaiseTooManyValuesError(2); - else if (size >= 0) __Pyx_RaiseNeedMoreValuesError(size); - __PYX_ERR(0, 453, __pyx_L1_error) - } - #if CYTHON_ASSUME_SAFE_MACROS && !CYTHON_AVOID_BORROWED_REFS - if (likely(PyTuple_CheckExact(sequence))) { - __pyx_t_3 = PyTuple_GET_ITEM(sequence, 0); - __pyx_t_1 = PyTuple_GET_ITEM(sequence, 1); - } else { - __pyx_t_3 = PyList_GET_ITEM(sequence, 0); - __pyx_t_1 = PyList_GET_ITEM(sequence, 1); - } - __Pyx_INCREF(__pyx_t_3); - __Pyx_INCREF(__pyx_t_1); - #else - __pyx_t_3 = PySequence_ITEM(sequence, 0); if (unlikely(!__pyx_t_3)) __PYX_ERR(0, 453, __pyx_L1_error) - __Pyx_GOTREF(__pyx_t_3); - __pyx_t_1 = PySequence_ITEM(sequence, 1); if (unlikely(!__pyx_t_1)) __PYX_ERR(0, 453, __pyx_L1_error) - __Pyx_GOTREF(__pyx_t_1); - #endif - } else 
{ - Py_ssize_t index = -1; - __pyx_t_2 = PyObject_GetIter(__pyx_v_p2); if (unlikely(!__pyx_t_2)) __PYX_ERR(0, 453, __pyx_L1_error) - __Pyx_GOTREF(__pyx_t_2); - __pyx_t_5 = Py_TYPE(__pyx_t_2)->tp_iternext; - index = 0; __pyx_t_3 = __pyx_t_5(__pyx_t_2); if (unlikely(!__pyx_t_3)) goto __pyx_L7_unpacking_failed; - __Pyx_GOTREF(__pyx_t_3); - index = 1; __pyx_t_1 = __pyx_t_5(__pyx_t_2); if (unlikely(!__pyx_t_1)) goto __pyx_L7_unpacking_failed; - __Pyx_GOTREF(__pyx_t_1); - if (__Pyx_IternextUnpackEndCheck(__pyx_t_5(__pyx_t_2), 2) < 0) __PYX_ERR(0, 453, __pyx_L1_error) - __pyx_t_5 = NULL; - __Pyx_DECREF(__pyx_t_2); __pyx_t_2 = 0; - goto __pyx_L8_unpacking_done; - __pyx_L7_unpacking_failed:; - __Pyx_DECREF(__pyx_t_2); __pyx_t_2 = 0; - __pyx_t_5 = NULL; - if (__Pyx_IterFinish() == 0) __Pyx_RaiseNeedMoreValuesError(index); - __PYX_ERR(0, 453, __pyx_L1_error) - __pyx_L8_unpacking_done:; - } - __pyx_t_6 = __pyx_PyFloat_AsDouble(__pyx_t_3); if (unlikely((__pyx_t_6 == (double)-1) && PyErr_Occurred())) __PYX_ERR(0, 453, __pyx_L1_error) - __Pyx_DECREF(__pyx_t_3); __pyx_t_3 = 0; - __pyx_t_7 = __pyx_PyFloat_AsDouble(__pyx_t_1); if (unlikely((__pyx_t_7 == (double)-1) && PyErr_Occurred())) __PYX_ERR(0, 453, __pyx_L1_error) - __Pyx_DECREF(__pyx_t_1); __pyx_t_1 = 0; - __pyx_v_x2 = __pyx_t_6; - __pyx_v_y2 = __pyx_t_7; - - /* "fontTools/pens/momentsPen.py":454 - * x1, y1 = p1 - * x2, y2 = p2 - * x3, y3 = p3 # <<<<<<<<<<<<<< - * - * r0 = 6 * y2 - */ - if ((likely(PyTuple_CheckExact(__pyx_v_p3))) || (PyList_CheckExact(__pyx_v_p3))) { - PyObject* sequence = __pyx_v_p3; - Py_ssize_t size = __Pyx_PySequence_SIZE(sequence); - if (unlikely(size != 2)) { - if (size > 2) __Pyx_RaiseTooManyValuesError(2); - else if (size >= 0) __Pyx_RaiseNeedMoreValuesError(size); - __PYX_ERR(0, 454, __pyx_L1_error) - } - #if CYTHON_ASSUME_SAFE_MACROS && !CYTHON_AVOID_BORROWED_REFS - if (likely(PyTuple_CheckExact(sequence))) { - __pyx_t_1 = PyTuple_GET_ITEM(sequence, 0); - __pyx_t_3 = PyTuple_GET_ITEM(sequence, 1); - } else { - __pyx_t_1 = PyList_GET_ITEM(sequence, 0); - __pyx_t_3 = PyList_GET_ITEM(sequence, 1); - } - __Pyx_INCREF(__pyx_t_1); - __Pyx_INCREF(__pyx_t_3); - #else - __pyx_t_1 = PySequence_ITEM(sequence, 0); if (unlikely(!__pyx_t_1)) __PYX_ERR(0, 454, __pyx_L1_error) - __Pyx_GOTREF(__pyx_t_1); - __pyx_t_3 = PySequence_ITEM(sequence, 1); if (unlikely(!__pyx_t_3)) __PYX_ERR(0, 454, __pyx_L1_error) - __Pyx_GOTREF(__pyx_t_3); - #endif - } else { - Py_ssize_t index = -1; - __pyx_t_2 = PyObject_GetIter(__pyx_v_p3); if (unlikely(!__pyx_t_2)) __PYX_ERR(0, 454, __pyx_L1_error) - __Pyx_GOTREF(__pyx_t_2); - __pyx_t_5 = Py_TYPE(__pyx_t_2)->tp_iternext; - index = 0; __pyx_t_1 = __pyx_t_5(__pyx_t_2); if (unlikely(!__pyx_t_1)) goto __pyx_L9_unpacking_failed; - __Pyx_GOTREF(__pyx_t_1); - index = 1; __pyx_t_3 = __pyx_t_5(__pyx_t_2); if (unlikely(!__pyx_t_3)) goto __pyx_L9_unpacking_failed; - __Pyx_GOTREF(__pyx_t_3); - if (__Pyx_IternextUnpackEndCheck(__pyx_t_5(__pyx_t_2), 2) < 0) __PYX_ERR(0, 454, __pyx_L1_error) - __pyx_t_5 = NULL; - __Pyx_DECREF(__pyx_t_2); __pyx_t_2 = 0; - goto __pyx_L10_unpacking_done; - __pyx_L9_unpacking_failed:; - __Pyx_DECREF(__pyx_t_2); __pyx_t_2 = 0; - __pyx_t_5 = NULL; - if (__Pyx_IterFinish() == 0) __Pyx_RaiseNeedMoreValuesError(index); - __PYX_ERR(0, 454, __pyx_L1_error) - __pyx_L10_unpacking_done:; - } - __pyx_t_7 = __pyx_PyFloat_AsDouble(__pyx_t_1); if (unlikely((__pyx_t_7 == (double)-1) && PyErr_Occurred())) __PYX_ERR(0, 454, __pyx_L1_error) - __Pyx_DECREF(__pyx_t_1); __pyx_t_1 = 0; - __pyx_t_6 = 
__pyx_PyFloat_AsDouble(__pyx_t_3); if (unlikely((__pyx_t_6 == (double)-1) && PyErr_Occurred())) __PYX_ERR(0, 454, __pyx_L1_error) - __Pyx_DECREF(__pyx_t_3); __pyx_t_3 = 0; - __pyx_v_x3 = __pyx_t_7; - __pyx_v_y3 = __pyx_t_6; - - /* "fontTools/pens/momentsPen.py":456 - * x3, y3 = p3 - * - * r0 = 6 * y2 # <<<<<<<<<<<<<< - * r1 = r0 * x3 - * r2 = 10 * y3 - */ - __pyx_v_r0 = (6.0 * __pyx_v_y2); - - /* "fontTools/pens/momentsPen.py":457 - * - * r0 = 6 * y2 - * r1 = r0 * x3 # <<<<<<<<<<<<<< - * r2 = 10 * y3 - * r3 = r2 * x3 - */ - __pyx_v_r1 = (__pyx_v_r0 * __pyx_v_x3); - - /* "fontTools/pens/momentsPen.py":458 - * r0 = 6 * y2 - * r1 = r0 * x3 - * r2 = 10 * y3 # <<<<<<<<<<<<<< - * r3 = r2 * x3 - * r4 = 3 * y1 - */ - __pyx_v_r2 = (10.0 * __pyx_v_y3); - - /* "fontTools/pens/momentsPen.py":459 - * r1 = r0 * x3 - * r2 = 10 * y3 - * r3 = r2 * x3 # <<<<<<<<<<<<<< - * r4 = 3 * y1 - * r5 = 6 * x1 - */ - __pyx_v_r3 = (__pyx_v_r2 * __pyx_v_x3); - - /* "fontTools/pens/momentsPen.py":460 - * r2 = 10 * y3 - * r3 = r2 * x3 - * r4 = 3 * y1 # <<<<<<<<<<<<<< - * r5 = 6 * x1 - * r6 = 3 * x2 - */ - __pyx_v_r4 = (3.0 * __pyx_v_y1); - - /* "fontTools/pens/momentsPen.py":461 - * r3 = r2 * x3 - * r4 = 3 * y1 - * r5 = 6 * x1 # <<<<<<<<<<<<<< - * r6 = 3 * x2 - * r7 = 6 * y1 - */ - __pyx_v_r5 = (6.0 * __pyx_v_x1); - - /* "fontTools/pens/momentsPen.py":462 - * r4 = 3 * y1 - * r5 = 6 * x1 - * r6 = 3 * x2 # <<<<<<<<<<<<<< - * r7 = 6 * y1 - * r8 = 3 * y2 - */ - __pyx_v_r6 = (3.0 * __pyx_v_x2); - - /* "fontTools/pens/momentsPen.py":463 - * r5 = 6 * x1 - * r6 = 3 * x2 - * r7 = 6 * y1 # <<<<<<<<<<<<<< - * r8 = 3 * y2 - * r9 = x2**2 - */ - __pyx_v_r7 = (6.0 * __pyx_v_y1); - - /* "fontTools/pens/momentsPen.py":464 - * r6 = 3 * x2 - * r7 = 6 * y1 - * r8 = 3 * y2 # <<<<<<<<<<<<<< - * r9 = x2**2 - * r10 = 45 * r9 - */ - __pyx_v_r8 = (3.0 * __pyx_v_y2); - - /* "fontTools/pens/momentsPen.py":465 - * r7 = 6 * y1 - * r8 = 3 * y2 - * r9 = x2**2 # <<<<<<<<<<<<<< - * r10 = 45 * r9 - * r11 = r10 * y3 - */ - __pyx_v_r9 = pow(__pyx_v_x2, 2.0); - - /* "fontTools/pens/momentsPen.py":466 - * r8 = 3 * y2 - * r9 = x2**2 - * r10 = 45 * r9 # <<<<<<<<<<<<<< - * r11 = r10 * y3 - * r12 = x3**2 - */ - __pyx_v_r10 = (45.0 * __pyx_v_r9); - - /* "fontTools/pens/momentsPen.py":467 - * r9 = x2**2 - * r10 = 45 * r9 - * r11 = r10 * y3 # <<<<<<<<<<<<<< - * r12 = x3**2 - * r13 = r12 * y2 - */ - __pyx_v_r11 = (__pyx_v_r10 * __pyx_v_y3); - - /* "fontTools/pens/momentsPen.py":468 - * r10 = 45 * r9 - * r11 = r10 * y3 - * r12 = x3**2 # <<<<<<<<<<<<<< - * r13 = r12 * y2 - * r14 = r12 * y3 - */ - __pyx_v_r12 = pow(__pyx_v_x3, 2.0); - - /* "fontTools/pens/momentsPen.py":469 - * r11 = r10 * y3 - * r12 = x3**2 - * r13 = r12 * y2 # <<<<<<<<<<<<<< - * r14 = r12 * y3 - * r15 = 7 * y3 - */ - __pyx_v_r13 = (__pyx_v_r12 * __pyx_v_y2); - - /* "fontTools/pens/momentsPen.py":470 - * r12 = x3**2 - * r13 = r12 * y2 - * r14 = r12 * y3 # <<<<<<<<<<<<<< - * r15 = 7 * y3 - * r16 = 15 * x3 - */ - __pyx_v_r14 = (__pyx_v_r12 * __pyx_v_y3); - - /* "fontTools/pens/momentsPen.py":471 - * r13 = r12 * y2 - * r14 = r12 * y3 - * r15 = 7 * y3 # <<<<<<<<<<<<<< - * r16 = 15 * x3 - * r17 = r16 * x2 - */ - __pyx_v_r15 = (7.0 * __pyx_v_y3); - - /* "fontTools/pens/momentsPen.py":472 - * r14 = r12 * y3 - * r15 = 7 * y3 - * r16 = 15 * x3 # <<<<<<<<<<<<<< - * r17 = r16 * x2 - * r18 = x1**2 - */ - __pyx_v_r16 = (15.0 * __pyx_v_x3); - - /* "fontTools/pens/momentsPen.py":473 - * r15 = 7 * y3 - * r16 = 15 * x3 - * r17 = r16 * x2 # <<<<<<<<<<<<<< - * r18 = x1**2 - * r19 = 9 * r18 - */ - __pyx_v_r17 = 
(__pyx_v_r16 * __pyx_v_x2); - - /* "fontTools/pens/momentsPen.py":474 - * r16 = 15 * x3 - * r17 = r16 * x2 - * r18 = x1**2 # <<<<<<<<<<<<<< - * r19 = 9 * r18 - * r20 = x0**2 - */ - __pyx_v_r18 = pow(__pyx_v_x1, 2.0); - - /* "fontTools/pens/momentsPen.py":475 - * r17 = r16 * x2 - * r18 = x1**2 - * r19 = 9 * r18 # <<<<<<<<<<<<<< - * r20 = x0**2 - * r21 = 21 * y1 - */ - __pyx_v_r19 = (9.0 * __pyx_v_r18); - - /* "fontTools/pens/momentsPen.py":476 - * r18 = x1**2 - * r19 = 9 * r18 - * r20 = x0**2 # <<<<<<<<<<<<<< - * r21 = 21 * y1 - * r22 = 9 * r9 - */ - __pyx_v_r20 = pow(__pyx_v_x0, 2.0); - - /* "fontTools/pens/momentsPen.py":477 - * r19 = 9 * r18 - * r20 = x0**2 - * r21 = 21 * y1 # <<<<<<<<<<<<<< - * r22 = 9 * r9 - * r23 = r7 * x3 - */ - __pyx_v_r21 = (21.0 * __pyx_v_y1); - - /* "fontTools/pens/momentsPen.py":478 - * r20 = x0**2 - * r21 = 21 * y1 - * r22 = 9 * r9 # <<<<<<<<<<<<<< - * r23 = r7 * x3 - * r24 = 9 * y2 - */ - __pyx_v_r22 = (9.0 * __pyx_v_r9); - - /* "fontTools/pens/momentsPen.py":479 - * r21 = 21 * y1 - * r22 = 9 * r9 - * r23 = r7 * x3 # <<<<<<<<<<<<<< - * r24 = 9 * y2 - * r25 = r24 * x2 + r3 - */ - __pyx_v_r23 = (__pyx_v_r7 * __pyx_v_x3); - - /* "fontTools/pens/momentsPen.py":480 - * r22 = 9 * r9 - * r23 = r7 * x3 - * r24 = 9 * y2 # <<<<<<<<<<<<<< - * r25 = r24 * x2 + r3 - * r26 = 9 * x2 - */ - __pyx_v_r24 = (9.0 * __pyx_v_y2); - - /* "fontTools/pens/momentsPen.py":481 - * r23 = r7 * x3 - * r24 = 9 * y2 - * r25 = r24 * x2 + r3 # <<<<<<<<<<<<<< - * r26 = 9 * x2 - * r27 = x2 * y3 - */ - __pyx_v_r25 = ((__pyx_v_r24 * __pyx_v_x2) + __pyx_v_r3); - - /* "fontTools/pens/momentsPen.py":482 - * r24 = 9 * y2 - * r25 = r24 * x2 + r3 - * r26 = 9 * x2 # <<<<<<<<<<<<<< - * r27 = x2 * y3 - * r28 = -r26 * y1 + 15 * r27 - */ - __pyx_v_r26 = (9.0 * __pyx_v_x2); - - /* "fontTools/pens/momentsPen.py":483 - * r25 = r24 * x2 + r3 - * r26 = 9 * x2 - * r27 = x2 * y3 # <<<<<<<<<<<<<< - * r28 = -r26 * y1 + 15 * r27 - * r29 = 3 * x1 - */ - __pyx_v_r27 = (__pyx_v_x2 * __pyx_v_y3); - - /* "fontTools/pens/momentsPen.py":484 - * r26 = 9 * x2 - * r27 = x2 * y3 - * r28 = -r26 * y1 + 15 * r27 # <<<<<<<<<<<<<< - * r29 = 3 * x1 - * r30 = 45 * x1 - */ - __pyx_v_r28 = (((-__pyx_v_r26) * __pyx_v_y1) + (15.0 * __pyx_v_r27)); - - /* "fontTools/pens/momentsPen.py":485 - * r27 = x2 * y3 - * r28 = -r26 * y1 + 15 * r27 - * r29 = 3 * x1 # <<<<<<<<<<<<<< - * r30 = 45 * x1 - * r31 = 12 * x3 - */ - __pyx_v_r29 = (3.0 * __pyx_v_x1); - - /* "fontTools/pens/momentsPen.py":486 - * r28 = -r26 * y1 + 15 * r27 - * r29 = 3 * x1 - * r30 = 45 * x1 # <<<<<<<<<<<<<< - * r31 = 12 * x3 - * r32 = 45 * r18 - */ - __pyx_v_r30 = (45.0 * __pyx_v_x1); - - /* "fontTools/pens/momentsPen.py":487 - * r29 = 3 * x1 - * r30 = 45 * x1 - * r31 = 12 * x3 # <<<<<<<<<<<<<< - * r32 = 45 * r18 - * r33 = 5 * r12 - */ - __pyx_v_r31 = (12.0 * __pyx_v_x3); - - /* "fontTools/pens/momentsPen.py":488 - * r30 = 45 * x1 - * r31 = 12 * x3 - * r32 = 45 * r18 # <<<<<<<<<<<<<< - * r33 = 5 * r12 - * r34 = r8 * x3 - */ - __pyx_v_r32 = (45.0 * __pyx_v_r18); - - /* "fontTools/pens/momentsPen.py":489 - * r31 = 12 * x3 - * r32 = 45 * r18 - * r33 = 5 * r12 # <<<<<<<<<<<<<< - * r34 = r8 * x3 - * r35 = 105 * y0 - */ - __pyx_v_r33 = (5.0 * __pyx_v_r12); - - /* "fontTools/pens/momentsPen.py":490 - * r32 = 45 * r18 - * r33 = 5 * r12 - * r34 = r8 * x3 # <<<<<<<<<<<<<< - * r35 = 105 * y0 - * r36 = 30 * y0 - */ - __pyx_v_r34 = (__pyx_v_r8 * __pyx_v_x3); - - /* "fontTools/pens/momentsPen.py":491 - * r33 = 5 * r12 - * r34 = r8 * x3 - * r35 = 105 * y0 # <<<<<<<<<<<<<< - * r36 = 30 * y0 - * 
r37 = r36 * x2 - */ - __pyx_v_r35 = (105.0 * __pyx_v_y0); - - /* "fontTools/pens/momentsPen.py":492 - * r34 = r8 * x3 - * r35 = 105 * y0 - * r36 = 30 * y0 # <<<<<<<<<<<<<< - * r37 = r36 * x2 - * r38 = 5 * x3 - */ - __pyx_v_r36 = (30.0 * __pyx_v_y0); - - /* "fontTools/pens/momentsPen.py":493 - * r35 = 105 * y0 - * r36 = 30 * y0 - * r37 = r36 * x2 # <<<<<<<<<<<<<< - * r38 = 5 * x3 - * r39 = 15 * y3 - */ - __pyx_v_r37 = (__pyx_v_r36 * __pyx_v_x2); - - /* "fontTools/pens/momentsPen.py":494 - * r36 = 30 * y0 - * r37 = r36 * x2 - * r38 = 5 * x3 # <<<<<<<<<<<<<< - * r39 = 15 * y3 - * r40 = 5 * y3 - */ - __pyx_v_r38 = (5.0 * __pyx_v_x3); - - /* "fontTools/pens/momentsPen.py":495 - * r37 = r36 * x2 - * r38 = 5 * x3 - * r39 = 15 * y3 # <<<<<<<<<<<<<< - * r40 = 5 * y3 - * r41 = r40 * x3 - */ - __pyx_v_r39 = (15.0 * __pyx_v_y3); - - /* "fontTools/pens/momentsPen.py":496 - * r38 = 5 * x3 - * r39 = 15 * y3 - * r40 = 5 * y3 # <<<<<<<<<<<<<< - * r41 = r40 * x3 - * r42 = x2 * y2 - */ - __pyx_v_r40 = (5.0 * __pyx_v_y3); - - /* "fontTools/pens/momentsPen.py":497 - * r39 = 15 * y3 - * r40 = 5 * y3 - * r41 = r40 * x3 # <<<<<<<<<<<<<< - * r42 = x2 * y2 - * r43 = 18 * r42 - */ - __pyx_v_r41 = (__pyx_v_r40 * __pyx_v_x3); - - /* "fontTools/pens/momentsPen.py":498 - * r40 = 5 * y3 - * r41 = r40 * x3 - * r42 = x2 * y2 # <<<<<<<<<<<<<< - * r43 = 18 * r42 - * r44 = 45 * y1 - */ - __pyx_v_r42 = (__pyx_v_x2 * __pyx_v_y2); - - /* "fontTools/pens/momentsPen.py":499 - * r41 = r40 * x3 - * r42 = x2 * y2 - * r43 = 18 * r42 # <<<<<<<<<<<<<< - * r44 = 45 * y1 - * r45 = r41 + r43 + r44 * x1 - */ - __pyx_v_r43 = (18.0 * __pyx_v_r42); - - /* "fontTools/pens/momentsPen.py":500 - * r42 = x2 * y2 - * r43 = 18 * r42 - * r44 = 45 * y1 # <<<<<<<<<<<<<< - * r45 = r41 + r43 + r44 * x1 - * r46 = y2 * y3 - */ - __pyx_v_r44 = (45.0 * __pyx_v_y1); - - /* "fontTools/pens/momentsPen.py":501 - * r43 = 18 * r42 - * r44 = 45 * y1 - * r45 = r41 + r43 + r44 * x1 # <<<<<<<<<<<<<< - * r46 = y2 * y3 - * r47 = r46 * x3 - */ - __pyx_v_r45 = ((__pyx_v_r41 + __pyx_v_r43) + (__pyx_v_r44 * __pyx_v_x1)); - - /* "fontTools/pens/momentsPen.py":502 - * r44 = 45 * y1 - * r45 = r41 + r43 + r44 * x1 - * r46 = y2 * y3 # <<<<<<<<<<<<<< - * r47 = r46 * x3 - * r48 = y2**2 - */ - __pyx_v_r46 = (__pyx_v_y2 * __pyx_v_y3); - - /* "fontTools/pens/momentsPen.py":503 - * r45 = r41 + r43 + r44 * x1 - * r46 = y2 * y3 - * r47 = r46 * x3 # <<<<<<<<<<<<<< - * r48 = y2**2 - * r49 = 45 * r48 - */ - __pyx_v_r47 = (__pyx_v_r46 * __pyx_v_x3); - - /* "fontTools/pens/momentsPen.py":504 - * r46 = y2 * y3 - * r47 = r46 * x3 - * r48 = y2**2 # <<<<<<<<<<<<<< - * r49 = 45 * r48 - * r50 = r49 * x3 - */ - __pyx_v_r48 = pow(__pyx_v_y2, 2.0); - - /* "fontTools/pens/momentsPen.py":505 - * r47 = r46 * x3 - * r48 = y2**2 - * r49 = 45 * r48 # <<<<<<<<<<<<<< - * r50 = r49 * x3 - * r51 = y3**2 - */ - __pyx_v_r49 = (45.0 * __pyx_v_r48); - - /* "fontTools/pens/momentsPen.py":506 - * r48 = y2**2 - * r49 = 45 * r48 - * r50 = r49 * x3 # <<<<<<<<<<<<<< - * r51 = y3**2 - * r52 = r51 * x3 - */ - __pyx_v_r50 = (__pyx_v_r49 * __pyx_v_x3); - - /* "fontTools/pens/momentsPen.py":507 - * r49 = 45 * r48 - * r50 = r49 * x3 - * r51 = y3**2 # <<<<<<<<<<<<<< - * r52 = r51 * x3 - * r53 = y1**2 - */ - __pyx_v_r51 = pow(__pyx_v_y3, 2.0); - - /* "fontTools/pens/momentsPen.py":508 - * r50 = r49 * x3 - * r51 = y3**2 - * r52 = r51 * x3 # <<<<<<<<<<<<<< - * r53 = y1**2 - * r54 = 9 * r53 - */ - __pyx_v_r52 = (__pyx_v_r51 * __pyx_v_x3); - - /* "fontTools/pens/momentsPen.py":509 - * r51 = y3**2 - * r52 = r51 * x3 - * r53 = y1**2 # 
<<<<<<<<<<<<<< - * r54 = 9 * r53 - * r55 = y0**2 - */ - __pyx_v_r53 = pow(__pyx_v_y1, 2.0); - - /* "fontTools/pens/momentsPen.py":510 - * r52 = r51 * x3 - * r53 = y1**2 - * r54 = 9 * r53 # <<<<<<<<<<<<<< - * r55 = y0**2 - * r56 = 21 * x1 - */ - __pyx_v_r54 = (9.0 * __pyx_v_r53); - - /* "fontTools/pens/momentsPen.py":511 - * r53 = y1**2 - * r54 = 9 * r53 - * r55 = y0**2 # <<<<<<<<<<<<<< - * r56 = 21 * x1 - * r57 = 6 * x2 - */ - __pyx_v_r55 = pow(__pyx_v_y0, 2.0); - - /* "fontTools/pens/momentsPen.py":512 - * r54 = 9 * r53 - * r55 = y0**2 - * r56 = 21 * x1 # <<<<<<<<<<<<<< - * r57 = 6 * x2 - * r58 = r16 * y2 - */ - __pyx_v_r56 = (21.0 * __pyx_v_x1); - - /* "fontTools/pens/momentsPen.py":513 - * r55 = y0**2 - * r56 = 21 * x1 - * r57 = 6 * x2 # <<<<<<<<<<<<<< - * r58 = r16 * y2 - * r59 = r39 * y2 - */ - __pyx_v_r57 = (6.0 * __pyx_v_x2); - - /* "fontTools/pens/momentsPen.py":514 - * r56 = 21 * x1 - * r57 = 6 * x2 - * r58 = r16 * y2 # <<<<<<<<<<<<<< - * r59 = r39 * y2 - * r60 = 9 * r48 - */ - __pyx_v_r58 = (__pyx_v_r16 * __pyx_v_y2); - - /* "fontTools/pens/momentsPen.py":515 - * r57 = 6 * x2 - * r58 = r16 * y2 - * r59 = r39 * y2 # <<<<<<<<<<<<<< - * r60 = 9 * r48 - * r61 = r6 * y3 - */ - __pyx_v_r59 = (__pyx_v_r39 * __pyx_v_y2); - - /* "fontTools/pens/momentsPen.py":516 - * r58 = r16 * y2 - * r59 = r39 * y2 - * r60 = 9 * r48 # <<<<<<<<<<<<<< - * r61 = r6 * y3 - * r62 = 3 * y3 - */ - __pyx_v_r60 = (9.0 * __pyx_v_r48); - - /* "fontTools/pens/momentsPen.py":517 - * r59 = r39 * y2 - * r60 = 9 * r48 - * r61 = r6 * y3 # <<<<<<<<<<<<<< - * r62 = 3 * y3 - * r63 = r36 * y2 - */ - __pyx_v_r61 = (__pyx_v_r6 * __pyx_v_y3); - - /* "fontTools/pens/momentsPen.py":518 - * r60 = 9 * r48 - * r61 = r6 * y3 - * r62 = 3 * y3 # <<<<<<<<<<<<<< - * r63 = r36 * y2 - * r64 = y1 * y3 - */ - __pyx_v_r62 = (3.0 * __pyx_v_y3); - - /* "fontTools/pens/momentsPen.py":519 - * r61 = r6 * y3 - * r62 = 3 * y3 - * r63 = r36 * y2 # <<<<<<<<<<<<<< - * r64 = y1 * y3 - * r65 = 45 * r53 - */ - __pyx_v_r63 = (__pyx_v_r36 * __pyx_v_y2); - - /* "fontTools/pens/momentsPen.py":520 - * r62 = 3 * y3 - * r63 = r36 * y2 - * r64 = y1 * y3 # <<<<<<<<<<<<<< - * r65 = 45 * r53 - * r66 = 5 * r51 - */ - __pyx_v_r64 = (__pyx_v_y1 * __pyx_v_y3); - - /* "fontTools/pens/momentsPen.py":521 - * r63 = r36 * y2 - * r64 = y1 * y3 - * r65 = 45 * r53 # <<<<<<<<<<<<<< - * r66 = 5 * r51 - * r67 = x2**3 - */ - __pyx_v_r65 = (45.0 * __pyx_v_r53); - - /* "fontTools/pens/momentsPen.py":522 - * r64 = y1 * y3 - * r65 = 45 * r53 - * r66 = 5 * r51 # <<<<<<<<<<<<<< - * r67 = x2**3 - * r68 = x3**3 - */ - __pyx_v_r66 = (5.0 * __pyx_v_r51); - - /* "fontTools/pens/momentsPen.py":523 - * r65 = 45 * r53 - * r66 = 5 * r51 - * r67 = x2**3 # <<<<<<<<<<<<<< - * r68 = x3**3 - * r69 = 630 * y2 - */ - __pyx_v_r67 = pow(__pyx_v_x2, 3.0); - - /* "fontTools/pens/momentsPen.py":524 - * r66 = 5 * r51 - * r67 = x2**3 - * r68 = x3**3 # <<<<<<<<<<<<<< - * r69 = 630 * y2 - * r70 = 126 * x3 - */ - __pyx_v_r68 = pow(__pyx_v_x3, 3.0); - - /* "fontTools/pens/momentsPen.py":525 - * r67 = x2**3 - * r68 = x3**3 - * r69 = 630 * y2 # <<<<<<<<<<<<<< - * r70 = 126 * x3 - * r71 = x1**3 - */ - __pyx_v_r69 = (630.0 * __pyx_v_y2); - - /* "fontTools/pens/momentsPen.py":526 - * r68 = x3**3 - * r69 = 630 * y2 - * r70 = 126 * x3 # <<<<<<<<<<<<<< - * r71 = x1**3 - * r72 = 126 * x2 - */ - __pyx_v_r70 = (126.0 * __pyx_v_x3); - - /* "fontTools/pens/momentsPen.py":527 - * r69 = 630 * y2 - * r70 = 126 * x3 - * r71 = x1**3 # <<<<<<<<<<<<<< - * r72 = 126 * x2 - * r73 = 63 * r9 - */ - __pyx_v_r71 = pow(__pyx_v_x1, 3.0); - 
- /* "fontTools/pens/momentsPen.py":528 - * r70 = 126 * x3 - * r71 = x1**3 - * r72 = 126 * x2 # <<<<<<<<<<<<<< - * r73 = 63 * r9 - * r74 = r73 * x3 - */ - __pyx_v_r72 = (126.0 * __pyx_v_x2); - - /* "fontTools/pens/momentsPen.py":529 - * r71 = x1**3 - * r72 = 126 * x2 - * r73 = 63 * r9 # <<<<<<<<<<<<<< - * r74 = r73 * x3 - * r75 = r15 * x3 + 15 * r42 - */ - __pyx_v_r73 = (63.0 * __pyx_v_r9); - - /* "fontTools/pens/momentsPen.py":530 - * r72 = 126 * x2 - * r73 = 63 * r9 - * r74 = r73 * x3 # <<<<<<<<<<<<<< - * r75 = r15 * x3 + 15 * r42 - * r76 = 630 * x1 - */ - __pyx_v_r74 = (__pyx_v_r73 * __pyx_v_x3); - - /* "fontTools/pens/momentsPen.py":531 - * r73 = 63 * r9 - * r74 = r73 * x3 - * r75 = r15 * x3 + 15 * r42 # <<<<<<<<<<<<<< - * r76 = 630 * x1 - * r77 = 14 * x3 - */ - __pyx_v_r75 = ((__pyx_v_r15 * __pyx_v_x3) + (15.0 * __pyx_v_r42)); - - /* "fontTools/pens/momentsPen.py":532 - * r74 = r73 * x3 - * r75 = r15 * x3 + 15 * r42 - * r76 = 630 * x1 # <<<<<<<<<<<<<< - * r77 = 14 * x3 - * r78 = 21 * r27 - */ - __pyx_v_r76 = (630.0 * __pyx_v_x1); - - /* "fontTools/pens/momentsPen.py":533 - * r75 = r15 * x3 + 15 * r42 - * r76 = 630 * x1 - * r77 = 14 * x3 # <<<<<<<<<<<<<< - * r78 = 21 * r27 - * r79 = 42 * x1 - */ - __pyx_v_r77 = (14.0 * __pyx_v_x3); - - /* "fontTools/pens/momentsPen.py":534 - * r76 = 630 * x1 - * r77 = 14 * x3 - * r78 = 21 * r27 # <<<<<<<<<<<<<< - * r79 = 42 * x1 - * r80 = 42 * x2 - */ - __pyx_v_r78 = (21.0 * __pyx_v_r27); - - /* "fontTools/pens/momentsPen.py":535 - * r77 = 14 * x3 - * r78 = 21 * r27 - * r79 = 42 * x1 # <<<<<<<<<<<<<< - * r80 = 42 * x2 - * r81 = x1 * y2 - */ - __pyx_v_r79 = (42.0 * __pyx_v_x1); - - /* "fontTools/pens/momentsPen.py":536 - * r78 = 21 * r27 - * r79 = 42 * x1 - * r80 = 42 * x2 # <<<<<<<<<<<<<< - * r81 = x1 * y2 - * r82 = 63 * r42 - */ - __pyx_v_r80 = (42.0 * __pyx_v_x2); - - /* "fontTools/pens/momentsPen.py":537 - * r79 = 42 * x1 - * r80 = 42 * x2 - * r81 = x1 * y2 # <<<<<<<<<<<<<< - * r82 = 63 * r42 - * r83 = x1 * y1 - */ - __pyx_v_r81 = (__pyx_v_x1 * __pyx_v_y2); - - /* "fontTools/pens/momentsPen.py":538 - * r80 = 42 * x2 - * r81 = x1 * y2 - * r82 = 63 * r42 # <<<<<<<<<<<<<< - * r83 = x1 * y1 - * r84 = r41 + r82 + 378 * r83 - */ - __pyx_v_r82 = (63.0 * __pyx_v_r42); - - /* "fontTools/pens/momentsPen.py":539 - * r81 = x1 * y2 - * r82 = 63 * r42 - * r83 = x1 * y1 # <<<<<<<<<<<<<< - * r84 = r41 + r82 + 378 * r83 - * r85 = x2 * x3 - */ - __pyx_v_r83 = (__pyx_v_x1 * __pyx_v_y1); - - /* "fontTools/pens/momentsPen.py":540 - * r82 = 63 * r42 - * r83 = x1 * y1 - * r84 = r41 + r82 + 378 * r83 # <<<<<<<<<<<<<< - * r85 = x2 * x3 - * r86 = r85 * y1 - */ - __pyx_v_r84 = ((__pyx_v_r41 + __pyx_v_r82) + (378.0 * __pyx_v_r83)); - - /* "fontTools/pens/momentsPen.py":541 - * r83 = x1 * y1 - * r84 = r41 + r82 + 378 * r83 - * r85 = x2 * x3 # <<<<<<<<<<<<<< - * r86 = r85 * y1 - * r87 = r27 * x3 - */ - __pyx_v_r85 = (__pyx_v_x2 * __pyx_v_x3); - - /* "fontTools/pens/momentsPen.py":542 - * r84 = r41 + r82 + 378 * r83 - * r85 = x2 * x3 - * r86 = r85 * y1 # <<<<<<<<<<<<<< - * r87 = r27 * x3 - * r88 = 27 * r9 - */ - __pyx_v_r86 = (__pyx_v_r85 * __pyx_v_y1); - - /* "fontTools/pens/momentsPen.py":543 - * r85 = x2 * x3 - * r86 = r85 * y1 - * r87 = r27 * x3 # <<<<<<<<<<<<<< - * r88 = 27 * r9 - * r89 = r88 * y2 - */ - __pyx_v_r87 = (__pyx_v_r27 * __pyx_v_x3); - - /* "fontTools/pens/momentsPen.py":544 - * r86 = r85 * y1 - * r87 = r27 * x3 - * r88 = 27 * r9 # <<<<<<<<<<<<<< - * r89 = r88 * y2 - * r90 = 42 * r14 - */ - __pyx_v_r88 = (27.0 * __pyx_v_r9); - - /* 
"fontTools/pens/momentsPen.py":545 - * r87 = r27 * x3 - * r88 = 27 * r9 - * r89 = r88 * y2 # <<<<<<<<<<<<<< - * r90 = 42 * r14 - * r91 = 90 * x1 - */ - __pyx_v_r89 = (__pyx_v_r88 * __pyx_v_y2); - - /* "fontTools/pens/momentsPen.py":546 - * r88 = 27 * r9 - * r89 = r88 * y2 - * r90 = 42 * r14 # <<<<<<<<<<<<<< - * r91 = 90 * x1 - * r92 = 189 * r18 - */ - __pyx_v_r90 = (42.0 * __pyx_v_r14); - - /* "fontTools/pens/momentsPen.py":547 - * r89 = r88 * y2 - * r90 = 42 * r14 - * r91 = 90 * x1 # <<<<<<<<<<<<<< - * r92 = 189 * r18 - * r93 = 378 * r18 - */ - __pyx_v_r91 = (90.0 * __pyx_v_x1); - - /* "fontTools/pens/momentsPen.py":548 - * r90 = 42 * r14 - * r91 = 90 * x1 - * r92 = 189 * r18 # <<<<<<<<<<<<<< - * r93 = 378 * r18 - * r94 = r12 * y1 - */ - __pyx_v_r92 = (189.0 * __pyx_v_r18); - - /* "fontTools/pens/momentsPen.py":549 - * r91 = 90 * x1 - * r92 = 189 * r18 - * r93 = 378 * r18 # <<<<<<<<<<<<<< - * r94 = r12 * y1 - * r95 = 252 * x1 * x2 - */ - __pyx_v_r93 = (378.0 * __pyx_v_r18); - - /* "fontTools/pens/momentsPen.py":550 - * r92 = 189 * r18 - * r93 = 378 * r18 - * r94 = r12 * y1 # <<<<<<<<<<<<<< - * r95 = 252 * x1 * x2 - * r96 = r79 * x3 - */ - __pyx_v_r94 = (__pyx_v_r12 * __pyx_v_y1); - - /* "fontTools/pens/momentsPen.py":551 - * r93 = 378 * r18 - * r94 = r12 * y1 - * r95 = 252 * x1 * x2 # <<<<<<<<<<<<<< - * r96 = r79 * x3 - * r97 = 30 * r85 - */ - __pyx_v_r95 = ((252.0 * __pyx_v_x1) * __pyx_v_x2); - - /* "fontTools/pens/momentsPen.py":552 - * r94 = r12 * y1 - * r95 = 252 * x1 * x2 - * r96 = r79 * x3 # <<<<<<<<<<<<<< - * r97 = 30 * r85 - * r98 = r83 * x3 - */ - __pyx_v_r96 = (__pyx_v_r79 * __pyx_v_x3); - - /* "fontTools/pens/momentsPen.py":553 - * r95 = 252 * x1 * x2 - * r96 = r79 * x3 - * r97 = 30 * r85 # <<<<<<<<<<<<<< - * r98 = r83 * x3 - * r99 = 30 * x3 - */ - __pyx_v_r97 = (30.0 * __pyx_v_r85); - - /* "fontTools/pens/momentsPen.py":554 - * r96 = r79 * x3 - * r97 = 30 * r85 - * r98 = r83 * x3 # <<<<<<<<<<<<<< - * r99 = 30 * x3 - * r100 = 42 * x3 - */ - __pyx_v_r98 = (__pyx_v_r83 * __pyx_v_x3); - - /* "fontTools/pens/momentsPen.py":555 - * r97 = 30 * r85 - * r98 = r83 * x3 - * r99 = 30 * x3 # <<<<<<<<<<<<<< - * r100 = 42 * x3 - * r101 = r42 * x1 - */ - __pyx_v_r99 = (30.0 * __pyx_v_x3); - - /* "fontTools/pens/momentsPen.py":556 - * r98 = r83 * x3 - * r99 = 30 * x3 - * r100 = 42 * x3 # <<<<<<<<<<<<<< - * r101 = r42 * x1 - * r102 = r10 * y2 + 14 * r14 + 126 * r18 * y1 + r81 * r99 - */ - __pyx_v_r100 = (42.0 * __pyx_v_x3); - - /* "fontTools/pens/momentsPen.py":557 - * r99 = 30 * x3 - * r100 = 42 * x3 - * r101 = r42 * x1 # <<<<<<<<<<<<<< - * r102 = r10 * y2 + 14 * r14 + 126 * r18 * y1 + r81 * r99 - * r103 = 378 * r48 - */ - __pyx_v_r101 = (__pyx_v_r42 * __pyx_v_x1); - - /* "fontTools/pens/momentsPen.py":558 - * r100 = 42 * x3 - * r101 = r42 * x1 - * r102 = r10 * y2 + 14 * r14 + 126 * r18 * y1 + r81 * r99 # <<<<<<<<<<<<<< - * r103 = 378 * r48 - * r104 = 18 * y1 - */ - __pyx_v_r102 = ((((__pyx_v_r10 * __pyx_v_y2) + (14.0 * __pyx_v_r14)) + ((126.0 * __pyx_v_r18) * __pyx_v_y1)) + (__pyx_v_r81 * __pyx_v_r99)); - - /* "fontTools/pens/momentsPen.py":559 - * r101 = r42 * x1 - * r102 = r10 * y2 + 14 * r14 + 126 * r18 * y1 + r81 * r99 - * r103 = 378 * r48 # <<<<<<<<<<<<<< - * r104 = 18 * y1 - * r105 = r104 * y2 - */ - __pyx_v_r103 = (378.0 * __pyx_v_r48); - - /* "fontTools/pens/momentsPen.py":560 - * r102 = r10 * y2 + 14 * r14 + 126 * r18 * y1 + r81 * r99 - * r103 = 378 * r48 - * r104 = 18 * y1 # <<<<<<<<<<<<<< - * r105 = r104 * y2 - * r106 = y0 * y1 - */ - __pyx_v_r104 = (18.0 * __pyx_v_y1); - - /* 
"fontTools/pens/momentsPen.py":561 - * r103 = 378 * r48 - * r104 = 18 * y1 - * r105 = r104 * y2 # <<<<<<<<<<<<<< - * r106 = y0 * y1 - * r107 = 252 * y2 - */ - __pyx_v_r105 = (__pyx_v_r104 * __pyx_v_y2); - - /* "fontTools/pens/momentsPen.py":562 - * r104 = 18 * y1 - * r105 = r104 * y2 - * r106 = y0 * y1 # <<<<<<<<<<<<<< - * r107 = 252 * y2 - * r108 = r107 * y0 - */ - __pyx_v_r106 = (__pyx_v_y0 * __pyx_v_y1); - - /* "fontTools/pens/momentsPen.py":563 - * r105 = r104 * y2 - * r106 = y0 * y1 - * r107 = 252 * y2 # <<<<<<<<<<<<<< - * r108 = r107 * y0 - * r109 = y0 * y3 - */ - __pyx_v_r107 = (252.0 * __pyx_v_y2); - - /* "fontTools/pens/momentsPen.py":564 - * r106 = y0 * y1 - * r107 = 252 * y2 - * r108 = r107 * y0 # <<<<<<<<<<<<<< - * r109 = y0 * y3 - * r110 = 42 * r64 - */ - __pyx_v_r108 = (__pyx_v_r107 * __pyx_v_y0); - - /* "fontTools/pens/momentsPen.py":565 - * r107 = 252 * y2 - * r108 = r107 * y0 - * r109 = y0 * y3 # <<<<<<<<<<<<<< - * r110 = 42 * r64 - * r111 = 378 * r53 - */ - __pyx_v_r109 = (__pyx_v_y0 * __pyx_v_y3); - - /* "fontTools/pens/momentsPen.py":566 - * r108 = r107 * y0 - * r109 = y0 * y3 - * r110 = 42 * r64 # <<<<<<<<<<<<<< - * r111 = 378 * r53 - * r112 = 63 * r48 - */ - __pyx_v_r110 = (42.0 * __pyx_v_r64); - - /* "fontTools/pens/momentsPen.py":567 - * r109 = y0 * y3 - * r110 = 42 * r64 - * r111 = 378 * r53 # <<<<<<<<<<<<<< - * r112 = 63 * r48 - * r113 = 27 * x2 - */ - __pyx_v_r111 = (378.0 * __pyx_v_r53); - - /* "fontTools/pens/momentsPen.py":568 - * r110 = 42 * r64 - * r111 = 378 * r53 - * r112 = 63 * r48 # <<<<<<<<<<<<<< - * r113 = 27 * x2 - * r114 = r27 * y2 - */ - __pyx_v_r112 = (63.0 * __pyx_v_r48); - - /* "fontTools/pens/momentsPen.py":569 - * r111 = 378 * r53 - * r112 = 63 * r48 - * r113 = 27 * x2 # <<<<<<<<<<<<<< - * r114 = r27 * y2 - * r115 = r113 * r48 + 42 * r52 - */ - __pyx_v_r113 = (27.0 * __pyx_v_x2); - - /* "fontTools/pens/momentsPen.py":570 - * r112 = 63 * r48 - * r113 = 27 * x2 - * r114 = r27 * y2 # <<<<<<<<<<<<<< - * r115 = r113 * r48 + 42 * r52 - * r116 = x3 * y3 - */ - __pyx_v_r114 = (__pyx_v_r27 * __pyx_v_y2); - - /* "fontTools/pens/momentsPen.py":571 - * r113 = 27 * x2 - * r114 = r27 * y2 - * r115 = r113 * r48 + 42 * r52 # <<<<<<<<<<<<<< - * r116 = x3 * y3 - * r117 = 54 * r42 - */ - __pyx_v_r115 = ((__pyx_v_r113 * __pyx_v_r48) + (42.0 * __pyx_v_r52)); - - /* "fontTools/pens/momentsPen.py":572 - * r114 = r27 * y2 - * r115 = r113 * r48 + 42 * r52 - * r116 = x3 * y3 # <<<<<<<<<<<<<< - * r117 = 54 * r42 - * r118 = r51 * x1 - */ - __pyx_v_r116 = (__pyx_v_x3 * __pyx_v_y3); - - /* "fontTools/pens/momentsPen.py":573 - * r115 = r113 * r48 + 42 * r52 - * r116 = x3 * y3 - * r117 = 54 * r42 # <<<<<<<<<<<<<< - * r118 = r51 * x1 - * r119 = r51 * x2 - */ - __pyx_v_r117 = (54.0 * __pyx_v_r42); - - /* "fontTools/pens/momentsPen.py":574 - * r116 = x3 * y3 - * r117 = 54 * r42 - * r118 = r51 * x1 # <<<<<<<<<<<<<< - * r119 = r51 * x2 - * r120 = r48 * x1 - */ - __pyx_v_r118 = (__pyx_v_r51 * __pyx_v_x1); - - /* "fontTools/pens/momentsPen.py":575 - * r117 = 54 * r42 - * r118 = r51 * x1 - * r119 = r51 * x2 # <<<<<<<<<<<<<< - * r120 = r48 * x1 - * r121 = 21 * x3 - */ - __pyx_v_r119 = (__pyx_v_r51 * __pyx_v_x2); - - /* "fontTools/pens/momentsPen.py":576 - * r118 = r51 * x1 - * r119 = r51 * x2 - * r120 = r48 * x1 # <<<<<<<<<<<<<< - * r121 = 21 * x3 - * r122 = r64 * x1 - */ - __pyx_v_r120 = (__pyx_v_r48 * __pyx_v_x1); - - /* "fontTools/pens/momentsPen.py":577 - * r119 = r51 * x2 - * r120 = r48 * x1 - * r121 = 21 * x3 # <<<<<<<<<<<<<< - * r122 = r64 * x1 - * r123 = r81 * y3 - */ - 
__pyx_v_r121 = (21.0 * __pyx_v_x3); - - /* "fontTools/pens/momentsPen.py":578 - * r120 = r48 * x1 - * r121 = 21 * x3 - * r122 = r64 * x1 # <<<<<<<<<<<<<< - * r123 = r81 * y3 - * r124 = 30 * r27 * y1 + r49 * x2 + 14 * r52 + 126 * r53 * x1 - */ - __pyx_v_r122 = (__pyx_v_r64 * __pyx_v_x1); - - /* "fontTools/pens/momentsPen.py":579 - * r121 = 21 * x3 - * r122 = r64 * x1 - * r123 = r81 * y3 # <<<<<<<<<<<<<< - * r124 = 30 * r27 * y1 + r49 * x2 + 14 * r52 + 126 * r53 * x1 - * r125 = y2**3 - */ - __pyx_v_r123 = (__pyx_v_r81 * __pyx_v_y3); - - /* "fontTools/pens/momentsPen.py":580 - * r122 = r64 * x1 - * r123 = r81 * y3 - * r124 = 30 * r27 * y1 + r49 * x2 + 14 * r52 + 126 * r53 * x1 # <<<<<<<<<<<<<< - * r125 = y2**3 - * r126 = y3**3 - */ - __pyx_v_r124 = (((((30.0 * __pyx_v_r27) * __pyx_v_y1) + (__pyx_v_r49 * __pyx_v_x2)) + (14.0 * __pyx_v_r52)) + ((126.0 * __pyx_v_r53) * __pyx_v_x1)); - - /* "fontTools/pens/momentsPen.py":581 - * r123 = r81 * y3 - * r124 = 30 * r27 * y1 + r49 * x2 + 14 * r52 + 126 * r53 * x1 - * r125 = y2**3 # <<<<<<<<<<<<<< - * r126 = y3**3 - * r127 = y1**3 - */ - __pyx_v_r125 = pow(__pyx_v_y2, 3.0); - - /* "fontTools/pens/momentsPen.py":582 - * r124 = 30 * r27 * y1 + r49 * x2 + 14 * r52 + 126 * r53 * x1 - * r125 = y2**3 - * r126 = y3**3 # <<<<<<<<<<<<<< - * r127 = y1**3 - * r128 = y0**3 - */ - __pyx_v_r126 = pow(__pyx_v_y3, 3.0); - - /* "fontTools/pens/momentsPen.py":583 - * r125 = y2**3 - * r126 = y3**3 - * r127 = y1**3 # <<<<<<<<<<<<<< - * r128 = y0**3 - * r129 = r51 * y2 - */ - __pyx_v_r127 = pow(__pyx_v_y1, 3.0); - - /* "fontTools/pens/momentsPen.py":584 - * r126 = y3**3 - * r127 = y1**3 - * r128 = y0**3 # <<<<<<<<<<<<<< - * r129 = r51 * y2 - * r130 = r112 * y3 + r21 * r51 - */ - __pyx_v_r128 = pow(__pyx_v_y0, 3.0); - - /* "fontTools/pens/momentsPen.py":585 - * r127 = y1**3 - * r128 = y0**3 - * r129 = r51 * y2 # <<<<<<<<<<<<<< - * r130 = r112 * y3 + r21 * r51 - * r131 = 189 * r53 - */ - __pyx_v_r129 = (__pyx_v_r51 * __pyx_v_y2); - - /* "fontTools/pens/momentsPen.py":586 - * r128 = y0**3 - * r129 = r51 * y2 - * r130 = r112 * y3 + r21 * r51 # <<<<<<<<<<<<<< - * r131 = 189 * r53 - * r132 = 90 * y2 - */ - __pyx_v_r130 = ((__pyx_v_r112 * __pyx_v_y3) + (__pyx_v_r21 * __pyx_v_r51)); - - /* "fontTools/pens/momentsPen.py":587 - * r129 = r51 * y2 - * r130 = r112 * y3 + r21 * r51 - * r131 = 189 * r53 # <<<<<<<<<<<<<< - * r132 = 90 * y2 - * - */ - __pyx_v_r131 = (189.0 * __pyx_v_r53); - - /* "fontTools/pens/momentsPen.py":588 - * r130 = r112 * y3 + r21 * r51 - * r131 = 189 * r53 - * r132 = 90 * y2 # <<<<<<<<<<<<<< - * - * self.area += ( - */ - __pyx_v_r132 = (90.0 * __pyx_v_y2); - - /* "fontTools/pens/momentsPen.py":590 - * r132 = 90 * y2 - * - * self.area += ( # <<<<<<<<<<<<<< - * -r1 / 20 - * - r3 / 20 - */ - __pyx_t_3 = __Pyx_PyObject_GetAttrStr(__pyx_v_self, __pyx_n_s_area); if (unlikely(!__pyx_t_3)) __PYX_ERR(0, 590, __pyx_L1_error) - __Pyx_GOTREF(__pyx_t_3); - - /* "fontTools/pens/momentsPen.py":597 - * + 3 * x1 * (y2 + y3) / 20 - * + 3 * x2 * y3 / 10 - * - y0 * (r5 + r6 + x3) / 20 # <<<<<<<<<<<<<< - * ) - * self.momentX += ( - */ - __pyx_t_1 = PyFloat_FromDouble(((((((((-__pyx_v_r1) / 20.0) - (__pyx_v_r3 / 20.0)) - ((__pyx_v_r4 * (__pyx_v_x2 + __pyx_v_x3)) / 20.0)) + ((__pyx_v_x0 * (((__pyx_v_r7 + __pyx_v_r8) + (10.0 * __pyx_v_y0)) + __pyx_v_y3)) / 20.0)) + (((3.0 * __pyx_v_x1) * (__pyx_v_y2 + __pyx_v_y3)) / 20.0)) + (((3.0 * __pyx_v_x2) * __pyx_v_y3) / 10.0)) - ((__pyx_v_y0 * ((__pyx_v_r5 + __pyx_v_r6) + __pyx_v_x3)) / 20.0))); if (unlikely(!__pyx_t_1)) __PYX_ERR(0, 597, 
__pyx_L1_error) - __Pyx_GOTREF(__pyx_t_1); - - /* "fontTools/pens/momentsPen.py":590 - * r132 = 90 * y2 - * - * self.area += ( # <<<<<<<<<<<<<< - * -r1 / 20 - * - r3 / 20 - */ - __pyx_t_2 = PyNumber_InPlaceAdd(__pyx_t_3, __pyx_t_1); if (unlikely(!__pyx_t_2)) __PYX_ERR(0, 590, __pyx_L1_error) - __Pyx_GOTREF(__pyx_t_2); - __Pyx_DECREF(__pyx_t_3); __pyx_t_3 = 0; - __Pyx_DECREF(__pyx_t_1); __pyx_t_1 = 0; - if (__Pyx_PyObject_SetAttrStr(__pyx_v_self, __pyx_n_s_area, __pyx_t_2) < 0) __PYX_ERR(0, 590, __pyx_L1_error) - __Pyx_DECREF(__pyx_t_2); __pyx_t_2 = 0; - - /* "fontTools/pens/momentsPen.py":599 - * - y0 * (r5 + r6 + x3) / 20 - * ) - * self.momentX += ( # <<<<<<<<<<<<<< - * r11 / 840 - * - r13 / 8 - */ - __pyx_t_2 = __Pyx_PyObject_GetAttrStr(__pyx_v_self, __pyx_n_s_momentX); if (unlikely(!__pyx_t_2)) __PYX_ERR(0, 599, __pyx_L1_error) - __Pyx_GOTREF(__pyx_t_2); - - /* "fontTools/pens/momentsPen.py":621 - * ) - * / 840 - * - y0 * (r17 + r30 * x2 + r31 * x1 + r32 + r33 + 18 * r9) / 840 # <<<<<<<<<<<<<< - * ) - * self.momentY += ( - */ - __pyx_t_1 = PyFloat_FromDouble(((((((((((__pyx_v_r11 / 840.0) - (__pyx_v_r13 / 8.0)) - (__pyx_v_r14 / 3.0)) - ((__pyx_v_r17 * ((-__pyx_v_r15) + __pyx_v_r8)) / 840.0)) + ((__pyx_v_r19 * (__pyx_v_r8 + (2.0 * __pyx_v_y3))) / 840.0)) + ((__pyx_v_r20 * (((__pyx_v_r0 + __pyx_v_r21) + (56.0 * __pyx_v_y0)) + __pyx_v_y3)) / 168.0)) + ((__pyx_v_r29 * (((-__pyx_v_r23) + __pyx_v_r25) + __pyx_v_r28)) / 840.0)) - ((__pyx_v_r4 * (((10.0 * __pyx_v_r12) + __pyx_v_r17) + __pyx_v_r22)) / 840.0)) + ((__pyx_v_x0 * (((((((((12.0 * __pyx_v_r27) + (__pyx_v_r30 * __pyx_v_y2)) + __pyx_v_r34) - (__pyx_v_r35 * __pyx_v_x1)) - __pyx_v_r37) - (__pyx_v_r38 * __pyx_v_y0)) + (__pyx_v_r39 * __pyx_v_x1)) - (__pyx_v_r4 * __pyx_v_x3)) + __pyx_v_r45)) / 840.0)) - ((__pyx_v_y0 * (((((__pyx_v_r17 + (__pyx_v_r30 * __pyx_v_x2)) + (__pyx_v_r31 * __pyx_v_x1)) + __pyx_v_r32) + __pyx_v_r33) + (18.0 * __pyx_v_r9))) / 840.0))); if (unlikely(!__pyx_t_1)) __PYX_ERR(0, 621, __pyx_L1_error) - __Pyx_GOTREF(__pyx_t_1); - - /* "fontTools/pens/momentsPen.py":599 - * - y0 * (r5 + r6 + x3) / 20 - * ) - * self.momentX += ( # <<<<<<<<<<<<<< - * r11 / 840 - * - r13 / 8 - */ - __pyx_t_3 = PyNumber_InPlaceAdd(__pyx_t_2, __pyx_t_1); if (unlikely(!__pyx_t_3)) __PYX_ERR(0, 599, __pyx_L1_error) - __Pyx_GOTREF(__pyx_t_3); - __Pyx_DECREF(__pyx_t_2); __pyx_t_2 = 0; - __Pyx_DECREF(__pyx_t_1); __pyx_t_1 = 0; - if (__Pyx_PyObject_SetAttrStr(__pyx_v_self, __pyx_n_s_momentX, __pyx_t_3) < 0) __PYX_ERR(0, 599, __pyx_L1_error) - __Pyx_DECREF(__pyx_t_3); __pyx_t_3 = 0; - - /* "fontTools/pens/momentsPen.py":623 - * - y0 * (r17 + r30 * x2 + r31 * x1 + r32 + r33 + 18 * r9) / 840 - * ) - * self.momentY += ( # <<<<<<<<<<<<<< - * -r4 * (r25 + r58) / 840 - * - r47 / 8 - */ - __pyx_t_3 = __Pyx_PyObject_GetAttrStr(__pyx_v_self, __pyx_n_s_momentY); if (unlikely(!__pyx_t_3)) __PYX_ERR(0, 623, __pyx_L1_error) - __Pyx_GOTREF(__pyx_t_3); - - /* "fontTools/pens/momentsPen.py":646 - * + x1 * (r24 * y1 + 10 * r51 + r59 + r60 + r7 * y3) / 280 - * + x2 * y3 * (r15 + r8) / 56 - * - y0 * (r16 * y1 + r31 * y2 + r44 * x2 + r45 + r61 - r62 * x1) / 840 # <<<<<<<<<<<<<< - * ) - * self.momentXX += ( - */ - __pyx_t_1 = PyFloat_FromDouble(((((((((((((-__pyx_v_r4) * (__pyx_v_r25 + __pyx_v_r58)) / 840.0) - (__pyx_v_r47 / 8.0)) - (__pyx_v_r50 / 840.0)) - (__pyx_v_r52 / 6.0)) - ((__pyx_v_r54 * (__pyx_v_r6 + (2.0 * __pyx_v_x3))) / 840.0)) - ((__pyx_v_r55 * ((__pyx_v_r56 + __pyx_v_r57) + __pyx_v_x3)) / 168.0)) + ((__pyx_v_x0 * ((((((((((__pyx_v_r35 * __pyx_v_y1) + 
(__pyx_v_r40 * __pyx_v_y0)) + (__pyx_v_r44 * __pyx_v_y2)) + (18.0 * __pyx_v_r48)) + (140.0 * __pyx_v_r55)) + __pyx_v_r59) + __pyx_v_r63) + (12.0 * __pyx_v_r64)) + __pyx_v_r65) + __pyx_v_r66)) / 840.0)) + ((__pyx_v_x1 * (((((__pyx_v_r24 * __pyx_v_y1) + (10.0 * __pyx_v_r51)) + __pyx_v_r59) + __pyx_v_r60) + (__pyx_v_r7 * __pyx_v_y3))) / 280.0)) + (((__pyx_v_x2 * __pyx_v_y3) * (__pyx_v_r15 + __pyx_v_r8)) / 56.0)) - ((__pyx_v_y0 * ((((((__pyx_v_r16 * __pyx_v_y1) + (__pyx_v_r31 * __pyx_v_y2)) + (__pyx_v_r44 * __pyx_v_x2)) + __pyx_v_r45) + __pyx_v_r61) - (__pyx_v_r62 * __pyx_v_x1))) / 840.0))); if (unlikely(!__pyx_t_1)) __PYX_ERR(0, 646, __pyx_L1_error) - __Pyx_GOTREF(__pyx_t_1); - - /* "fontTools/pens/momentsPen.py":623 - * - y0 * (r17 + r30 * x2 + r31 * x1 + r32 + r33 + 18 * r9) / 840 - * ) - * self.momentY += ( # <<<<<<<<<<<<<< - * -r4 * (r25 + r58) / 840 - * - r47 / 8 - */ - __pyx_t_2 = PyNumber_InPlaceAdd(__pyx_t_3, __pyx_t_1); if (unlikely(!__pyx_t_2)) __PYX_ERR(0, 623, __pyx_L1_error) - __Pyx_GOTREF(__pyx_t_2); - __Pyx_DECREF(__pyx_t_3); __pyx_t_3 = 0; - __Pyx_DECREF(__pyx_t_1); __pyx_t_1 = 0; - if (__Pyx_PyObject_SetAttrStr(__pyx_v_self, __pyx_n_s_momentY, __pyx_t_2) < 0) __PYX_ERR(0, 623, __pyx_L1_error) - __Pyx_DECREF(__pyx_t_2); __pyx_t_2 = 0; - - /* "fontTools/pens/momentsPen.py":648 - * - y0 * (r16 * y1 + r31 * y2 + r44 * x2 + r45 + r61 - r62 * x1) / 840 - * ) - * self.momentXX += ( # <<<<<<<<<<<<<< - * -r12 * r72 * (-r40 + r8) / 9240 - * + 3 * r18 * (r28 + r34 - r38 * y1 + r75) / 3080 - */ - __pyx_t_2 = __Pyx_PyObject_GetAttrStr(__pyx_v_self, __pyx_n_s_momentXX); if (unlikely(!__pyx_t_2)) __PYX_ERR(0, 648, __pyx_L1_error) - __Pyx_GOTREF(__pyx_t_2); - - /* "fontTools/pens/momentsPen.py":706 - * ) - * / 9240 - * - y0 # <<<<<<<<<<<<<< - * * ( - * r12 * r56 - */ - __pyx_t_1 = PyFloat_FromDouble(((((((((((((((((-__pyx_v_r12) * __pyx_v_r72) * ((-__pyx_v_r40) + __pyx_v_r8)) / 9240.0) + (((3.0 * __pyx_v_r18) * (((__pyx_v_r28 + __pyx_v_r34) - (__pyx_v_r38 * __pyx_v_y1)) + __pyx_v_r75)) / 3080.0)) + ((__pyx_v_r20 * (((((((((__pyx_v_r24 * __pyx_v_x3) - (__pyx_v_r72 * __pyx_v_y0)) - (__pyx_v_r76 * __pyx_v_y0)) - (__pyx_v_r77 * __pyx_v_y0)) + __pyx_v_r78) + (__pyx_v_r79 * __pyx_v_y3)) + (__pyx_v_r80 * __pyx_v_y1)) + (210.0 * __pyx_v_r81)) + __pyx_v_r84)) / 9240.0)) - ((__pyx_v_r29 * ((((((((__pyx_v_r12 * __pyx_v_r21) + (14.0 * __pyx_v_r13)) + (__pyx_v_r44 * __pyx_v_r9)) - (__pyx_v_r73 * __pyx_v_y3)) + (54.0 * __pyx_v_r86)) - (84.0 * __pyx_v_r87)) - __pyx_v_r89) - __pyx_v_r90)) / 9240.0)) - ((__pyx_v_r4 * (((((70.0 * __pyx_v_r12) * __pyx_v_x2) + (27.0 * __pyx_v_r67)) + (42.0 * __pyx_v_r68)) + __pyx_v_r74)) / 9240.0)) + (((3.0 * __pyx_v_r67) * __pyx_v_y3) / 220.0)) - ((__pyx_v_r68 * __pyx_v_r69) / 9240.0)) - ((__pyx_v_r68 * __pyx_v_y3) / 4.0)) - (((__pyx_v_r70 * __pyx_v_r9) * ((-__pyx_v_r62) + __pyx_v_y2)) / 9240.0)) + (((3.0 * __pyx_v_r71) * (__pyx_v_r24 + __pyx_v_r40)) / 3080.0)) + ((pow(__pyx_v_x0, 3.0) * (((__pyx_v_r24 + __pyx_v_r44) + (165.0 * __pyx_v_y0)) + __pyx_v_y3)) / 660.0)) + ((__pyx_v_x0 * (((((((((((((((((((__pyx_v_r100 * __pyx_v_r27) + (162.0 * __pyx_v_r101)) + __pyx_v_r102) + __pyx_v_r11) + ((63.0 * __pyx_v_r18) * __pyx_v_y3)) + (__pyx_v_r27 * __pyx_v_r91)) - (__pyx_v_r33 * __pyx_v_y0)) - (__pyx_v_r37 * __pyx_v_x3)) + (__pyx_v_r43 * __pyx_v_x3)) - (__pyx_v_r73 * __pyx_v_y0)) - (__pyx_v_r88 * __pyx_v_y1)) + (__pyx_v_r92 * __pyx_v_y2)) - (__pyx_v_r93 * __pyx_v_y0)) - (9.0 * __pyx_v_r94)) - (__pyx_v_r95 * __pyx_v_y0)) - (__pyx_v_r96 * __pyx_v_y0)) - (__pyx_v_r97 * __pyx_v_y1)) - (18.0 
* __pyx_v_r98)) + ((__pyx_v_r99 * __pyx_v_x1) * __pyx_v_y3))) / 9240.0)) - ((__pyx_v_y0 * ((((((((((__pyx_v_r12 * __pyx_v_r56) + (__pyx_v_r12 * __pyx_v_r80)) + (__pyx_v_r32 * __pyx_v_x3)) + (45.0 * __pyx_v_r67)) + (14.0 * __pyx_v_r68)) + (126.0 * __pyx_v_r71)) + __pyx_v_r74) + (__pyx_v_r85 * __pyx_v_r91)) + ((135.0 * __pyx_v_r9) * __pyx_v_x1)) + (__pyx_v_r92 * __pyx_v_x2))) / 9240.0))); if (unlikely(!__pyx_t_1)) __PYX_ERR(0, 706, __pyx_L1_error) - __Pyx_GOTREF(__pyx_t_1); - - /* "fontTools/pens/momentsPen.py":648 - * - y0 * (r16 * y1 + r31 * y2 + r44 * x2 + r45 + r61 - r62 * x1) / 840 - * ) - * self.momentXX += ( # <<<<<<<<<<<<<< - * -r12 * r72 * (-r40 + r8) / 9240 - * + 3 * r18 * (r28 + r34 - r38 * y1 + r75) / 3080 - */ - __pyx_t_3 = PyNumber_InPlaceAdd(__pyx_t_2, __pyx_t_1); if (unlikely(!__pyx_t_3)) __PYX_ERR(0, 648, __pyx_L1_error) - __Pyx_GOTREF(__pyx_t_3); - __Pyx_DECREF(__pyx_t_2); __pyx_t_2 = 0; - __Pyx_DECREF(__pyx_t_1); __pyx_t_1 = 0; - if (__Pyx_PyObject_SetAttrStr(__pyx_v_self, __pyx_n_s_momentXX, __pyx_t_3) < 0) __PYX_ERR(0, 648, __pyx_L1_error) - __Pyx_DECREF(__pyx_t_3); __pyx_t_3 = 0; - - /* "fontTools/pens/momentsPen.py":721 - * / 9240 - * ) - * self.momentXY += ( # <<<<<<<<<<<<<< - * -r103 * r12 / 18480 - * - r12 * r51 / 8 - */ - __pyx_t_3 = __Pyx_PyObject_GetAttrStr(__pyx_v_self, __pyx_n_s_momentXY); if (unlikely(!__pyx_t_3)) __PYX_ERR(0, 721, __pyx_L1_error) - __Pyx_GOTREF(__pyx_t_3); - - /* "fontTools/pens/momentsPen.py":783 - * ) - * / 3080 - * - y0 # <<<<<<<<<<<<<< - * * ( - * 54 * r101 - */ - __pyx_t_1 = PyFloat_FromDouble((((((((((((((((-__pyx_v_r103) * __pyx_v_r12) / 18480.0) - ((__pyx_v_r12 * __pyx_v_r51) / 8.0)) - (((3.0 * __pyx_v_r14) * __pyx_v_y2) / 44.0)) + (((3.0 * __pyx_v_r18) * ((((__pyx_v_r105 + (__pyx_v_r2 * __pyx_v_y1)) + (18.0 * __pyx_v_r46)) + (15.0 * __pyx_v_r48)) + (7.0 * __pyx_v_r51))) / 6160.0)) + ((__pyx_v_r20 * ((((((((((1260.0 * __pyx_v_r106) + (__pyx_v_r107 * __pyx_v_y1)) + __pyx_v_r108) + (28.0 * __pyx_v_r109)) + __pyx_v_r110) + __pyx_v_r111) + __pyx_v_r112) + (30.0 * __pyx_v_r46)) + (2310.0 * __pyx_v_r55)) + __pyx_v_r66)) / 18480.0)) - ((__pyx_v_r54 * (((7.0 * __pyx_v_r12) + (18.0 * __pyx_v_r85)) + (15.0 * __pyx_v_r9))) / 18480.0)) - ((__pyx_v_r55 * (((((__pyx_v_r33 + __pyx_v_r73) + __pyx_v_r93) + __pyx_v_r95) + __pyx_v_r96) + __pyx_v_r97)) / 18480.0)) - ((__pyx_v_r7 * (((((42.0 * __pyx_v_r13) + (__pyx_v_r82 * __pyx_v_x3)) + (28.0 * __pyx_v_r87)) + __pyx_v_r89) + __pyx_v_r90)) / 18480.0)) - (((3.0 * __pyx_v_r85) * (__pyx_v_r48 - __pyx_v_r66)) / 220.0)) + ((((3.0 * __pyx_v_r9) * __pyx_v_y3) * (__pyx_v_r62 + (2.0 * __pyx_v_y2))) / 440.0)) + ((__pyx_v_x0 * (((((((((((((((((((((((-__pyx_v_r1) * __pyx_v_y0) - ((84.0 * __pyx_v_r106) * __pyx_v_x2)) + (__pyx_v_r109 * __pyx_v_r56)) + (54.0 * __pyx_v_r114)) + (__pyx_v_r117 * __pyx_v_y1)) + (15.0 * __pyx_v_r118)) + (21.0 * __pyx_v_r119)) + (81.0 * __pyx_v_r120)) + (__pyx_v_r121 * __pyx_v_r46)) + (54.0 * __pyx_v_r122)) + (60.0 * __pyx_v_r123)) + __pyx_v_r124) - ((__pyx_v_r21 * __pyx_v_x3) * __pyx_v_y0)) + (__pyx_v_r23 * __pyx_v_y3)) - (__pyx_v_r54 * __pyx_v_x3)) - (__pyx_v_r55 * __pyx_v_r72)) - (__pyx_v_r55 * __pyx_v_r76)) - (__pyx_v_r55 * __pyx_v_r77)) + ((__pyx_v_r57 * __pyx_v_y0) * __pyx_v_y3)) + (__pyx_v_r60 * __pyx_v_x3)) + ((84.0 * __pyx_v_r81) * __pyx_v_y0)) + ((189.0 * __pyx_v_r81) * __pyx_v_y1))) / 9240.0)) + ((__pyx_v_x1 * ((((((((__pyx_v_r104 * __pyx_v_r27) - (__pyx_v_r105 * __pyx_v_x3)) - (__pyx_v_r113 * __pyx_v_r53)) + (63.0 * __pyx_v_r114)) + __pyx_v_r115) - (__pyx_v_r16 * __pyx_v_r53)) + 
(28.0 * __pyx_v_r47)) + (__pyx_v_r51 * __pyx_v_r80))) / 3080.0)) - ((__pyx_v_y0 * (((((((((((((54.0 * __pyx_v_r101) + __pyx_v_r102) + (__pyx_v_r116 * __pyx_v_r5)) + (__pyx_v_r117 * __pyx_v_x3)) + (21.0 * __pyx_v_r13)) - (__pyx_v_r19 * __pyx_v_y3)) + (__pyx_v_r22 * __pyx_v_y3)) + (__pyx_v_r78 * __pyx_v_x3)) + ((189.0 * __pyx_v_r83) * __pyx_v_x2)) + (60.0 * __pyx_v_r86)) + ((81.0 * __pyx_v_r9) * __pyx_v_y1)) + (15.0 * __pyx_v_r94)) + (54.0 * __pyx_v_r98))) / 9240.0))); if (unlikely(!__pyx_t_1)) __PYX_ERR(0, 783, __pyx_L1_error) - __Pyx_GOTREF(__pyx_t_1); - - /* "fontTools/pens/momentsPen.py":721 - * / 9240 - * ) - * self.momentXY += ( # <<<<<<<<<<<<<< - * -r103 * r12 / 18480 - * - r12 * r51 / 8 - */ - __pyx_t_2 = PyNumber_InPlaceAdd(__pyx_t_3, __pyx_t_1); if (unlikely(!__pyx_t_2)) __PYX_ERR(0, 721, __pyx_L1_error) - __Pyx_GOTREF(__pyx_t_2); - __Pyx_DECREF(__pyx_t_3); __pyx_t_3 = 0; - __Pyx_DECREF(__pyx_t_1); __pyx_t_1 = 0; - if (__Pyx_PyObject_SetAttrStr(__pyx_v_self, __pyx_n_s_momentXY, __pyx_t_2) < 0) __PYX_ERR(0, 721, __pyx_L1_error) - __Pyx_DECREF(__pyx_t_2); __pyx_t_2 = 0; - - /* "fontTools/pens/momentsPen.py":801 - * / 9240 - * ) - * self.momentYY += ( # <<<<<<<<<<<<<< - * -r103 * r116 / 9240 - * - r125 * r70 / 9240 - */ - __pyx_t_2 = __Pyx_PyObject_GetAttrStr(__pyx_v_self, __pyx_n_s_momentYY); if (unlikely(!__pyx_t_2)) __PYX_ERR(0, 801, __pyx_L1_error) - __Pyx_GOTREF(__pyx_t_2); - - /* "fontTools/pens/momentsPen.py":849 - * / 3080 - * + 3 * x2 * y3 * (r48 + r66 + r8 * y3) / 220 - * - y0 # <<<<<<<<<<<<<< - * * ( - * r100 * r46 - */ - __pyx_t_1 = PyFloat_FromDouble((((((((((((((((-__pyx_v_r103) * __pyx_v_r116) / 9240.0) - ((__pyx_v_r125 * __pyx_v_r70) / 9240.0)) - ((__pyx_v_r126 * __pyx_v_x3) / 12.0)) - (((3.0 * __pyx_v_r127) * (__pyx_v_r26 + __pyx_v_r38)) / 3080.0)) - ((__pyx_v_r128 * ((__pyx_v_r26 + __pyx_v_r30) + __pyx_v_x3)) / 660.0)) - ((__pyx_v_r4 * ((((__pyx_v_r112 * __pyx_v_x3) + __pyx_v_r115) - (14.0 * __pyx_v_r119)) + (84.0 * __pyx_v_r47))) / 9240.0)) - ((__pyx_v_r52 * __pyx_v_r69) / 9240.0)) - ((__pyx_v_r54 * ((__pyx_v_r58 + __pyx_v_r61) + __pyx_v_r75)) / 9240.0)) - ((__pyx_v_r55 * ((((((__pyx_v_r100 * __pyx_v_y1) + (__pyx_v_r121 * __pyx_v_y2)) + (__pyx_v_r26 * __pyx_v_y3)) + (__pyx_v_r79 * __pyx_v_y2)) + __pyx_v_r84) + ((210.0 * __pyx_v_x2) * __pyx_v_y1))) / 9240.0)) + ((__pyx_v_x0 * (((((((((((((((((((__pyx_v_r108 * __pyx_v_y1) + (__pyx_v_r110 * __pyx_v_y0)) + (__pyx_v_r111 * __pyx_v_y0)) + (__pyx_v_r112 * __pyx_v_y0)) + (45.0 * __pyx_v_r125)) + (14.0 * __pyx_v_r126)) + (126.0 * __pyx_v_r127)) + (770.0 * __pyx_v_r128)) + (42.0 * __pyx_v_r129)) + __pyx_v_r130) + (__pyx_v_r131 * __pyx_v_y2)) + (__pyx_v_r132 * __pyx_v_r64)) + ((135.0 * __pyx_v_r48) * __pyx_v_y1)) + ((630.0 * __pyx_v_r55) * __pyx_v_y1)) + ((126.0 * __pyx_v_r55) * __pyx_v_y2)) + ((14.0 * __pyx_v_r55) * __pyx_v_y3)) + (__pyx_v_r63 * __pyx_v_y3)) + (__pyx_v_r65 * __pyx_v_y3)) + (__pyx_v_r66 * __pyx_v_y0))) / 9240.0)) + ((__pyx_v_x1 * ((((((((27.0 * __pyx_v_r125) + (42.0 * __pyx_v_r126)) + (70.0 * __pyx_v_r129)) + __pyx_v_r130) + (__pyx_v_r39 * __pyx_v_r53)) + (__pyx_v_r44 * __pyx_v_r48)) + ((27.0 * __pyx_v_r53) * __pyx_v_y2)) + ((54.0 * __pyx_v_r64) * __pyx_v_y2))) / 3080.0)) + ((((3.0 * __pyx_v_x2) * __pyx_v_y3) * ((__pyx_v_r48 + __pyx_v_r66) + (__pyx_v_r8 * __pyx_v_y3))) / 220.0)) - ((__pyx_v_y0 * (((((((((((((__pyx_v_r100 * __pyx_v_r46) + (18.0 * __pyx_v_r114)) - (9.0 * __pyx_v_r118)) - (27.0 * __pyx_v_r120)) - (18.0 * __pyx_v_r122)) - (30.0 * __pyx_v_r123)) + __pyx_v_r124) + (__pyx_v_r131 * __pyx_v_x2)) + 
((__pyx_v_r132 * __pyx_v_x3) * __pyx_v_y1)) + ((162.0 * __pyx_v_r42) * __pyx_v_y1)) + __pyx_v_r50) + ((63.0 * __pyx_v_r53) * __pyx_v_x3)) + (__pyx_v_r64 * __pyx_v_r99))) / 9240.0))); if (unlikely(!__pyx_t_1)) __PYX_ERR(0, 849, __pyx_L1_error) - __Pyx_GOTREF(__pyx_t_1); - - /* "fontTools/pens/momentsPen.py":801 - * / 9240 - * ) - * self.momentYY += ( # <<<<<<<<<<<<<< - * -r103 * r116 / 9240 - * - r125 * r70 / 9240 - */ - __pyx_t_3 = PyNumber_InPlaceAdd(__pyx_t_2, __pyx_t_1); if (unlikely(!__pyx_t_3)) __PYX_ERR(0, 801, __pyx_L1_error) - __Pyx_GOTREF(__pyx_t_3); - __Pyx_DECREF(__pyx_t_2); __pyx_t_2 = 0; - __Pyx_DECREF(__pyx_t_1); __pyx_t_1 = 0; - if (__Pyx_PyObject_SetAttrStr(__pyx_v_self, __pyx_n_s_momentYY, __pyx_t_3) < 0) __PYX_ERR(0, 801, __pyx_L1_error) - __Pyx_DECREF(__pyx_t_3); __pyx_t_3 = 0; - - /* "fontTools/pens/momentsPen.py":450 - * @cython.locals(x2=cython.double, y2=cython.double) - * @cython.locals(x3=cython.double, y3=cython.double) - * def _curveToOne(self, p1, p2, p3): # <<<<<<<<<<<<<< - * x0, y0 = self._getCurrentPoint() - * x1, y1 = p1 - */ - - /* function exit code */ - __pyx_r = Py_None; __Pyx_INCREF(Py_None); - goto __pyx_L0; - __pyx_L1_error:; - __Pyx_XDECREF(__pyx_t_1); - __Pyx_XDECREF(__pyx_t_2); - __Pyx_XDECREF(__pyx_t_3); - __Pyx_XDECREF(__pyx_t_4); - __Pyx_AddTraceback("fontTools.pens.momentsPen.MomentsPen._curveToOne", __pyx_clineno, __pyx_lineno, __pyx_filename); - __pyx_r = NULL; - __pyx_L0:; - __Pyx_XGIVEREF(__pyx_r); - __Pyx_RefNannyFinishContext(); - return __pyx_r; -} - -static PyMethodDef __pyx_methods[] = { - {0, 0, 0, 0} -}; - -#if PY_MAJOR_VERSION >= 3 -#if CYTHON_PEP489_MULTI_PHASE_INIT -static PyObject* __pyx_pymod_create(PyObject *spec, PyModuleDef *def); /*proto*/ -static int __pyx_pymod_exec_momentsPen(PyObject* module); /*proto*/ -static PyModuleDef_Slot __pyx_moduledef_slots[] = { - {Py_mod_create, (void*)__pyx_pymod_create}, - {Py_mod_exec, (void*)__pyx_pymod_exec_momentsPen}, - {0, NULL} -}; -#endif - -static struct PyModuleDef __pyx_moduledef = { - PyModuleDef_HEAD_INIT, - "momentsPen", - 0, /* m_doc */ - #if CYTHON_PEP489_MULTI_PHASE_INIT - 0, /* m_size */ - #else - -1, /* m_size */ - #endif - __pyx_methods /* m_methods */, - #if CYTHON_PEP489_MULTI_PHASE_INIT - __pyx_moduledef_slots, /* m_slots */ - #else - NULL, /* m_reload */ - #endif - NULL, /* m_traverse */ - NULL, /* m_clear */ - NULL /* m_free */ -}; -#endif -#ifndef CYTHON_SMALL_CODE -#if defined(__clang__) - #define CYTHON_SMALL_CODE -#elif defined(__GNUC__) && (__GNUC__ > 4 || (__GNUC__ == 4 && __GNUC_MINOR__ >= 3)) - #define CYTHON_SMALL_CODE __attribute__((cold)) -#else - #define CYTHON_SMALL_CODE -#endif -#endif - -static __Pyx_StringTabEntry __pyx_string_tab[] = { - {&__pyx_n_s_AttributeError, __pyx_k_AttributeError, sizeof(__pyx_k_AttributeError), 0, 0, 1, 1}, - {&__pyx_n_s_BasePen, __pyx_k_BasePen, sizeof(__pyx_k_BasePen), 0, 0, 1, 1}, - {&__pyx_n_s_COMPILED, __pyx_k_COMPILED, sizeof(__pyx_k_COMPILED), 0, 0, 1, 1}, - {&__pyx_kp_u_Green_theorem_is_not_defined_on, __pyx_k_Green_theorem_is_not_defined_on, sizeof(__pyx_k_Green_theorem_is_not_defined_on), 0, 1, 0, 0}, - {&__pyx_n_s_ImportError, __pyx_k_ImportError, sizeof(__pyx_k_ImportError), 0, 0, 1, 1}, - {&__pyx_kp_s_Lib_fontTools_pens_momentsPen_py, __pyx_k_Lib_fontTools_pens_momentsPen_py, sizeof(__pyx_k_Lib_fontTools_pens_momentsPen_py), 0, 0, 1, 0}, - {&__pyx_n_s_MomentsPen, __pyx_k_MomentsPen, sizeof(__pyx_k_MomentsPen), 0, 0, 1, 1}, - {&__pyx_n_u_MomentsPen, __pyx_k_MomentsPen, sizeof(__pyx_k_MomentsPen), 0, 1, 0, 1}, - 
{&__pyx_n_s_MomentsPen___init, __pyx_k_MomentsPen___init, sizeof(__pyx_k_MomentsPen___init), 0, 0, 1, 1}, - {&__pyx_n_s_MomentsPen__closePath, __pyx_k_MomentsPen__closePath, sizeof(__pyx_k_MomentsPen__closePath), 0, 0, 1, 1}, - {&__pyx_n_s_MomentsPen__curveToOne, __pyx_k_MomentsPen__curveToOne, sizeof(__pyx_k_MomentsPen__curveToOne), 0, 0, 1, 1}, - {&__pyx_n_s_MomentsPen__endPath, __pyx_k_MomentsPen__endPath, sizeof(__pyx_k_MomentsPen__endPath), 0, 0, 1, 1}, - {&__pyx_n_s_MomentsPen__lineTo, __pyx_k_MomentsPen__lineTo, sizeof(__pyx_k_MomentsPen__lineTo), 0, 0, 1, 1}, - {&__pyx_n_s_MomentsPen__moveTo, __pyx_k_MomentsPen__moveTo, sizeof(__pyx_k_MomentsPen__moveTo), 0, 0, 1, 1}, - {&__pyx_n_s_MomentsPen__qCurveToOne, __pyx_k_MomentsPen__qCurveToOne, sizeof(__pyx_k_MomentsPen__qCurveToOne), 0, 0, 1, 1}, - {&__pyx_n_s_MomentsPen__startPoint, __pyx_k_MomentsPen__startPoint, sizeof(__pyx_k_MomentsPen__startPoint), 0, 0, 1, 1}, - {&__pyx_n_s_OpenContourError, __pyx_k_OpenContourError, sizeof(__pyx_k_OpenContourError), 0, 0, 1, 1}, - {&__pyx_n_s_all, __pyx_k_all, sizeof(__pyx_k_all), 0, 0, 1, 1}, - {&__pyx_n_s_area, __pyx_k_area, sizeof(__pyx_k_area), 0, 0, 1, 1}, - {&__pyx_n_u_area, __pyx_k_area, sizeof(__pyx_k_area), 0, 1, 0, 1}, - {&__pyx_n_s_cline_in_traceback, __pyx_k_cline_in_traceback, sizeof(__pyx_k_cline_in_traceback), 0, 0, 1, 1}, - {&__pyx_n_s_closePath, __pyx_k_closePath, sizeof(__pyx_k_closePath), 0, 0, 1, 1}, - {&__pyx_n_s_curveToOne, __pyx_k_curveToOne, sizeof(__pyx_k_curveToOne), 0, 0, 1, 1}, - {&__pyx_n_s_cython, __pyx_k_cython, sizeof(__pyx_k_cython), 0, 0, 1, 1}, - {&__pyx_n_s_doc, __pyx_k_doc, sizeof(__pyx_k_doc), 0, 0, 1, 1}, - {&__pyx_n_s_endPath, __pyx_k_endPath, sizeof(__pyx_k_endPath), 0, 0, 1, 1}, - {&__pyx_n_s_fontTools_misc, __pyx_k_fontTools_misc, sizeof(__pyx_k_fontTools_misc), 0, 0, 1, 1}, - {&__pyx_n_s_fontTools_misc_symfont, __pyx_k_fontTools_misc_symfont, sizeof(__pyx_k_fontTools_misc_symfont), 0, 0, 1, 1}, - {&__pyx_n_s_fontTools_pens_basePen, __pyx_k_fontTools_pens_basePen, sizeof(__pyx_k_fontTools_pens_basePen), 0, 0, 1, 1}, - {&__pyx_n_s_fontTools_pens_momentsPen, __pyx_k_fontTools_pens_momentsPen, sizeof(__pyx_k_fontTools_pens_momentsPen), 0, 0, 1, 1}, - {&__pyx_n_s_getCurrentPoint, __pyx_k_getCurrentPoint, sizeof(__pyx_k_getCurrentPoint), 0, 0, 1, 1}, - {&__pyx_n_s_glyphset, __pyx_k_glyphset, sizeof(__pyx_k_glyphset), 0, 0, 1, 1}, - {&__pyx_n_s_import, __pyx_k_import, sizeof(__pyx_k_import), 0, 0, 1, 1}, - {&__pyx_n_s_init, __pyx_k_init, sizeof(__pyx_k_init), 0, 0, 1, 1}, - {&__pyx_n_s_lineTo, __pyx_k_lineTo, sizeof(__pyx_k_lineTo), 0, 0, 1, 1}, - {&__pyx_n_s_main, __pyx_k_main, sizeof(__pyx_k_main), 0, 0, 1, 1}, - {&__pyx_n_u_main, __pyx_k_main, sizeof(__pyx_k_main), 0, 1, 0, 1}, - {&__pyx_n_s_metaclass, __pyx_k_metaclass, sizeof(__pyx_k_metaclass), 0, 0, 1, 1}, - {&__pyx_n_s_module, __pyx_k_module, sizeof(__pyx_k_module), 0, 0, 1, 1}, - {&__pyx_n_s_momentX, __pyx_k_momentX, sizeof(__pyx_k_momentX), 0, 0, 1, 1}, - {&__pyx_n_u_momentX, __pyx_k_momentX, sizeof(__pyx_k_momentX), 0, 1, 0, 1}, - {&__pyx_n_s_momentXX, __pyx_k_momentXX, sizeof(__pyx_k_momentXX), 0, 0, 1, 1}, - {&__pyx_n_u_momentXX, __pyx_k_momentXX, sizeof(__pyx_k_momentXX), 0, 1, 0, 1}, - {&__pyx_n_s_momentXY, __pyx_k_momentXY, sizeof(__pyx_k_momentXY), 0, 0, 1, 1}, - {&__pyx_n_u_momentXY, __pyx_k_momentXY, sizeof(__pyx_k_momentXY), 0, 1, 0, 1}, - {&__pyx_n_s_momentY, __pyx_k_momentY, sizeof(__pyx_k_momentY), 0, 0, 1, 1}, - {&__pyx_n_u_momentY, __pyx_k_momentY, sizeof(__pyx_k_momentY), 0, 1, 0, 
1}, - {&__pyx_n_s_momentYY, __pyx_k_momentYY, sizeof(__pyx_k_momentYY), 0, 0, 1, 1}, - {&__pyx_n_u_momentYY, __pyx_k_momentYY, sizeof(__pyx_k_momentYY), 0, 1, 0, 1}, - {&__pyx_n_s_moveTo, __pyx_k_moveTo, sizeof(__pyx_k_moveTo), 0, 0, 1, 1}, - {&__pyx_n_s_name, __pyx_k_name, sizeof(__pyx_k_name), 0, 0, 1, 1}, - {&__pyx_n_s_p0, __pyx_k_p0, sizeof(__pyx_k_p0), 0, 0, 1, 1}, - {&__pyx_n_s_p1, __pyx_k_p1, sizeof(__pyx_k_p1), 0, 0, 1, 1}, - {&__pyx_n_s_p2, __pyx_k_p2, sizeof(__pyx_k_p2), 0, 0, 1, 1}, - {&__pyx_n_s_p3, __pyx_k_p3, sizeof(__pyx_k_p3), 0, 0, 1, 1}, - {&__pyx_n_s_prepare, __pyx_k_prepare, sizeof(__pyx_k_prepare), 0, 0, 1, 1}, - {&__pyx_n_s_printGreenPen, __pyx_k_printGreenPen, sizeof(__pyx_k_printGreenPen), 0, 0, 1, 1}, - {&__pyx_n_s_qCurveToOne, __pyx_k_qCurveToOne, sizeof(__pyx_k_qCurveToOne), 0, 0, 1, 1}, - {&__pyx_n_s_qualname, __pyx_k_qualname, sizeof(__pyx_k_qualname), 0, 0, 1, 1}, - {&__pyx_n_s_r0, __pyx_k_r0, sizeof(__pyx_k_r0), 0, 0, 1, 1}, - {&__pyx_n_s_r1, __pyx_k_r1, sizeof(__pyx_k_r1), 0, 0, 1, 1}, - {&__pyx_n_s_r10, __pyx_k_r10, sizeof(__pyx_k_r10), 0, 0, 1, 1}, - {&__pyx_n_s_r100, __pyx_k_r100, sizeof(__pyx_k_r100), 0, 0, 1, 1}, - {&__pyx_n_s_r101, __pyx_k_r101, sizeof(__pyx_k_r101), 0, 0, 1, 1}, - {&__pyx_n_s_r102, __pyx_k_r102, sizeof(__pyx_k_r102), 0, 0, 1, 1}, - {&__pyx_n_s_r103, __pyx_k_r103, sizeof(__pyx_k_r103), 0, 0, 1, 1}, - {&__pyx_n_s_r104, __pyx_k_r104, sizeof(__pyx_k_r104), 0, 0, 1, 1}, - {&__pyx_n_s_r105, __pyx_k_r105, sizeof(__pyx_k_r105), 0, 0, 1, 1}, - {&__pyx_n_s_r106, __pyx_k_r106, sizeof(__pyx_k_r106), 0, 0, 1, 1}, - {&__pyx_n_s_r107, __pyx_k_r107, sizeof(__pyx_k_r107), 0, 0, 1, 1}, - {&__pyx_n_s_r108, __pyx_k_r108, sizeof(__pyx_k_r108), 0, 0, 1, 1}, - {&__pyx_n_s_r109, __pyx_k_r109, sizeof(__pyx_k_r109), 0, 0, 1, 1}, - {&__pyx_n_s_r11, __pyx_k_r11, sizeof(__pyx_k_r11), 0, 0, 1, 1}, - {&__pyx_n_s_r110, __pyx_k_r110, sizeof(__pyx_k_r110), 0, 0, 1, 1}, - {&__pyx_n_s_r111, __pyx_k_r111, sizeof(__pyx_k_r111), 0, 0, 1, 1}, - {&__pyx_n_s_r112, __pyx_k_r112, sizeof(__pyx_k_r112), 0, 0, 1, 1}, - {&__pyx_n_s_r113, __pyx_k_r113, sizeof(__pyx_k_r113), 0, 0, 1, 1}, - {&__pyx_n_s_r114, __pyx_k_r114, sizeof(__pyx_k_r114), 0, 0, 1, 1}, - {&__pyx_n_s_r115, __pyx_k_r115, sizeof(__pyx_k_r115), 0, 0, 1, 1}, - {&__pyx_n_s_r116, __pyx_k_r116, sizeof(__pyx_k_r116), 0, 0, 1, 1}, - {&__pyx_n_s_r117, __pyx_k_r117, sizeof(__pyx_k_r117), 0, 0, 1, 1}, - {&__pyx_n_s_r118, __pyx_k_r118, sizeof(__pyx_k_r118), 0, 0, 1, 1}, - {&__pyx_n_s_r119, __pyx_k_r119, sizeof(__pyx_k_r119), 0, 0, 1, 1}, - {&__pyx_n_s_r12, __pyx_k_r12, sizeof(__pyx_k_r12), 0, 0, 1, 1}, - {&__pyx_n_s_r120, __pyx_k_r120, sizeof(__pyx_k_r120), 0, 0, 1, 1}, - {&__pyx_n_s_r121, __pyx_k_r121, sizeof(__pyx_k_r121), 0, 0, 1, 1}, - {&__pyx_n_s_r122, __pyx_k_r122, sizeof(__pyx_k_r122), 0, 0, 1, 1}, - {&__pyx_n_s_r123, __pyx_k_r123, sizeof(__pyx_k_r123), 0, 0, 1, 1}, - {&__pyx_n_s_r124, __pyx_k_r124, sizeof(__pyx_k_r124), 0, 0, 1, 1}, - {&__pyx_n_s_r125, __pyx_k_r125, sizeof(__pyx_k_r125), 0, 0, 1, 1}, - {&__pyx_n_s_r126, __pyx_k_r126, sizeof(__pyx_k_r126), 0, 0, 1, 1}, - {&__pyx_n_s_r127, __pyx_k_r127, sizeof(__pyx_k_r127), 0, 0, 1, 1}, - {&__pyx_n_s_r128, __pyx_k_r128, sizeof(__pyx_k_r128), 0, 0, 1, 1}, - {&__pyx_n_s_r129, __pyx_k_r129, sizeof(__pyx_k_r129), 0, 0, 1, 1}, - {&__pyx_n_s_r13, __pyx_k_r13, sizeof(__pyx_k_r13), 0, 0, 1, 1}, - {&__pyx_n_s_r130, __pyx_k_r130, sizeof(__pyx_k_r130), 0, 0, 1, 1}, - {&__pyx_n_s_r131, __pyx_k_r131, sizeof(__pyx_k_r131), 0, 0, 1, 1}, - {&__pyx_n_s_r132, __pyx_k_r132, 
sizeof(__pyx_k_r132), 0, 0, 1, 1}, - {&__pyx_n_s_r14, __pyx_k_r14, sizeof(__pyx_k_r14), 0, 0, 1, 1}, - {&__pyx_n_s_r15, __pyx_k_r15, sizeof(__pyx_k_r15), 0, 0, 1, 1}, - {&__pyx_n_s_r16, __pyx_k_r16, sizeof(__pyx_k_r16), 0, 0, 1, 1}, - {&__pyx_n_s_r17, __pyx_k_r17, sizeof(__pyx_k_r17), 0, 0, 1, 1}, - {&__pyx_n_s_r18, __pyx_k_r18, sizeof(__pyx_k_r18), 0, 0, 1, 1}, - {&__pyx_n_s_r19, __pyx_k_r19, sizeof(__pyx_k_r19), 0, 0, 1, 1}, - {&__pyx_n_s_r2, __pyx_k_r2, sizeof(__pyx_k_r2), 0, 0, 1, 1}, - {&__pyx_n_s_r20, __pyx_k_r20, sizeof(__pyx_k_r20), 0, 0, 1, 1}, - {&__pyx_n_s_r21, __pyx_k_r21, sizeof(__pyx_k_r21), 0, 0, 1, 1}, - {&__pyx_n_s_r22, __pyx_k_r22, sizeof(__pyx_k_r22), 0, 0, 1, 1}, - {&__pyx_n_s_r23, __pyx_k_r23, sizeof(__pyx_k_r23), 0, 0, 1, 1}, - {&__pyx_n_s_r24, __pyx_k_r24, sizeof(__pyx_k_r24), 0, 0, 1, 1}, - {&__pyx_n_s_r25, __pyx_k_r25, sizeof(__pyx_k_r25), 0, 0, 1, 1}, - {&__pyx_n_s_r26, __pyx_k_r26, sizeof(__pyx_k_r26), 0, 0, 1, 1}, - {&__pyx_n_s_r27, __pyx_k_r27, sizeof(__pyx_k_r27), 0, 0, 1, 1}, - {&__pyx_n_s_r28, __pyx_k_r28, sizeof(__pyx_k_r28), 0, 0, 1, 1}, - {&__pyx_n_s_r29, __pyx_k_r29, sizeof(__pyx_k_r29), 0, 0, 1, 1}, - {&__pyx_n_s_r3, __pyx_k_r3, sizeof(__pyx_k_r3), 0, 0, 1, 1}, - {&__pyx_n_s_r30, __pyx_k_r30, sizeof(__pyx_k_r30), 0, 0, 1, 1}, - {&__pyx_n_s_r31, __pyx_k_r31, sizeof(__pyx_k_r31), 0, 0, 1, 1}, - {&__pyx_n_s_r32, __pyx_k_r32, sizeof(__pyx_k_r32), 0, 0, 1, 1}, - {&__pyx_n_s_r33, __pyx_k_r33, sizeof(__pyx_k_r33), 0, 0, 1, 1}, - {&__pyx_n_s_r34, __pyx_k_r34, sizeof(__pyx_k_r34), 0, 0, 1, 1}, - {&__pyx_n_s_r35, __pyx_k_r35, sizeof(__pyx_k_r35), 0, 0, 1, 1}, - {&__pyx_n_s_r36, __pyx_k_r36, sizeof(__pyx_k_r36), 0, 0, 1, 1}, - {&__pyx_n_s_r37, __pyx_k_r37, sizeof(__pyx_k_r37), 0, 0, 1, 1}, - {&__pyx_n_s_r38, __pyx_k_r38, sizeof(__pyx_k_r38), 0, 0, 1, 1}, - {&__pyx_n_s_r39, __pyx_k_r39, sizeof(__pyx_k_r39), 0, 0, 1, 1}, - {&__pyx_n_s_r4, __pyx_k_r4, sizeof(__pyx_k_r4), 0, 0, 1, 1}, - {&__pyx_n_s_r40, __pyx_k_r40, sizeof(__pyx_k_r40), 0, 0, 1, 1}, - {&__pyx_n_s_r41, __pyx_k_r41, sizeof(__pyx_k_r41), 0, 0, 1, 1}, - {&__pyx_n_s_r42, __pyx_k_r42, sizeof(__pyx_k_r42), 0, 0, 1, 1}, - {&__pyx_n_s_r43, __pyx_k_r43, sizeof(__pyx_k_r43), 0, 0, 1, 1}, - {&__pyx_n_s_r44, __pyx_k_r44, sizeof(__pyx_k_r44), 0, 0, 1, 1}, - {&__pyx_n_s_r45, __pyx_k_r45, sizeof(__pyx_k_r45), 0, 0, 1, 1}, - {&__pyx_n_s_r46, __pyx_k_r46, sizeof(__pyx_k_r46), 0, 0, 1, 1}, - {&__pyx_n_s_r47, __pyx_k_r47, sizeof(__pyx_k_r47), 0, 0, 1, 1}, - {&__pyx_n_s_r48, __pyx_k_r48, sizeof(__pyx_k_r48), 0, 0, 1, 1}, - {&__pyx_n_s_r49, __pyx_k_r49, sizeof(__pyx_k_r49), 0, 0, 1, 1}, - {&__pyx_n_s_r5, __pyx_k_r5, sizeof(__pyx_k_r5), 0, 0, 1, 1}, - {&__pyx_n_s_r50, __pyx_k_r50, sizeof(__pyx_k_r50), 0, 0, 1, 1}, - {&__pyx_n_s_r51, __pyx_k_r51, sizeof(__pyx_k_r51), 0, 0, 1, 1}, - {&__pyx_n_s_r52, __pyx_k_r52, sizeof(__pyx_k_r52), 0, 0, 1, 1}, - {&__pyx_n_s_r53, __pyx_k_r53, sizeof(__pyx_k_r53), 0, 0, 1, 1}, - {&__pyx_n_s_r54, __pyx_k_r54, sizeof(__pyx_k_r54), 0, 0, 1, 1}, - {&__pyx_n_s_r55, __pyx_k_r55, sizeof(__pyx_k_r55), 0, 0, 1, 1}, - {&__pyx_n_s_r56, __pyx_k_r56, sizeof(__pyx_k_r56), 0, 0, 1, 1}, - {&__pyx_n_s_r57, __pyx_k_r57, sizeof(__pyx_k_r57), 0, 0, 1, 1}, - {&__pyx_n_s_r58, __pyx_k_r58, sizeof(__pyx_k_r58), 0, 0, 1, 1}, - {&__pyx_n_s_r59, __pyx_k_r59, sizeof(__pyx_k_r59), 0, 0, 1, 1}, - {&__pyx_n_s_r6, __pyx_k_r6, sizeof(__pyx_k_r6), 0, 0, 1, 1}, - {&__pyx_n_s_r60, __pyx_k_r60, sizeof(__pyx_k_r60), 0, 0, 1, 1}, - {&__pyx_n_s_r61, __pyx_k_r61, sizeof(__pyx_k_r61), 0, 0, 1, 1}, - {&__pyx_n_s_r62, __pyx_k_r62, 
sizeof(__pyx_k_r62), 0, 0, 1, 1}, - {&__pyx_n_s_r63, __pyx_k_r63, sizeof(__pyx_k_r63), 0, 0, 1, 1}, - {&__pyx_n_s_r64, __pyx_k_r64, sizeof(__pyx_k_r64), 0, 0, 1, 1}, - {&__pyx_n_s_r65, __pyx_k_r65, sizeof(__pyx_k_r65), 0, 0, 1, 1}, - {&__pyx_n_s_r66, __pyx_k_r66, sizeof(__pyx_k_r66), 0, 0, 1, 1}, - {&__pyx_n_s_r67, __pyx_k_r67, sizeof(__pyx_k_r67), 0, 0, 1, 1}, - {&__pyx_n_s_r68, __pyx_k_r68, sizeof(__pyx_k_r68), 0, 0, 1, 1}, - {&__pyx_n_s_r69, __pyx_k_r69, sizeof(__pyx_k_r69), 0, 0, 1, 1}, - {&__pyx_n_s_r7, __pyx_k_r7, sizeof(__pyx_k_r7), 0, 0, 1, 1}, - {&__pyx_n_s_r70, __pyx_k_r70, sizeof(__pyx_k_r70), 0, 0, 1, 1}, - {&__pyx_n_s_r71, __pyx_k_r71, sizeof(__pyx_k_r71), 0, 0, 1, 1}, - {&__pyx_n_s_r72, __pyx_k_r72, sizeof(__pyx_k_r72), 0, 0, 1, 1}, - {&__pyx_n_s_r73, __pyx_k_r73, sizeof(__pyx_k_r73), 0, 0, 1, 1}, - {&__pyx_n_s_r74, __pyx_k_r74, sizeof(__pyx_k_r74), 0, 0, 1, 1}, - {&__pyx_n_s_r75, __pyx_k_r75, sizeof(__pyx_k_r75), 0, 0, 1, 1}, - {&__pyx_n_s_r76, __pyx_k_r76, sizeof(__pyx_k_r76), 0, 0, 1, 1}, - {&__pyx_n_s_r77, __pyx_k_r77, sizeof(__pyx_k_r77), 0, 0, 1, 1}, - {&__pyx_n_s_r78, __pyx_k_r78, sizeof(__pyx_k_r78), 0, 0, 1, 1}, - {&__pyx_n_s_r79, __pyx_k_r79, sizeof(__pyx_k_r79), 0, 0, 1, 1}, - {&__pyx_n_s_r8, __pyx_k_r8, sizeof(__pyx_k_r8), 0, 0, 1, 1}, - {&__pyx_n_s_r80, __pyx_k_r80, sizeof(__pyx_k_r80), 0, 0, 1, 1}, - {&__pyx_n_s_r81, __pyx_k_r81, sizeof(__pyx_k_r81), 0, 0, 1, 1}, - {&__pyx_n_s_r82, __pyx_k_r82, sizeof(__pyx_k_r82), 0, 0, 1, 1}, - {&__pyx_n_s_r83, __pyx_k_r83, sizeof(__pyx_k_r83), 0, 0, 1, 1}, - {&__pyx_n_s_r84, __pyx_k_r84, sizeof(__pyx_k_r84), 0, 0, 1, 1}, - {&__pyx_n_s_r85, __pyx_k_r85, sizeof(__pyx_k_r85), 0, 0, 1, 1}, - {&__pyx_n_s_r86, __pyx_k_r86, sizeof(__pyx_k_r86), 0, 0, 1, 1}, - {&__pyx_n_s_r87, __pyx_k_r87, sizeof(__pyx_k_r87), 0, 0, 1, 1}, - {&__pyx_n_s_r88, __pyx_k_r88, sizeof(__pyx_k_r88), 0, 0, 1, 1}, - {&__pyx_n_s_r89, __pyx_k_r89, sizeof(__pyx_k_r89), 0, 0, 1, 1}, - {&__pyx_n_s_r9, __pyx_k_r9, sizeof(__pyx_k_r9), 0, 0, 1, 1}, - {&__pyx_n_s_r90, __pyx_k_r90, sizeof(__pyx_k_r90), 0, 0, 1, 1}, - {&__pyx_n_s_r91, __pyx_k_r91, sizeof(__pyx_k_r91), 0, 0, 1, 1}, - {&__pyx_n_s_r92, __pyx_k_r92, sizeof(__pyx_k_r92), 0, 0, 1, 1}, - {&__pyx_n_s_r93, __pyx_k_r93, sizeof(__pyx_k_r93), 0, 0, 1, 1}, - {&__pyx_n_s_r94, __pyx_k_r94, sizeof(__pyx_k_r94), 0, 0, 1, 1}, - {&__pyx_n_s_r95, __pyx_k_r95, sizeof(__pyx_k_r95), 0, 0, 1, 1}, - {&__pyx_n_s_r96, __pyx_k_r96, sizeof(__pyx_k_r96), 0, 0, 1, 1}, - {&__pyx_n_s_r97, __pyx_k_r97, sizeof(__pyx_k_r97), 0, 0, 1, 1}, - {&__pyx_n_s_r98, __pyx_k_r98, sizeof(__pyx_k_r98), 0, 0, 1, 1}, - {&__pyx_n_s_r99, __pyx_k_r99, sizeof(__pyx_k_r99), 0, 0, 1, 1}, - {&__pyx_n_s_self, __pyx_k_self, sizeof(__pyx_k_self), 0, 0, 1, 1}, - {&__pyx_n_s_test, __pyx_k_test, sizeof(__pyx_k_test), 0, 0, 1, 1}, - {&__pyx_n_s_x, __pyx_k_x, sizeof(__pyx_k_x), 0, 0, 1, 1}, - {&__pyx_n_s_x0, __pyx_k_x0, sizeof(__pyx_k_x0), 0, 0, 1, 1}, - {&__pyx_n_s_x1, __pyx_k_x1, sizeof(__pyx_k_x1), 0, 0, 1, 1}, - {&__pyx_n_s_x2, __pyx_k_x2, sizeof(__pyx_k_x2), 0, 0, 1, 1}, - {&__pyx_n_s_x3, __pyx_k_x3, sizeof(__pyx_k_x3), 0, 0, 1, 1}, - {&__pyx_n_s_y, __pyx_k_y, sizeof(__pyx_k_y), 0, 0, 1, 1}, - {&__pyx_n_s_y0, __pyx_k_y0, sizeof(__pyx_k_y0), 0, 0, 1, 1}, - {&__pyx_n_s_y1, __pyx_k_y1, sizeof(__pyx_k_y1), 0, 0, 1, 1}, - {&__pyx_n_s_y2, __pyx_k_y2, sizeof(__pyx_k_y2), 0, 0, 1, 1}, - {&__pyx_n_s_y3, __pyx_k_y3, sizeof(__pyx_k_y3), 0, 0, 1, 1}, - {0, 0, 0, 0, 0, 0, 0} -}; -static CYTHON_SMALL_CODE int __Pyx_InitCachedBuiltins(void) { - __pyx_builtin_AttributeError = 
__Pyx_GetBuiltinName(__pyx_n_s_AttributeError); if (!__pyx_builtin_AttributeError) __PYX_ERR(0, 7, __pyx_L1_error) - __pyx_builtin_ImportError = __Pyx_GetBuiltinName(__pyx_n_s_ImportError); if (!__pyx_builtin_ImportError) __PYX_ERR(0, 7, __pyx_L1_error) - return 0; - __pyx_L1_error:; - return -1; -} - -static CYTHON_SMALL_CODE int __Pyx_InitCachedConstants(void) { - __Pyx_RefNannyDeclarations - __Pyx_RefNannySetupContext("__Pyx_InitCachedConstants", 0); - - /* "fontTools/pens/momentsPen.py":18 - * - * class MomentsPen(BasePen): - * def __init__(self, glyphset=None): # <<<<<<<<<<<<<< - * BasePen.__init__(self, glyphset) - * - */ - __pyx_tuple_ = PyTuple_Pack(2, __pyx_n_s_self, __pyx_n_s_glyphset); if (unlikely(!__pyx_tuple_)) __PYX_ERR(0, 18, __pyx_L1_error) - __Pyx_GOTREF(__pyx_tuple_); - __Pyx_GIVEREF(__pyx_tuple_); - __pyx_codeobj__2 = (PyObject*)__Pyx_PyCode_New(2, 0, 2, 0, CO_OPTIMIZED|CO_NEWLOCALS, __pyx_empty_bytes, __pyx_empty_tuple, __pyx_empty_tuple, __pyx_tuple_, __pyx_empty_tuple, __pyx_empty_tuple, __pyx_kp_s_Lib_fontTools_pens_momentsPen_py, __pyx_n_s_init, 18, __pyx_empty_bytes); if (unlikely(!__pyx_codeobj__2)) __PYX_ERR(0, 18, __pyx_L1_error) - __pyx_tuple__3 = PyTuple_Pack(1, ((PyObject *)Py_None)); if (unlikely(!__pyx_tuple__3)) __PYX_ERR(0, 18, __pyx_L1_error) - __Pyx_GOTREF(__pyx_tuple__3); - __Pyx_GIVEREF(__pyx_tuple__3); - - /* "fontTools/pens/momentsPen.py":28 - * self.momentYY = 0 - * - * def _moveTo(self, p0): # <<<<<<<<<<<<<< - * self.__startPoint = p0 - * - */ - __pyx_tuple__4 = PyTuple_Pack(2, __pyx_n_s_self, __pyx_n_s_p0); if (unlikely(!__pyx_tuple__4)) __PYX_ERR(0, 28, __pyx_L1_error) - __Pyx_GOTREF(__pyx_tuple__4); - __Pyx_GIVEREF(__pyx_tuple__4); - __pyx_codeobj__5 = (PyObject*)__Pyx_PyCode_New(2, 0, 2, 0, CO_OPTIMIZED|CO_NEWLOCALS, __pyx_empty_bytes, __pyx_empty_tuple, __pyx_empty_tuple, __pyx_tuple__4, __pyx_empty_tuple, __pyx_empty_tuple, __pyx_kp_s_Lib_fontTools_pens_momentsPen_py, __pyx_n_s_moveTo, 28, __pyx_empty_bytes); if (unlikely(!__pyx_codeobj__5)) __PYX_ERR(0, 28, __pyx_L1_error) - - /* "fontTools/pens/momentsPen.py":31 - * self.__startPoint = p0 - * - * def _closePath(self): # <<<<<<<<<<<<<< - * p0 = self._getCurrentPoint() - * if p0 != self.__startPoint: - */ - __pyx_tuple__6 = PyTuple_Pack(2, __pyx_n_s_self, __pyx_n_s_p0); if (unlikely(!__pyx_tuple__6)) __PYX_ERR(0, 31, __pyx_L1_error) - __Pyx_GOTREF(__pyx_tuple__6); - __Pyx_GIVEREF(__pyx_tuple__6); - __pyx_codeobj__7 = (PyObject*)__Pyx_PyCode_New(1, 0, 2, 0, CO_OPTIMIZED|CO_NEWLOCALS, __pyx_empty_bytes, __pyx_empty_tuple, __pyx_empty_tuple, __pyx_tuple__6, __pyx_empty_tuple, __pyx_empty_tuple, __pyx_kp_s_Lib_fontTools_pens_momentsPen_py, __pyx_n_s_closePath, 31, __pyx_empty_bytes); if (unlikely(!__pyx_codeobj__7)) __PYX_ERR(0, 31, __pyx_L1_error) - - /* "fontTools/pens/momentsPen.py":36 - * self._lineTo(self.__startPoint) - * - * def _endPath(self): # <<<<<<<<<<<<<< - * p0 = self._getCurrentPoint() - * if p0 != self.__startPoint: - */ - __pyx_tuple__8 = PyTuple_Pack(2, __pyx_n_s_self, __pyx_n_s_p0); if (unlikely(!__pyx_tuple__8)) __PYX_ERR(0, 36, __pyx_L1_error) - __Pyx_GOTREF(__pyx_tuple__8); - __Pyx_GIVEREF(__pyx_tuple__8); - __pyx_codeobj__9 = (PyObject*)__Pyx_PyCode_New(1, 0, 2, 0, CO_OPTIMIZED|CO_NEWLOCALS, __pyx_empty_bytes, __pyx_empty_tuple, __pyx_empty_tuple, __pyx_tuple__8, __pyx_empty_tuple, __pyx_empty_tuple, __pyx_kp_s_Lib_fontTools_pens_momentsPen_py, __pyx_n_s_endPath, 36, __pyx_empty_bytes); if (unlikely(!__pyx_codeobj__9)) __PYX_ERR(0, 36, __pyx_L1_error) - - /* 
"fontTools/pens/momentsPen.py":57 - * @cython.locals(x0=cython.double, y0=cython.double) - * @cython.locals(x1=cython.double, y1=cython.double) - * def _lineTo(self, p1): # <<<<<<<<<<<<<< - * x0, y0 = self._getCurrentPoint() - * x1, y1 = p1 - */ - __pyx_tuple__10 = PyTuple_Pack(19, __pyx_n_s_self, __pyx_n_s_p1, __pyx_n_s_x1, __pyx_n_s_y1, __pyx_n_s_x0, __pyx_n_s_y0, __pyx_n_s_r12, __pyx_n_s_r11, __pyx_n_s_r10, __pyx_n_s_r9, __pyx_n_s_r8, __pyx_n_s_r7, __pyx_n_s_r6, __pyx_n_s_r5, __pyx_n_s_r4, __pyx_n_s_r3, __pyx_n_s_r2, __pyx_n_s_r1, __pyx_n_s_r0); if (unlikely(!__pyx_tuple__10)) __PYX_ERR(0, 57, __pyx_L1_error) - __Pyx_GOTREF(__pyx_tuple__10); - __Pyx_GIVEREF(__pyx_tuple__10); - __pyx_codeobj__11 = (PyObject*)__Pyx_PyCode_New(2, 0, 19, 0, CO_OPTIMIZED|CO_NEWLOCALS, __pyx_empty_bytes, __pyx_empty_tuple, __pyx_empty_tuple, __pyx_tuple__10, __pyx_empty_tuple, __pyx_empty_tuple, __pyx_kp_s_Lib_fontTools_pens_momentsPen_py, __pyx_n_s_lineTo, 57, __pyx_empty_bytes); if (unlikely(!__pyx_codeobj__11)) __PYX_ERR(0, 57, __pyx_L1_error) - - /* "fontTools/pens/momentsPen.py":159 - * @cython.locals(x1=cython.double, y1=cython.double) - * @cython.locals(x2=cython.double, y2=cython.double) - * def _qCurveToOne(self, p1, p2): # <<<<<<<<<<<<<< - * x0, y0 = self._getCurrentPoint() - * x1, y1 = p1 - */ - __pyx_tuple__12 = PyTuple_Pack(63, __pyx_n_s_self, __pyx_n_s_p1, __pyx_n_s_p2, __pyx_n_s_x2, __pyx_n_s_y2, __pyx_n_s_x1, __pyx_n_s_y1, __pyx_n_s_x0, __pyx_n_s_y0, __pyx_n_s_r53, __pyx_n_s_r52, __pyx_n_s_r51, __pyx_n_s_r50, __pyx_n_s_r49, __pyx_n_s_r48, __pyx_n_s_r47, __pyx_n_s_r46, __pyx_n_s_r45, __pyx_n_s_r44, __pyx_n_s_r43, __pyx_n_s_r42, __pyx_n_s_r41, __pyx_n_s_r40, __pyx_n_s_r39, __pyx_n_s_r38, __pyx_n_s_r37, __pyx_n_s_r36, __pyx_n_s_r35, __pyx_n_s_r34, __pyx_n_s_r33, __pyx_n_s_r32, __pyx_n_s_r31, __pyx_n_s_r30, __pyx_n_s_r29, __pyx_n_s_r28, __pyx_n_s_r27, __pyx_n_s_r26, __pyx_n_s_r25, __pyx_n_s_r24, __pyx_n_s_r23, __pyx_n_s_r22, __pyx_n_s_r21, __pyx_n_s_r20, __pyx_n_s_r19, __pyx_n_s_r18, __pyx_n_s_r17, __pyx_n_s_r16, __pyx_n_s_r15, __pyx_n_s_r14, __pyx_n_s_r13, __pyx_n_s_r12, __pyx_n_s_r11, __pyx_n_s_r10, __pyx_n_s_r9, __pyx_n_s_r8, __pyx_n_s_r7, __pyx_n_s_r6, __pyx_n_s_r5, __pyx_n_s_r4, __pyx_n_s_r3, __pyx_n_s_r2, __pyx_n_s_r1, __pyx_n_s_r0); if (unlikely(!__pyx_tuple__12)) __PYX_ERR(0, 159, __pyx_L1_error) - __Pyx_GOTREF(__pyx_tuple__12); - __Pyx_GIVEREF(__pyx_tuple__12); - __pyx_codeobj__13 = (PyObject*)__Pyx_PyCode_New(3, 0, 63, 0, CO_OPTIMIZED|CO_NEWLOCALS, __pyx_empty_bytes, __pyx_empty_tuple, __pyx_empty_tuple, __pyx_tuple__12, __pyx_empty_tuple, __pyx_empty_tuple, __pyx_kp_s_Lib_fontTools_pens_momentsPen_py, __pyx_n_s_qCurveToOne, 159, __pyx_empty_bytes); if (unlikely(!__pyx_codeobj__13)) __PYX_ERR(0, 159, __pyx_L1_error) - - /* "fontTools/pens/momentsPen.py":450 - * @cython.locals(x2=cython.double, y2=cython.double) - * @cython.locals(x3=cython.double, y3=cython.double) - * def _curveToOne(self, p1, p2, p3): # <<<<<<<<<<<<<< - * x0, y0 = self._getCurrentPoint() - * x1, y1 = p1 - */ - __pyx_tuple__14 = PyTuple_Pack(145, __pyx_n_s_self, __pyx_n_s_p1, __pyx_n_s_p2, __pyx_n_s_p3, __pyx_n_s_x3, __pyx_n_s_y3, __pyx_n_s_x2, __pyx_n_s_y2, __pyx_n_s_x1, __pyx_n_s_y1, __pyx_n_s_x0, __pyx_n_s_y0, __pyx_n_s_r132, __pyx_n_s_r131, __pyx_n_s_r130, __pyx_n_s_r129, __pyx_n_s_r128, __pyx_n_s_r127, __pyx_n_s_r126, __pyx_n_s_r125, __pyx_n_s_r124, __pyx_n_s_r123, __pyx_n_s_r122, __pyx_n_s_r121, __pyx_n_s_r120, __pyx_n_s_r119, __pyx_n_s_r118, __pyx_n_s_r117, __pyx_n_s_r116, __pyx_n_s_r115, __pyx_n_s_r114, 
__pyx_n_s_r113, __pyx_n_s_r112, __pyx_n_s_r111, __pyx_n_s_r110, __pyx_n_s_r109, __pyx_n_s_r108, __pyx_n_s_r107, __pyx_n_s_r106, __pyx_n_s_r105, __pyx_n_s_r104, __pyx_n_s_r103, __pyx_n_s_r102, __pyx_n_s_r101, __pyx_n_s_r100, __pyx_n_s_r99, __pyx_n_s_r98, __pyx_n_s_r97, __pyx_n_s_r96, __pyx_n_s_r95, __pyx_n_s_r94, __pyx_n_s_r93, __pyx_n_s_r92, __pyx_n_s_r91, __pyx_n_s_r90, __pyx_n_s_r89, __pyx_n_s_r88, __pyx_n_s_r87, __pyx_n_s_r86, __pyx_n_s_r85, __pyx_n_s_r84, __pyx_n_s_r83, __pyx_n_s_r82, __pyx_n_s_r81, __pyx_n_s_r80, __pyx_n_s_r79, __pyx_n_s_r78, __pyx_n_s_r77, __pyx_n_s_r76, __pyx_n_s_r75, __pyx_n_s_r74, __pyx_n_s_r73, __pyx_n_s_r72, __pyx_n_s_r71, __pyx_n_s_r70, __pyx_n_s_r69, __pyx_n_s_r68, __pyx_n_s_r67, __pyx_n_s_r66, __pyx_n_s_r65, __pyx_n_s_r64, __pyx_n_s_r63, __pyx_n_s_r62, __pyx_n_s_r61, __pyx_n_s_r60, __pyx_n_s_r59, __pyx_n_s_r58, __pyx_n_s_r57, __pyx_n_s_r56, __pyx_n_s_r55, __pyx_n_s_r54, __pyx_n_s_r53, __pyx_n_s_r52, __pyx_n_s_r51, __pyx_n_s_r50, __pyx_n_s_r49, __pyx_n_s_r48, __pyx_n_s_r47, __pyx_n_s_r46, __pyx_n_s_r45, __pyx_n_s_r44, __pyx_n_s_r43, __pyx_n_s_r42, __pyx_n_s_r41, __pyx_n_s_r40, __pyx_n_s_r39, __pyx_n_s_r38, __pyx_n_s_r37, __pyx_n_s_r36, __pyx_n_s_r35, __pyx_n_s_r34, __pyx_n_s_r33, __pyx_n_s_r32, __pyx_n_s_r31, __pyx_n_s_r30, __pyx_n_s_r29, __pyx_n_s_r28, __pyx_n_s_r27, __pyx_n_s_r26, __pyx_n_s_r25, __pyx_n_s_r24, __pyx_n_s_r23, __pyx_n_s_r22, __pyx_n_s_r21, __pyx_n_s_r20, __pyx_n_s_r19, __pyx_n_s_r18, __pyx_n_s_r17, __pyx_n_s_r16, __pyx_n_s_r15, __pyx_n_s_r14, __pyx_n_s_r13, __pyx_n_s_r12, __pyx_n_s_r11, __pyx_n_s_r10, __pyx_n_s_r9, __pyx_n_s_r8, __pyx_n_s_r7, __pyx_n_s_r6, __pyx_n_s_r5, __pyx_n_s_r4, __pyx_n_s_r3, __pyx_n_s_r2, __pyx_n_s_r1, __pyx_n_s_r0); if (unlikely(!__pyx_tuple__14)) __PYX_ERR(0, 450, __pyx_L1_error) - __Pyx_GOTREF(__pyx_tuple__14); - __Pyx_GIVEREF(__pyx_tuple__14); - __pyx_codeobj__15 = (PyObject*)__Pyx_PyCode_New(4, 0, 145, 0, CO_OPTIMIZED|CO_NEWLOCALS, __pyx_empty_bytes, __pyx_empty_tuple, __pyx_empty_tuple, __pyx_tuple__14, __pyx_empty_tuple, __pyx_empty_tuple, __pyx_kp_s_Lib_fontTools_pens_momentsPen_py, __pyx_n_s_curveToOne, 450, __pyx_empty_bytes); if (unlikely(!__pyx_codeobj__15)) __PYX_ERR(0, 450, __pyx_L1_error) - - /* "fontTools/pens/momentsPen.py":875 - * "MomentsPen", - * [ - * ("area", 1), # <<<<<<<<<<<<<< - * ("momentX", x), - * ("momentY", y), - */ - __pyx_tuple__16 = PyTuple_Pack(2, __pyx_n_u_area, __pyx_int_1); if (unlikely(!__pyx_tuple__16)) __PYX_ERR(0, 875, __pyx_L1_error) - __Pyx_GOTREF(__pyx_tuple__16); - __Pyx_GIVEREF(__pyx_tuple__16); - __Pyx_RefNannyFinishContext(); - return 0; - __pyx_L1_error:; - __Pyx_RefNannyFinishContext(); - return -1; -} - -static CYTHON_SMALL_CODE int __Pyx_InitGlobals(void) { - if (__Pyx_InitStrings(__pyx_string_tab) < 0) __PYX_ERR(0, 1, __pyx_L1_error) - __pyx_int_0 = PyInt_FromLong(0); if (unlikely(!__pyx_int_0)) __PYX_ERR(0, 1, __pyx_L1_error) - __pyx_int_1 = PyInt_FromLong(1); if (unlikely(!__pyx_int_1)) __PYX_ERR(0, 1, __pyx_L1_error) - __pyx_int_2 = PyInt_FromLong(2); if (unlikely(!__pyx_int_2)) __PYX_ERR(0, 1, __pyx_L1_error) - return 0; - __pyx_L1_error:; - return -1; -} - -static CYTHON_SMALL_CODE int __Pyx_modinit_global_init_code(void); /*proto*/ -static CYTHON_SMALL_CODE int __Pyx_modinit_variable_export_code(void); /*proto*/ -static CYTHON_SMALL_CODE int __Pyx_modinit_function_export_code(void); /*proto*/ -static CYTHON_SMALL_CODE int __Pyx_modinit_type_init_code(void); /*proto*/ -static CYTHON_SMALL_CODE int __Pyx_modinit_type_import_code(void); /*proto*/ -static 
CYTHON_SMALL_CODE int __Pyx_modinit_variable_import_code(void); /*proto*/ -static CYTHON_SMALL_CODE int __Pyx_modinit_function_import_code(void); /*proto*/ - -static int __Pyx_modinit_global_init_code(void) { - __Pyx_RefNannyDeclarations - __Pyx_RefNannySetupContext("__Pyx_modinit_global_init_code", 0); - /*--- Global init code ---*/ - __Pyx_RefNannyFinishContext(); - return 0; -} - -static int __Pyx_modinit_variable_export_code(void) { - __Pyx_RefNannyDeclarations - __Pyx_RefNannySetupContext("__Pyx_modinit_variable_export_code", 0); - /*--- Variable export code ---*/ - __Pyx_RefNannyFinishContext(); - return 0; -} - -static int __Pyx_modinit_function_export_code(void) { - __Pyx_RefNannyDeclarations - __Pyx_RefNannySetupContext("__Pyx_modinit_function_export_code", 0); - /*--- Function export code ---*/ - __Pyx_RefNannyFinishContext(); - return 0; -} - -static int __Pyx_modinit_type_init_code(void) { - __Pyx_RefNannyDeclarations - __Pyx_RefNannySetupContext("__Pyx_modinit_type_init_code", 0); - /*--- Type init code ---*/ - __Pyx_RefNannyFinishContext(); - return 0; -} - -static int __Pyx_modinit_type_import_code(void) { - __Pyx_RefNannyDeclarations - __Pyx_RefNannySetupContext("__Pyx_modinit_type_import_code", 0); - /*--- Type import code ---*/ - __Pyx_RefNannyFinishContext(); - return 0; -} - -static int __Pyx_modinit_variable_import_code(void) { - __Pyx_RefNannyDeclarations - __Pyx_RefNannySetupContext("__Pyx_modinit_variable_import_code", 0); - /*--- Variable import code ---*/ - __Pyx_RefNannyFinishContext(); - return 0; -} - -static int __Pyx_modinit_function_import_code(void) { - __Pyx_RefNannyDeclarations - __Pyx_RefNannySetupContext("__Pyx_modinit_function_import_code", 0); - /*--- Function import code ---*/ - __Pyx_RefNannyFinishContext(); - return 0; -} - - -#ifndef CYTHON_NO_PYINIT_EXPORT -#define __Pyx_PyMODINIT_FUNC PyMODINIT_FUNC -#elif PY_MAJOR_VERSION < 3 -#ifdef __cplusplus -#define __Pyx_PyMODINIT_FUNC extern "C" void -#else -#define __Pyx_PyMODINIT_FUNC void -#endif -#else -#ifdef __cplusplus -#define __Pyx_PyMODINIT_FUNC extern "C" PyObject * -#else -#define __Pyx_PyMODINIT_FUNC PyObject * -#endif -#endif - - -#if PY_MAJOR_VERSION < 3 -__Pyx_PyMODINIT_FUNC initmomentsPen(void) CYTHON_SMALL_CODE; /*proto*/ -__Pyx_PyMODINIT_FUNC initmomentsPen(void) -#else -__Pyx_PyMODINIT_FUNC PyInit_momentsPen(void) CYTHON_SMALL_CODE; /*proto*/ -__Pyx_PyMODINIT_FUNC PyInit_momentsPen(void) -#if CYTHON_PEP489_MULTI_PHASE_INIT -{ - return PyModuleDef_Init(&__pyx_moduledef); -} -static CYTHON_SMALL_CODE int __Pyx_check_single_interpreter(void) { - #if PY_VERSION_HEX >= 0x030700A1 - static PY_INT64_T main_interpreter_id = -1; - PY_INT64_T current_id = PyInterpreterState_GetID(PyThreadState_Get()->interp); - if (main_interpreter_id == -1) { - main_interpreter_id = current_id; - return (unlikely(current_id == -1)) ? 
-1 : 0; - } else if (unlikely(main_interpreter_id != current_id)) - #else - static PyInterpreterState *main_interpreter = NULL; - PyInterpreterState *current_interpreter = PyThreadState_Get()->interp; - if (!main_interpreter) { - main_interpreter = current_interpreter; - } else if (unlikely(main_interpreter != current_interpreter)) - #endif - { - PyErr_SetString( - PyExc_ImportError, - "Interpreter change detected - this module can only be loaded into one interpreter per process."); - return -1; - } - return 0; -} -static CYTHON_SMALL_CODE int __Pyx_copy_spec_to_module(PyObject *spec, PyObject *moddict, const char* from_name, const char* to_name, int allow_none) { - PyObject *value = PyObject_GetAttrString(spec, from_name); - int result = 0; - if (likely(value)) { - if (allow_none || value != Py_None) { - result = PyDict_SetItemString(moddict, to_name, value); - } - Py_DECREF(value); - } else if (PyErr_ExceptionMatches(PyExc_AttributeError)) { - PyErr_Clear(); - } else { - result = -1; - } - return result; -} -static CYTHON_SMALL_CODE PyObject* __pyx_pymod_create(PyObject *spec, CYTHON_UNUSED PyModuleDef *def) { - PyObject *module = NULL, *moddict, *modname; - if (__Pyx_check_single_interpreter()) - return NULL; - if (__pyx_m) - return __Pyx_NewRef(__pyx_m); - modname = PyObject_GetAttrString(spec, "name"); - if (unlikely(!modname)) goto bad; - module = PyModule_NewObject(modname); - Py_DECREF(modname); - if (unlikely(!module)) goto bad; - moddict = PyModule_GetDict(module); - if (unlikely(!moddict)) goto bad; - if (unlikely(__Pyx_copy_spec_to_module(spec, moddict, "loader", "__loader__", 1) < 0)) goto bad; - if (unlikely(__Pyx_copy_spec_to_module(spec, moddict, "origin", "__file__", 1) < 0)) goto bad; - if (unlikely(__Pyx_copy_spec_to_module(spec, moddict, "parent", "__package__", 1) < 0)) goto bad; - if (unlikely(__Pyx_copy_spec_to_module(spec, moddict, "submodule_search_locations", "__path__", 0) < 0)) goto bad; - return module; -bad: - Py_XDECREF(module); - return NULL; -} - - -static CYTHON_SMALL_CODE int __pyx_pymod_exec_momentsPen(PyObject *__pyx_pyinit_module) -#endif -#endif -{ - PyObject *__pyx_t_1 = NULL; - PyObject *__pyx_t_2 = NULL; - PyObject *__pyx_t_3 = NULL; - PyObject *__pyx_t_4 = NULL; - PyObject *__pyx_t_5 = NULL; - int __pyx_t_6; - PyObject *__pyx_t_7 = NULL; - PyObject *__pyx_t_8 = NULL; - PyObject *__pyx_t_9 = NULL; - int __pyx_t_10; - PyObject *__pyx_t_11 = NULL; - PyObject *__pyx_t_12 = NULL; - int __pyx_lineno = 0; - const char *__pyx_filename = NULL; - int __pyx_clineno = 0; - __Pyx_RefNannyDeclarations - #if CYTHON_PEP489_MULTI_PHASE_INIT - if (__pyx_m) { - if (__pyx_m == __pyx_pyinit_module) return 0; - PyErr_SetString(PyExc_RuntimeError, "Module 'momentsPen' has already been imported. 
Re-initialisation is not supported."); - return -1; - } - #elif PY_MAJOR_VERSION >= 3 - if (__pyx_m) return __Pyx_NewRef(__pyx_m); - #endif - #if CYTHON_REFNANNY -__Pyx_RefNanny = __Pyx_RefNannyImportAPI("refnanny"); -if (!__Pyx_RefNanny) { - PyErr_Clear(); - __Pyx_RefNanny = __Pyx_RefNannyImportAPI("Cython.Runtime.refnanny"); - if (!__Pyx_RefNanny) - Py_FatalError("failed to import 'refnanny' module"); -} -#endif - __Pyx_RefNannySetupContext("__Pyx_PyMODINIT_FUNC PyInit_momentsPen(void)", 0); - if (__Pyx_check_binary_version() < 0) __PYX_ERR(0, 1, __pyx_L1_error) - #ifdef __Pxy_PyFrame_Initialize_Offsets - __Pxy_PyFrame_Initialize_Offsets(); - #endif - __pyx_empty_tuple = PyTuple_New(0); if (unlikely(!__pyx_empty_tuple)) __PYX_ERR(0, 1, __pyx_L1_error) - __pyx_empty_bytes = PyBytes_FromStringAndSize("", 0); if (unlikely(!__pyx_empty_bytes)) __PYX_ERR(0, 1, __pyx_L1_error) - __pyx_empty_unicode = PyUnicode_FromStringAndSize("", 0); if (unlikely(!__pyx_empty_unicode)) __PYX_ERR(0, 1, __pyx_L1_error) - #ifdef __Pyx_CyFunction_USED - if (__pyx_CyFunction_init() < 0) __PYX_ERR(0, 1, __pyx_L1_error) - #endif - #ifdef __Pyx_FusedFunction_USED - if (__pyx_FusedFunction_init() < 0) __PYX_ERR(0, 1, __pyx_L1_error) - #endif - #ifdef __Pyx_Coroutine_USED - if (__pyx_Coroutine_init() < 0) __PYX_ERR(0, 1, __pyx_L1_error) - #endif - #ifdef __Pyx_Generator_USED - if (__pyx_Generator_init() < 0) __PYX_ERR(0, 1, __pyx_L1_error) - #endif - #ifdef __Pyx_AsyncGen_USED - if (__pyx_AsyncGen_init() < 0) __PYX_ERR(0, 1, __pyx_L1_error) - #endif - #ifdef __Pyx_StopAsyncIteration_USED - if (__pyx_StopAsyncIteration_init() < 0) __PYX_ERR(0, 1, __pyx_L1_error) - #endif - /*--- Library function declarations ---*/ - /*--- Threads initialization code ---*/ - #if defined(WITH_THREAD) && PY_VERSION_HEX < 0x030700F0 && defined(__PYX_FORCE_INIT_THREADS) && __PYX_FORCE_INIT_THREADS - PyEval_InitThreads(); - #endif - /*--- Module creation code ---*/ - #if CYTHON_PEP489_MULTI_PHASE_INIT - __pyx_m = __pyx_pyinit_module; - Py_INCREF(__pyx_m); - #else - #if PY_MAJOR_VERSION < 3 - __pyx_m = Py_InitModule4("momentsPen", __pyx_methods, 0, 0, PYTHON_API_VERSION); Py_XINCREF(__pyx_m); - #else - __pyx_m = PyModule_Create(&__pyx_moduledef); - #endif - if (unlikely(!__pyx_m)) __PYX_ERR(0, 1, __pyx_L1_error) - #endif - __pyx_d = PyModule_GetDict(__pyx_m); if (unlikely(!__pyx_d)) __PYX_ERR(0, 1, __pyx_L1_error) - Py_INCREF(__pyx_d); - __pyx_b = PyImport_AddModule(__Pyx_BUILTIN_MODULE_NAME); if (unlikely(!__pyx_b)) __PYX_ERR(0, 1, __pyx_L1_error) - Py_INCREF(__pyx_b); - __pyx_cython_runtime = PyImport_AddModule((char *) "cython_runtime"); if (unlikely(!__pyx_cython_runtime)) __PYX_ERR(0, 1, __pyx_L1_error) - Py_INCREF(__pyx_cython_runtime); - if (PyObject_SetAttrString(__pyx_m, "__builtins__", __pyx_b) < 0) __PYX_ERR(0, 1, __pyx_L1_error) - /*--- Initialize various global constants etc. 
---*/ - if (__Pyx_InitGlobals() < 0) __PYX_ERR(0, 1, __pyx_L1_error) - #if PY_MAJOR_VERSION < 3 && (__PYX_DEFAULT_STRING_ENCODING_IS_ASCII || __PYX_DEFAULT_STRING_ENCODING_IS_DEFAULT) - if (__Pyx_init_sys_getdefaultencoding_params() < 0) __PYX_ERR(0, 1, __pyx_L1_error) - #endif - if (__pyx_module_is_main_fontTools__pens__momentsPen) { - if (PyObject_SetAttr(__pyx_m, __pyx_n_s_name, __pyx_n_s_main) < 0) __PYX_ERR(0, 1, __pyx_L1_error) - } - #if PY_MAJOR_VERSION >= 3 - { - PyObject *modules = PyImport_GetModuleDict(); if (unlikely(!modules)) __PYX_ERR(0, 1, __pyx_L1_error) - if (!PyDict_GetItemString(modules, "fontTools.pens.momentsPen")) { - if (unlikely(PyDict_SetItemString(modules, "fontTools.pens.momentsPen", __pyx_m) < 0)) __PYX_ERR(0, 1, __pyx_L1_error) - } - } - #endif - /*--- Builtin init code ---*/ - if (__Pyx_InitCachedBuiltins() < 0) __PYX_ERR(0, 1, __pyx_L1_error) - /*--- Constants init code ---*/ - if (__Pyx_InitCachedConstants() < 0) __PYX_ERR(0, 1, __pyx_L1_error) - /*--- Global type/function init code ---*/ - (void)__Pyx_modinit_global_init_code(); - (void)__Pyx_modinit_variable_export_code(); - (void)__Pyx_modinit_function_export_code(); - (void)__Pyx_modinit_type_init_code(); - (void)__Pyx_modinit_type_import_code(); - (void)__Pyx_modinit_variable_import_code(); - (void)__Pyx_modinit_function_import_code(); - /*--- Execution code ---*/ - #if defined(__Pyx_Generator_USED) || defined(__Pyx_Coroutine_USED) - if (__Pyx_patch_abc() < 0) __PYX_ERR(0, 1, __pyx_L1_error) - #endif - - /* "fontTools/pens/momentsPen.py":1 - * from fontTools.pens.basePen import BasePen, OpenContourError # <<<<<<<<<<<<<< - * - * try: - */ - __pyx_t_1 = PyList_New(2); if (unlikely(!__pyx_t_1)) __PYX_ERR(0, 1, __pyx_L1_error) - __Pyx_GOTREF(__pyx_t_1); - __Pyx_INCREF(__pyx_n_s_BasePen); - __Pyx_GIVEREF(__pyx_n_s_BasePen); - PyList_SET_ITEM(__pyx_t_1, 0, __pyx_n_s_BasePen); - __Pyx_INCREF(__pyx_n_s_OpenContourError); - __Pyx_GIVEREF(__pyx_n_s_OpenContourError); - PyList_SET_ITEM(__pyx_t_1, 1, __pyx_n_s_OpenContourError); - __pyx_t_2 = __Pyx_Import(__pyx_n_s_fontTools_pens_basePen, __pyx_t_1, 0); if (unlikely(!__pyx_t_2)) __PYX_ERR(0, 1, __pyx_L1_error) - __Pyx_GOTREF(__pyx_t_2); - __Pyx_DECREF(__pyx_t_1); __pyx_t_1 = 0; - __pyx_t_1 = __Pyx_ImportFrom(__pyx_t_2, __pyx_n_s_BasePen); if (unlikely(!__pyx_t_1)) __PYX_ERR(0, 1, __pyx_L1_error) - __Pyx_GOTREF(__pyx_t_1); - if (PyDict_SetItem(__pyx_d, __pyx_n_s_BasePen, __pyx_t_1) < 0) __PYX_ERR(0, 1, __pyx_L1_error) - __Pyx_DECREF(__pyx_t_1); __pyx_t_1 = 0; - __pyx_t_1 = __Pyx_ImportFrom(__pyx_t_2, __pyx_n_s_OpenContourError); if (unlikely(!__pyx_t_1)) __PYX_ERR(0, 1, __pyx_L1_error) - __Pyx_GOTREF(__pyx_t_1); - if (PyDict_SetItem(__pyx_d, __pyx_n_s_OpenContourError, __pyx_t_1) < 0) __PYX_ERR(0, 1, __pyx_L1_error) - __Pyx_DECREF(__pyx_t_1); __pyx_t_1 = 0; - __Pyx_DECREF(__pyx_t_2); __pyx_t_2 = 0; - - /* "fontTools/pens/momentsPen.py":3 - * from fontTools.pens.basePen import BasePen, OpenContourError - * - * try: # <<<<<<<<<<<<<< - * import cython - * - */ - { - __Pyx_PyThreadState_declare - __Pyx_PyThreadState_assign - __Pyx_ExceptionSave(&__pyx_t_3, &__pyx_t_4, &__pyx_t_5); - __Pyx_XGOTREF(__pyx_t_3); - __Pyx_XGOTREF(__pyx_t_4); - __Pyx_XGOTREF(__pyx_t_5); - /*try:*/ { - - /* "fontTools/pens/momentsPen.py":6 - * import cython - * - * COMPILED = cython.compiled # <<<<<<<<<<<<<< - * except (AttributeError, ImportError): - * # if cython not installed, use mock module with no-op decorators and types - */ - if (PyDict_SetItem(__pyx_d, __pyx_n_s_COMPILED, Py_True) < 
0) __PYX_ERR(0, 6, __pyx_L2_error) - - /* "fontTools/pens/momentsPen.py":3 - * from fontTools.pens.basePen import BasePen, OpenContourError - * - * try: # <<<<<<<<<<<<<< - * import cython - * - */ - } - __Pyx_XDECREF(__pyx_t_3); __pyx_t_3 = 0; - __Pyx_XDECREF(__pyx_t_4); __pyx_t_4 = 0; - __Pyx_XDECREF(__pyx_t_5); __pyx_t_5 = 0; - goto __pyx_L7_try_end; - __pyx_L2_error:; - __Pyx_XDECREF(__pyx_t_1); __pyx_t_1 = 0; - __Pyx_XDECREF(__pyx_t_2); __pyx_t_2 = 0; - - /* "fontTools/pens/momentsPen.py":7 - * - * COMPILED = cython.compiled - * except (AttributeError, ImportError): # <<<<<<<<<<<<<< - * # if cython not installed, use mock module with no-op decorators and types - * from fontTools.misc import cython - */ - __pyx_t_6 = __Pyx_PyErr_ExceptionMatches(__pyx_builtin_AttributeError) || __Pyx_PyErr_ExceptionMatches(__pyx_builtin_ImportError); - if (__pyx_t_6) { - __Pyx_AddTraceback("fontTools.pens.momentsPen", __pyx_clineno, __pyx_lineno, __pyx_filename); - if (__Pyx_GetException(&__pyx_t_2, &__pyx_t_1, &__pyx_t_7) < 0) __PYX_ERR(0, 7, __pyx_L4_except_error) - __Pyx_GOTREF(__pyx_t_2); - __Pyx_GOTREF(__pyx_t_1); - __Pyx_GOTREF(__pyx_t_7); - - /* "fontTools/pens/momentsPen.py":9 - * except (AttributeError, ImportError): - * # if cython not installed, use mock module with no-op decorators and types - * from fontTools.misc import cython # <<<<<<<<<<<<<< - * - * COMPILED = False - */ - __pyx_t_8 = PyList_New(1); if (unlikely(!__pyx_t_8)) __PYX_ERR(0, 9, __pyx_L4_except_error) - __Pyx_GOTREF(__pyx_t_8); - __Pyx_INCREF(__pyx_n_s_cython); - __Pyx_GIVEREF(__pyx_n_s_cython); - PyList_SET_ITEM(__pyx_t_8, 0, __pyx_n_s_cython); - __pyx_t_9 = __Pyx_Import(__pyx_n_s_fontTools_misc, __pyx_t_8, 0); if (unlikely(!__pyx_t_9)) __PYX_ERR(0, 9, __pyx_L4_except_error) - __Pyx_GOTREF(__pyx_t_9); - __Pyx_DECREF(__pyx_t_8); __pyx_t_8 = 0; - __pyx_t_8 = __Pyx_ImportFrom(__pyx_t_9, __pyx_n_s_cython); if (unlikely(!__pyx_t_8)) __PYX_ERR(0, 9, __pyx_L4_except_error) - __Pyx_GOTREF(__pyx_t_8); - if (PyDict_SetItem(__pyx_d, __pyx_n_s_cython, __pyx_t_8) < 0) __PYX_ERR(0, 9, __pyx_L4_except_error) - __Pyx_DECREF(__pyx_t_8); __pyx_t_8 = 0; - __Pyx_DECREF(__pyx_t_9); __pyx_t_9 = 0; - - /* "fontTools/pens/momentsPen.py":11 - * from fontTools.misc import cython - * - * COMPILED = False # <<<<<<<<<<<<<< - * - * - */ - if (PyDict_SetItem(__pyx_d, __pyx_n_s_COMPILED, Py_False) < 0) __PYX_ERR(0, 11, __pyx_L4_except_error) - __Pyx_XDECREF(__pyx_t_2); __pyx_t_2 = 0; - __Pyx_XDECREF(__pyx_t_1); __pyx_t_1 = 0; - __Pyx_XDECREF(__pyx_t_7); __pyx_t_7 = 0; - goto __pyx_L3_exception_handled; - } - goto __pyx_L4_except_error; - __pyx_L4_except_error:; - - /* "fontTools/pens/momentsPen.py":3 - * from fontTools.pens.basePen import BasePen, OpenContourError - * - * try: # <<<<<<<<<<<<<< - * import cython - * - */ - __Pyx_XGIVEREF(__pyx_t_3); - __Pyx_XGIVEREF(__pyx_t_4); - __Pyx_XGIVEREF(__pyx_t_5); - __Pyx_ExceptionReset(__pyx_t_3, __pyx_t_4, __pyx_t_5); - goto __pyx_L1_error; - __pyx_L3_exception_handled:; - __Pyx_XGIVEREF(__pyx_t_3); - __Pyx_XGIVEREF(__pyx_t_4); - __Pyx_XGIVEREF(__pyx_t_5); - __Pyx_ExceptionReset(__pyx_t_3, __pyx_t_4, __pyx_t_5); - __pyx_L7_try_end:; - } - - /* "fontTools/pens/momentsPen.py":14 - * - * - * __all__ = ["MomentsPen"] # <<<<<<<<<<<<<< - * - * - */ - __pyx_t_7 = PyList_New(1); if (unlikely(!__pyx_t_7)) __PYX_ERR(0, 14, __pyx_L1_error) - __Pyx_GOTREF(__pyx_t_7); - __Pyx_INCREF(__pyx_n_u_MomentsPen); - __Pyx_GIVEREF(__pyx_n_u_MomentsPen); - PyList_SET_ITEM(__pyx_t_7, 0, __pyx_n_u_MomentsPen); - if 
(PyDict_SetItem(__pyx_d, __pyx_n_s_all, __pyx_t_7) < 0) __PYX_ERR(0, 14, __pyx_L1_error) - __Pyx_DECREF(__pyx_t_7); __pyx_t_7 = 0; - - /* "fontTools/pens/momentsPen.py":17 - * - * - * class MomentsPen(BasePen): # <<<<<<<<<<<<<< - * def __init__(self, glyphset=None): - * BasePen.__init__(self, glyphset) - */ - __Pyx_GetModuleGlobalName(__pyx_t_7, __pyx_n_s_BasePen); if (unlikely(!__pyx_t_7)) __PYX_ERR(0, 17, __pyx_L1_error) - __Pyx_GOTREF(__pyx_t_7); - __pyx_t_1 = PyTuple_New(1); if (unlikely(!__pyx_t_1)) __PYX_ERR(0, 17, __pyx_L1_error) - __Pyx_GOTREF(__pyx_t_1); - __Pyx_GIVEREF(__pyx_t_7); - PyTuple_SET_ITEM(__pyx_t_1, 0, __pyx_t_7); - __pyx_t_7 = 0; - __pyx_t_7 = __Pyx_CalculateMetaclass(NULL, __pyx_t_1); if (unlikely(!__pyx_t_7)) __PYX_ERR(0, 17, __pyx_L1_error) - __Pyx_GOTREF(__pyx_t_7); - __pyx_t_2 = __Pyx_Py3MetaclassPrepare(__pyx_t_7, __pyx_t_1, __pyx_n_s_MomentsPen, __pyx_n_s_MomentsPen, (PyObject *) NULL, __pyx_n_s_fontTools_pens_momentsPen, (PyObject *) NULL); if (unlikely(!__pyx_t_2)) __PYX_ERR(0, 17, __pyx_L1_error) - __Pyx_GOTREF(__pyx_t_2); - - /* "fontTools/pens/momentsPen.py":18 - * - * class MomentsPen(BasePen): - * def __init__(self, glyphset=None): # <<<<<<<<<<<<<< - * BasePen.__init__(self, glyphset) - * - */ - __pyx_t_9 = __Pyx_CyFunction_New(&__pyx_mdef_9fontTools_4pens_10momentsPen_10MomentsPen_1__init__, 0, __pyx_n_s_MomentsPen___init, NULL, __pyx_n_s_fontTools_pens_momentsPen, __pyx_d, ((PyObject *)__pyx_codeobj__2)); if (unlikely(!__pyx_t_9)) __PYX_ERR(0, 18, __pyx_L1_error) - __Pyx_GOTREF(__pyx_t_9); - __Pyx_CyFunction_SetDefaultsTuple(__pyx_t_9, __pyx_tuple__3); - if (__Pyx_SetNameInClass(__pyx_t_2, __pyx_n_s_init, __pyx_t_9) < 0) __PYX_ERR(0, 18, __pyx_L1_error) - __Pyx_DECREF(__pyx_t_9); __pyx_t_9 = 0; - - /* "fontTools/pens/momentsPen.py":28 - * self.momentYY = 0 - * - * def _moveTo(self, p0): # <<<<<<<<<<<<<< - * self.__startPoint = p0 - * - */ - __pyx_t_9 = __Pyx_CyFunction_New(&__pyx_mdef_9fontTools_4pens_10momentsPen_10MomentsPen_3_moveTo, 0, __pyx_n_s_MomentsPen__moveTo, NULL, __pyx_n_s_fontTools_pens_momentsPen, __pyx_d, ((PyObject *)__pyx_codeobj__5)); if (unlikely(!__pyx_t_9)) __PYX_ERR(0, 28, __pyx_L1_error) - __Pyx_GOTREF(__pyx_t_9); - if (__Pyx_SetNameInClass(__pyx_t_2, __pyx_n_s_moveTo, __pyx_t_9) < 0) __PYX_ERR(0, 28, __pyx_L1_error) - __Pyx_DECREF(__pyx_t_9); __pyx_t_9 = 0; - - /* "fontTools/pens/momentsPen.py":31 - * self.__startPoint = p0 - * - * def _closePath(self): # <<<<<<<<<<<<<< - * p0 = self._getCurrentPoint() - * if p0 != self.__startPoint: - */ - __pyx_t_9 = __Pyx_CyFunction_New(&__pyx_mdef_9fontTools_4pens_10momentsPen_10MomentsPen_5_closePath, 0, __pyx_n_s_MomentsPen__closePath, NULL, __pyx_n_s_fontTools_pens_momentsPen, __pyx_d, ((PyObject *)__pyx_codeobj__7)); if (unlikely(!__pyx_t_9)) __PYX_ERR(0, 31, __pyx_L1_error) - __Pyx_GOTREF(__pyx_t_9); - if (__Pyx_SetNameInClass(__pyx_t_2, __pyx_n_s_closePath, __pyx_t_9) < 0) __PYX_ERR(0, 31, __pyx_L1_error) - __Pyx_DECREF(__pyx_t_9); __pyx_t_9 = 0; - - /* "fontTools/pens/momentsPen.py":36 - * self._lineTo(self.__startPoint) - * - * def _endPath(self): # <<<<<<<<<<<<<< - * p0 = self._getCurrentPoint() - * if p0 != self.__startPoint: - */ - __pyx_t_9 = __Pyx_CyFunction_New(&__pyx_mdef_9fontTools_4pens_10momentsPen_10MomentsPen_7_endPath, 0, __pyx_n_s_MomentsPen__endPath, NULL, __pyx_n_s_fontTools_pens_momentsPen, __pyx_d, ((PyObject *)__pyx_codeobj__9)); if (unlikely(!__pyx_t_9)) __PYX_ERR(0, 36, __pyx_L1_error) - __Pyx_GOTREF(__pyx_t_9); - if (__Pyx_SetNameInClass(__pyx_t_2, 
__pyx_n_s_endPath, __pyx_t_9) < 0) __PYX_ERR(0, 36, __pyx_L1_error) - __Pyx_DECREF(__pyx_t_9); __pyx_t_9 = 0; - - /* "fontTools/pens/momentsPen.py":57 - * @cython.locals(x0=cython.double, y0=cython.double) - * @cython.locals(x1=cython.double, y1=cython.double) - * def _lineTo(self, p1): # <<<<<<<<<<<<<< - * x0, y0 = self._getCurrentPoint() - * x1, y1 = p1 - */ - __pyx_t_9 = __Pyx_CyFunction_New(&__pyx_mdef_9fontTools_4pens_10momentsPen_10MomentsPen_9_lineTo, 0, __pyx_n_s_MomentsPen__lineTo, NULL, __pyx_n_s_fontTools_pens_momentsPen, __pyx_d, ((PyObject *)__pyx_codeobj__11)); if (unlikely(!__pyx_t_9)) __PYX_ERR(0, 57, __pyx_L1_error) - __Pyx_GOTREF(__pyx_t_9); - if (__Pyx_SetNameInClass(__pyx_t_2, __pyx_n_s_lineTo, __pyx_t_9) < 0) __PYX_ERR(0, 57, __pyx_L1_error) - __Pyx_DECREF(__pyx_t_9); __pyx_t_9 = 0; - - /* "fontTools/pens/momentsPen.py":159 - * @cython.locals(x1=cython.double, y1=cython.double) - * @cython.locals(x2=cython.double, y2=cython.double) - * def _qCurveToOne(self, p1, p2): # <<<<<<<<<<<<<< - * x0, y0 = self._getCurrentPoint() - * x1, y1 = p1 - */ - __pyx_t_9 = __Pyx_CyFunction_New(&__pyx_mdef_9fontTools_4pens_10momentsPen_10MomentsPen_11_qCurveToOne, 0, __pyx_n_s_MomentsPen__qCurveToOne, NULL, __pyx_n_s_fontTools_pens_momentsPen, __pyx_d, ((PyObject *)__pyx_codeobj__13)); if (unlikely(!__pyx_t_9)) __PYX_ERR(0, 159, __pyx_L1_error) - __Pyx_GOTREF(__pyx_t_9); - if (__Pyx_SetNameInClass(__pyx_t_2, __pyx_n_s_qCurveToOne, __pyx_t_9) < 0) __PYX_ERR(0, 159, __pyx_L1_error) - __Pyx_DECREF(__pyx_t_9); __pyx_t_9 = 0; - - /* "fontTools/pens/momentsPen.py":450 - * @cython.locals(x2=cython.double, y2=cython.double) - * @cython.locals(x3=cython.double, y3=cython.double) - * def _curveToOne(self, p1, p2, p3): # <<<<<<<<<<<<<< - * x0, y0 = self._getCurrentPoint() - * x1, y1 = p1 - */ - __pyx_t_9 = __Pyx_CyFunction_New(&__pyx_mdef_9fontTools_4pens_10momentsPen_10MomentsPen_13_curveToOne, 0, __pyx_n_s_MomentsPen__curveToOne, NULL, __pyx_n_s_fontTools_pens_momentsPen, __pyx_d, ((PyObject *)__pyx_codeobj__15)); if (unlikely(!__pyx_t_9)) __PYX_ERR(0, 450, __pyx_L1_error) - __Pyx_GOTREF(__pyx_t_9); - if (__Pyx_SetNameInClass(__pyx_t_2, __pyx_n_s_curveToOne, __pyx_t_9) < 0) __PYX_ERR(0, 450, __pyx_L1_error) - __Pyx_DECREF(__pyx_t_9); __pyx_t_9 = 0; - - /* "fontTools/pens/momentsPen.py":17 - * - * - * class MomentsPen(BasePen): # <<<<<<<<<<<<<< - * def __init__(self, glyphset=None): - * BasePen.__init__(self, glyphset) - */ - __pyx_t_9 = __Pyx_Py3ClassCreate(__pyx_t_7, __pyx_n_s_MomentsPen, __pyx_t_1, __pyx_t_2, NULL, 0, 0); if (unlikely(!__pyx_t_9)) __PYX_ERR(0, 17, __pyx_L1_error) - __Pyx_GOTREF(__pyx_t_9); - if (PyDict_SetItem(__pyx_d, __pyx_n_s_MomentsPen, __pyx_t_9) < 0) __PYX_ERR(0, 17, __pyx_L1_error) - __Pyx_DECREF(__pyx_t_9); __pyx_t_9 = 0; - __Pyx_DECREF(__pyx_t_2); __pyx_t_2 = 0; - __Pyx_DECREF(__pyx_t_7); __pyx_t_7 = 0; - __Pyx_DECREF(__pyx_t_1); __pyx_t_1 = 0; - - /* "fontTools/pens/momentsPen.py":869 - * - * - * if __name__ == "__main__": # <<<<<<<<<<<<<< - * from fontTools.misc.symfont import x, y, printGreenPen - * - */ - __Pyx_GetModuleGlobalName(__pyx_t_1, __pyx_n_s_name); if (unlikely(!__pyx_t_1)) __PYX_ERR(0, 869, __pyx_L1_error) - __Pyx_GOTREF(__pyx_t_1); - __pyx_t_10 = (__Pyx_PyUnicode_Equals(__pyx_t_1, __pyx_n_u_main, Py_EQ)); if (unlikely(__pyx_t_10 < 0)) __PYX_ERR(0, 869, __pyx_L1_error) - __Pyx_DECREF(__pyx_t_1); __pyx_t_1 = 0; - if (__pyx_t_10) { - - /* "fontTools/pens/momentsPen.py":870 - * - * if __name__ == "__main__": - * from fontTools.misc.symfont import x, y, 
printGreenPen # <<<<<<<<<<<<<< - * - * printGreenPen( - */ - __pyx_t_1 = PyList_New(3); if (unlikely(!__pyx_t_1)) __PYX_ERR(0, 870, __pyx_L1_error) - __Pyx_GOTREF(__pyx_t_1); - __Pyx_INCREF(__pyx_n_s_x); - __Pyx_GIVEREF(__pyx_n_s_x); - PyList_SET_ITEM(__pyx_t_1, 0, __pyx_n_s_x); - __Pyx_INCREF(__pyx_n_s_y); - __Pyx_GIVEREF(__pyx_n_s_y); - PyList_SET_ITEM(__pyx_t_1, 1, __pyx_n_s_y); - __Pyx_INCREF(__pyx_n_s_printGreenPen); - __Pyx_GIVEREF(__pyx_n_s_printGreenPen); - PyList_SET_ITEM(__pyx_t_1, 2, __pyx_n_s_printGreenPen); - __pyx_t_7 = __Pyx_Import(__pyx_n_s_fontTools_misc_symfont, __pyx_t_1, 0); if (unlikely(!__pyx_t_7)) __PYX_ERR(0, 870, __pyx_L1_error) - __Pyx_GOTREF(__pyx_t_7); - __Pyx_DECREF(__pyx_t_1); __pyx_t_1 = 0; - __pyx_t_1 = __Pyx_ImportFrom(__pyx_t_7, __pyx_n_s_x); if (unlikely(!__pyx_t_1)) __PYX_ERR(0, 870, __pyx_L1_error) - __Pyx_GOTREF(__pyx_t_1); - if (PyDict_SetItem(__pyx_d, __pyx_n_s_x, __pyx_t_1) < 0) __PYX_ERR(0, 870, __pyx_L1_error) - __Pyx_DECREF(__pyx_t_1); __pyx_t_1 = 0; - __pyx_t_1 = __Pyx_ImportFrom(__pyx_t_7, __pyx_n_s_y); if (unlikely(!__pyx_t_1)) __PYX_ERR(0, 870, __pyx_L1_error) - __Pyx_GOTREF(__pyx_t_1); - if (PyDict_SetItem(__pyx_d, __pyx_n_s_y, __pyx_t_1) < 0) __PYX_ERR(0, 870, __pyx_L1_error) - __Pyx_DECREF(__pyx_t_1); __pyx_t_1 = 0; - __pyx_t_1 = __Pyx_ImportFrom(__pyx_t_7, __pyx_n_s_printGreenPen); if (unlikely(!__pyx_t_1)) __PYX_ERR(0, 870, __pyx_L1_error) - __Pyx_GOTREF(__pyx_t_1); - if (PyDict_SetItem(__pyx_d, __pyx_n_s_printGreenPen, __pyx_t_1) < 0) __PYX_ERR(0, 870, __pyx_L1_error) - __Pyx_DECREF(__pyx_t_1); __pyx_t_1 = 0; - __Pyx_DECREF(__pyx_t_7); __pyx_t_7 = 0; - - /* "fontTools/pens/momentsPen.py":872 - * from fontTools.misc.symfont import x, y, printGreenPen - * - * printGreenPen( # <<<<<<<<<<<<<< - * "MomentsPen", - * [ - */ - __Pyx_GetModuleGlobalName(__pyx_t_7, __pyx_n_s_printGreenPen); if (unlikely(!__pyx_t_7)) __PYX_ERR(0, 872, __pyx_L1_error) - __Pyx_GOTREF(__pyx_t_7); - - /* "fontTools/pens/momentsPen.py":876 - * [ - * ("area", 1), - * ("momentX", x), # <<<<<<<<<<<<<< - * ("momentY", y), - * ("momentXX", x**2), - */ - __Pyx_GetModuleGlobalName(__pyx_t_1, __pyx_n_s_x); if (unlikely(!__pyx_t_1)) __PYX_ERR(0, 876, __pyx_L1_error) - __Pyx_GOTREF(__pyx_t_1); - __pyx_t_2 = PyTuple_New(2); if (unlikely(!__pyx_t_2)) __PYX_ERR(0, 876, __pyx_L1_error) - __Pyx_GOTREF(__pyx_t_2); - __Pyx_INCREF(__pyx_n_u_momentX); - __Pyx_GIVEREF(__pyx_n_u_momentX); - PyTuple_SET_ITEM(__pyx_t_2, 0, __pyx_n_u_momentX); - __Pyx_GIVEREF(__pyx_t_1); - PyTuple_SET_ITEM(__pyx_t_2, 1, __pyx_t_1); - __pyx_t_1 = 0; - - /* "fontTools/pens/momentsPen.py":877 - * ("area", 1), - * ("momentX", x), - * ("momentY", y), # <<<<<<<<<<<<<< - * ("momentXX", x**2), - * ("momentXY", x * y), - */ - __Pyx_GetModuleGlobalName(__pyx_t_1, __pyx_n_s_y); if (unlikely(!__pyx_t_1)) __PYX_ERR(0, 877, __pyx_L1_error) - __Pyx_GOTREF(__pyx_t_1); - __pyx_t_9 = PyTuple_New(2); if (unlikely(!__pyx_t_9)) __PYX_ERR(0, 877, __pyx_L1_error) - __Pyx_GOTREF(__pyx_t_9); - __Pyx_INCREF(__pyx_n_u_momentY); - __Pyx_GIVEREF(__pyx_n_u_momentY); - PyTuple_SET_ITEM(__pyx_t_9, 0, __pyx_n_u_momentY); - __Pyx_GIVEREF(__pyx_t_1); - PyTuple_SET_ITEM(__pyx_t_9, 1, __pyx_t_1); - __pyx_t_1 = 0; - - /* "fontTools/pens/momentsPen.py":878 - * ("momentX", x), - * ("momentY", y), - * ("momentXX", x**2), # <<<<<<<<<<<<<< - * ("momentXY", x * y), - * ("momentYY", y**2), - */ - __Pyx_GetModuleGlobalName(__pyx_t_1, __pyx_n_s_x); if (unlikely(!__pyx_t_1)) __PYX_ERR(0, 878, __pyx_L1_error) - __Pyx_GOTREF(__pyx_t_1); - __pyx_t_8 = 
PyNumber_Power(__pyx_t_1, __pyx_int_2, Py_None); if (unlikely(!__pyx_t_8)) __PYX_ERR(0, 878, __pyx_L1_error) - __Pyx_GOTREF(__pyx_t_8); - __Pyx_DECREF(__pyx_t_1); __pyx_t_1 = 0; - __pyx_t_1 = PyTuple_New(2); if (unlikely(!__pyx_t_1)) __PYX_ERR(0, 878, __pyx_L1_error) - __Pyx_GOTREF(__pyx_t_1); - __Pyx_INCREF(__pyx_n_u_momentXX); - __Pyx_GIVEREF(__pyx_n_u_momentXX); - PyTuple_SET_ITEM(__pyx_t_1, 0, __pyx_n_u_momentXX); - __Pyx_GIVEREF(__pyx_t_8); - PyTuple_SET_ITEM(__pyx_t_1, 1, __pyx_t_8); - __pyx_t_8 = 0; - - /* "fontTools/pens/momentsPen.py":879 - * ("momentY", y), - * ("momentXX", x**2), - * ("momentXY", x * y), # <<<<<<<<<<<<<< - * ("momentYY", y**2), - * ], - */ - __Pyx_GetModuleGlobalName(__pyx_t_8, __pyx_n_s_x); if (unlikely(!__pyx_t_8)) __PYX_ERR(0, 879, __pyx_L1_error) - __Pyx_GOTREF(__pyx_t_8); - __Pyx_GetModuleGlobalName(__pyx_t_11, __pyx_n_s_y); if (unlikely(!__pyx_t_11)) __PYX_ERR(0, 879, __pyx_L1_error) - __Pyx_GOTREF(__pyx_t_11); - __pyx_t_12 = PyNumber_Multiply(__pyx_t_8, __pyx_t_11); if (unlikely(!__pyx_t_12)) __PYX_ERR(0, 879, __pyx_L1_error) - __Pyx_GOTREF(__pyx_t_12); - __Pyx_DECREF(__pyx_t_8); __pyx_t_8 = 0; - __Pyx_DECREF(__pyx_t_11); __pyx_t_11 = 0; - __pyx_t_11 = PyTuple_New(2); if (unlikely(!__pyx_t_11)) __PYX_ERR(0, 879, __pyx_L1_error) - __Pyx_GOTREF(__pyx_t_11); - __Pyx_INCREF(__pyx_n_u_momentXY); - __Pyx_GIVEREF(__pyx_n_u_momentXY); - PyTuple_SET_ITEM(__pyx_t_11, 0, __pyx_n_u_momentXY); - __Pyx_GIVEREF(__pyx_t_12); - PyTuple_SET_ITEM(__pyx_t_11, 1, __pyx_t_12); - __pyx_t_12 = 0; - - /* "fontTools/pens/momentsPen.py":880 - * ("momentXX", x**2), - * ("momentXY", x * y), - * ("momentYY", y**2), # <<<<<<<<<<<<<< - * ], - * ) - */ - __Pyx_GetModuleGlobalName(__pyx_t_12, __pyx_n_s_y); if (unlikely(!__pyx_t_12)) __PYX_ERR(0, 880, __pyx_L1_error) - __Pyx_GOTREF(__pyx_t_12); - __pyx_t_8 = PyNumber_Power(__pyx_t_12, __pyx_int_2, Py_None); if (unlikely(!__pyx_t_8)) __PYX_ERR(0, 880, __pyx_L1_error) - __Pyx_GOTREF(__pyx_t_8); - __Pyx_DECREF(__pyx_t_12); __pyx_t_12 = 0; - __pyx_t_12 = PyTuple_New(2); if (unlikely(!__pyx_t_12)) __PYX_ERR(0, 880, __pyx_L1_error) - __Pyx_GOTREF(__pyx_t_12); - __Pyx_INCREF(__pyx_n_u_momentYY); - __Pyx_GIVEREF(__pyx_n_u_momentYY); - PyTuple_SET_ITEM(__pyx_t_12, 0, __pyx_n_u_momentYY); - __Pyx_GIVEREF(__pyx_t_8); - PyTuple_SET_ITEM(__pyx_t_12, 1, __pyx_t_8); - __pyx_t_8 = 0; - - /* "fontTools/pens/momentsPen.py":874 - * printGreenPen( - * "MomentsPen", - * [ # <<<<<<<<<<<<<< - * ("area", 1), - * ("momentX", x), - */ - __pyx_t_8 = PyList_New(6); if (unlikely(!__pyx_t_8)) __PYX_ERR(0, 874, __pyx_L1_error) - __Pyx_GOTREF(__pyx_t_8); - __Pyx_INCREF(__pyx_tuple__16); - __Pyx_GIVEREF(__pyx_tuple__16); - PyList_SET_ITEM(__pyx_t_8, 0, __pyx_tuple__16); - __Pyx_GIVEREF(__pyx_t_2); - PyList_SET_ITEM(__pyx_t_8, 1, __pyx_t_2); - __Pyx_GIVEREF(__pyx_t_9); - PyList_SET_ITEM(__pyx_t_8, 2, __pyx_t_9); - __Pyx_GIVEREF(__pyx_t_1); - PyList_SET_ITEM(__pyx_t_8, 3, __pyx_t_1); - __Pyx_GIVEREF(__pyx_t_11); - PyList_SET_ITEM(__pyx_t_8, 4, __pyx_t_11); - __Pyx_GIVEREF(__pyx_t_12); - PyList_SET_ITEM(__pyx_t_8, 5, __pyx_t_12); - __pyx_t_2 = 0; - __pyx_t_9 = 0; - __pyx_t_1 = 0; - __pyx_t_11 = 0; - __pyx_t_12 = 0; - - /* "fontTools/pens/momentsPen.py":872 - * from fontTools.misc.symfont import x, y, printGreenPen - * - * printGreenPen( # <<<<<<<<<<<<<< - * "MomentsPen", - * [ - */ - __pyx_t_12 = PyTuple_New(2); if (unlikely(!__pyx_t_12)) __PYX_ERR(0, 872, __pyx_L1_error) - __Pyx_GOTREF(__pyx_t_12); - __Pyx_INCREF(__pyx_n_u_MomentsPen); - __Pyx_GIVEREF(__pyx_n_u_MomentsPen); 
- PyTuple_SET_ITEM(__pyx_t_12, 0, __pyx_n_u_MomentsPen); - __Pyx_GIVEREF(__pyx_t_8); - PyTuple_SET_ITEM(__pyx_t_12, 1, __pyx_t_8); - __pyx_t_8 = 0; - __pyx_t_8 = __Pyx_PyObject_Call(__pyx_t_7, __pyx_t_12, NULL); if (unlikely(!__pyx_t_8)) __PYX_ERR(0, 872, __pyx_L1_error) - __Pyx_GOTREF(__pyx_t_8); - __Pyx_DECREF(__pyx_t_7); __pyx_t_7 = 0; - __Pyx_DECREF(__pyx_t_12); __pyx_t_12 = 0; - __Pyx_DECREF(__pyx_t_8); __pyx_t_8 = 0; - - /* "fontTools/pens/momentsPen.py":869 - * - * - * if __name__ == "__main__": # <<<<<<<<<<<<<< - * from fontTools.misc.symfont import x, y, printGreenPen - * - */ - } - - /* "fontTools/pens/momentsPen.py":1 - * from fontTools.pens.basePen import BasePen, OpenContourError # <<<<<<<<<<<<<< - * - * try: - */ - __pyx_t_8 = __Pyx_PyDict_NewPresized(0); if (unlikely(!__pyx_t_8)) __PYX_ERR(0, 1, __pyx_L1_error) - __Pyx_GOTREF(__pyx_t_8); - if (PyDict_SetItem(__pyx_d, __pyx_n_s_test, __pyx_t_8) < 0) __PYX_ERR(0, 1, __pyx_L1_error) - __Pyx_DECREF(__pyx_t_8); __pyx_t_8 = 0; - - /*--- Wrapped vars code ---*/ - - goto __pyx_L0; - __pyx_L1_error:; - __Pyx_XDECREF(__pyx_t_1); - __Pyx_XDECREF(__pyx_t_2); - __Pyx_XDECREF(__pyx_t_7); - __Pyx_XDECREF(__pyx_t_8); - __Pyx_XDECREF(__pyx_t_9); - __Pyx_XDECREF(__pyx_t_11); - __Pyx_XDECREF(__pyx_t_12); - if (__pyx_m) { - if (__pyx_d) { - __Pyx_AddTraceback("init fontTools.pens.momentsPen", __pyx_clineno, __pyx_lineno, __pyx_filename); - } - Py_CLEAR(__pyx_m); - } else if (!PyErr_Occurred()) { - PyErr_SetString(PyExc_ImportError, "init fontTools.pens.momentsPen"); - } - __pyx_L0:; - __Pyx_RefNannyFinishContext(); - #if CYTHON_PEP489_MULTI_PHASE_INIT - return (__pyx_m != NULL) ? 0 : -1; - #elif PY_MAJOR_VERSION >= 3 - return __pyx_m; - #else - return; - #endif -} - -/* --- Runtime support code --- */ -/* Refnanny */ -#if CYTHON_REFNANNY -static __Pyx_RefNannyAPIStruct *__Pyx_RefNannyImportAPI(const char *modname) { - PyObject *m = NULL, *p = NULL; - void *r = NULL; - m = PyImport_ImportModule(modname); - if (!m) goto end; - p = PyObject_GetAttrString(m, "RefNannyAPI"); - if (!p) goto end; - r = PyLong_AsVoidPtr(p); -end: - Py_XDECREF(p); - Py_XDECREF(m); - return (__Pyx_RefNannyAPIStruct *)r; -} -#endif - -/* PyObjectGetAttrStr */ -#if CYTHON_USE_TYPE_SLOTS -static CYTHON_INLINE PyObject* __Pyx_PyObject_GetAttrStr(PyObject* obj, PyObject* attr_name) { - PyTypeObject* tp = Py_TYPE(obj); - if (likely(tp->tp_getattro)) - return tp->tp_getattro(obj, attr_name); -#if PY_MAJOR_VERSION < 3 - if (likely(tp->tp_getattr)) - return tp->tp_getattr(obj, PyString_AS_STRING(attr_name)); -#endif - return PyObject_GetAttr(obj, attr_name); -} -#endif - -/* GetBuiltinName */ -static PyObject *__Pyx_GetBuiltinName(PyObject *name) { - PyObject* result = __Pyx_PyObject_GetAttrStr(__pyx_b, name); - if (unlikely(!result)) { - PyErr_Format(PyExc_NameError, -#if PY_MAJOR_VERSION >= 3 - "name '%U' is not defined", name); -#else - "name '%.200s' is not defined", PyString_AS_STRING(name)); -#endif - } - return result; -} - -/* RaiseDoubleKeywords */ -static void __Pyx_RaiseDoubleKeywordsError( - const char* func_name, - PyObject* kw_name) -{ - PyErr_Format(PyExc_TypeError, - #if PY_MAJOR_VERSION >= 3 - "%s() got multiple values for keyword argument '%U'", func_name, kw_name); - #else - "%s() got multiple values for keyword argument '%s'", func_name, - PyString_AsString(kw_name)); - #endif -} - -/* ParseKeywords */ -static int __Pyx_ParseOptionalKeywords( - PyObject *kwds, - PyObject **argnames[], - PyObject *kwds2, - PyObject *values[], - Py_ssize_t num_pos_args, - const 
char* function_name) -{ - PyObject *key = 0, *value = 0; - Py_ssize_t pos = 0; - PyObject*** name; - PyObject*** first_kw_arg = argnames + num_pos_args; - while (PyDict_Next(kwds, &pos, &key, &value)) { - name = first_kw_arg; - while (*name && (**name != key)) name++; - if (*name) { - values[name-argnames] = value; - continue; - } - name = first_kw_arg; - #if PY_MAJOR_VERSION < 3 - if (likely(PyString_Check(key))) { - while (*name) { - if ((CYTHON_COMPILING_IN_PYPY || PyString_GET_SIZE(**name) == PyString_GET_SIZE(key)) - && _PyString_Eq(**name, key)) { - values[name-argnames] = value; - break; - } - name++; - } - if (*name) continue; - else { - PyObject*** argname = argnames; - while (argname != first_kw_arg) { - if ((**argname == key) || ( - (CYTHON_COMPILING_IN_PYPY || PyString_GET_SIZE(**argname) == PyString_GET_SIZE(key)) - && _PyString_Eq(**argname, key))) { - goto arg_passed_twice; - } - argname++; - } - } - } else - #endif - if (likely(PyUnicode_Check(key))) { - while (*name) { - int cmp = (**name == key) ? 0 : - #if !CYTHON_COMPILING_IN_PYPY && PY_MAJOR_VERSION >= 3 - (__Pyx_PyUnicode_GET_LENGTH(**name) != __Pyx_PyUnicode_GET_LENGTH(key)) ? 1 : - #endif - PyUnicode_Compare(**name, key); - if (cmp < 0 && unlikely(PyErr_Occurred())) goto bad; - if (cmp == 0) { - values[name-argnames] = value; - break; - } - name++; - } - if (*name) continue; - else { - PyObject*** argname = argnames; - while (argname != first_kw_arg) { - int cmp = (**argname == key) ? 0 : - #if !CYTHON_COMPILING_IN_PYPY && PY_MAJOR_VERSION >= 3 - (__Pyx_PyUnicode_GET_LENGTH(**argname) != __Pyx_PyUnicode_GET_LENGTH(key)) ? 1 : - #endif - PyUnicode_Compare(**argname, key); - if (cmp < 0 && unlikely(PyErr_Occurred())) goto bad; - if (cmp == 0) goto arg_passed_twice; - argname++; - } - } - } else - goto invalid_keyword_type; - if (kwds2) { - if (unlikely(PyDict_SetItem(kwds2, key, value))) goto bad; - } else { - goto invalid_keyword; - } - } - return 0; -arg_passed_twice: - __Pyx_RaiseDoubleKeywordsError(function_name, key); - goto bad; -invalid_keyword_type: - PyErr_Format(PyExc_TypeError, - "%.200s() keywords must be strings", function_name); - goto bad; -invalid_keyword: - PyErr_Format(PyExc_TypeError, - #if PY_MAJOR_VERSION < 3 - "%.200s() got an unexpected keyword argument '%.200s'", - function_name, PyString_AsString(key)); - #else - "%s() got an unexpected keyword argument '%U'", - function_name, key); - #endif -bad: - return -1; -} - -/* RaiseArgTupleInvalid */ -static void __Pyx_RaiseArgtupleInvalid( - const char* func_name, - int exact, - Py_ssize_t num_min, - Py_ssize_t num_max, - Py_ssize_t num_found) -{ - Py_ssize_t num_expected; - const char *more_or_less; - if (num_found < num_min) { - num_expected = num_min; - more_or_less = "at least"; - } else { - num_expected = num_max; - more_or_less = "at most"; - } - if (exact) { - more_or_less = "exactly"; - } - PyErr_Format(PyExc_TypeError, - "%.200s() takes %.8s %" CYTHON_FORMAT_SSIZE_T "d positional argument%.1s (%" CYTHON_FORMAT_SSIZE_T "d given)", - func_name, more_or_less, num_expected, - (num_expected == 1) ? "" : "s", num_found); -} - -/* PyDictVersioning */ -#if CYTHON_USE_DICT_VERSIONS && CYTHON_USE_TYPE_SLOTS -static CYTHON_INLINE PY_UINT64_T __Pyx_get_tp_dict_version(PyObject *obj) { - PyObject *dict = Py_TYPE(obj)->tp_dict; - return likely(dict) ? 
__PYX_GET_DICT_VERSION(dict) : 0; -} -static CYTHON_INLINE PY_UINT64_T __Pyx_get_object_dict_version(PyObject *obj) { - PyObject **dictptr = NULL; - Py_ssize_t offset = Py_TYPE(obj)->tp_dictoffset; - if (offset) { -#if CYTHON_COMPILING_IN_CPYTHON - dictptr = (likely(offset > 0)) ? (PyObject **) ((char *)obj + offset) : _PyObject_GetDictPtr(obj); -#else - dictptr = _PyObject_GetDictPtr(obj); -#endif - } - return (dictptr && *dictptr) ? __PYX_GET_DICT_VERSION(*dictptr) : 0; -} -static CYTHON_INLINE int __Pyx_object_dict_version_matches(PyObject* obj, PY_UINT64_T tp_dict_version, PY_UINT64_T obj_dict_version) { - PyObject *dict = Py_TYPE(obj)->tp_dict; - if (unlikely(!dict) || unlikely(tp_dict_version != __PYX_GET_DICT_VERSION(dict))) - return 0; - return obj_dict_version == __Pyx_get_object_dict_version(obj); -} -#endif - -/* GetModuleGlobalName */ -#if CYTHON_USE_DICT_VERSIONS -static PyObject *__Pyx__GetModuleGlobalName(PyObject *name, PY_UINT64_T *dict_version, PyObject **dict_cached_value) -#else -static CYTHON_INLINE PyObject *__Pyx__GetModuleGlobalName(PyObject *name) -#endif -{ - PyObject *result; -#if !CYTHON_AVOID_BORROWED_REFS -#if CYTHON_COMPILING_IN_CPYTHON && PY_VERSION_HEX >= 0x030500A1 - result = _PyDict_GetItem_KnownHash(__pyx_d, name, ((PyASCIIObject *) name)->hash); - __PYX_UPDATE_DICT_CACHE(__pyx_d, result, *dict_cached_value, *dict_version) - if (likely(result)) { - return __Pyx_NewRef(result); - } else if (unlikely(PyErr_Occurred())) { - return NULL; - } -#else - result = PyDict_GetItem(__pyx_d, name); - __PYX_UPDATE_DICT_CACHE(__pyx_d, result, *dict_cached_value, *dict_version) - if (likely(result)) { - return __Pyx_NewRef(result); - } -#endif -#else - result = PyObject_GetItem(__pyx_d, name); - __PYX_UPDATE_DICT_CACHE(__pyx_d, result, *dict_cached_value, *dict_version) - if (likely(result)) { - return __Pyx_NewRef(result); - } - PyErr_Clear(); -#endif - return __Pyx_GetBuiltinName(name); -} - -/* PyFunctionFastCall */ -#if CYTHON_FAST_PYCALL -static PyObject* __Pyx_PyFunction_FastCallNoKw(PyCodeObject *co, PyObject **args, Py_ssize_t na, - PyObject *globals) { - PyFrameObject *f; - PyThreadState *tstate = __Pyx_PyThreadState_Current; - PyObject **fastlocals; - Py_ssize_t i; - PyObject *result; - assert(globals != NULL); - /* XXX Perhaps we should create a specialized - PyFrame_New() that doesn't take locals, but does - take builtins without sanity checking them. - */ - assert(tstate != NULL); - f = PyFrame_New(tstate, co, globals, NULL); - if (f == NULL) { - return NULL; - } - fastlocals = __Pyx_PyFrame_GetLocalsplus(f); - for (i = 0; i < na; i++) { - Py_INCREF(*args); - fastlocals[i] = *args++; - } - result = PyEval_EvalFrameEx(f,0); - ++tstate->recursion_depth; - Py_DECREF(f); - --tstate->recursion_depth; - return result; -} -#if 1 || PY_VERSION_HEX < 0x030600B1 -static PyObject *__Pyx_PyFunction_FastCallDict(PyObject *func, PyObject **args, Py_ssize_t nargs, PyObject *kwargs) { - PyCodeObject *co = (PyCodeObject *)PyFunction_GET_CODE(func); - PyObject *globals = PyFunction_GET_GLOBALS(func); - PyObject *argdefs = PyFunction_GET_DEFAULTS(func); - PyObject *closure; -#if PY_MAJOR_VERSION >= 3 - PyObject *kwdefs; -#endif - PyObject *kwtuple, **k; - PyObject **d; - Py_ssize_t nd; - Py_ssize_t nk; - PyObject *result; - assert(kwargs == NULL || PyDict_Check(kwargs)); - nk = kwargs ? 
PyDict_Size(kwargs) : 0; - if (Py_EnterRecursiveCall((char*)" while calling a Python object")) { - return NULL; - } - if ( -#if PY_MAJOR_VERSION >= 3 - co->co_kwonlyargcount == 0 && -#endif - likely(kwargs == NULL || nk == 0) && - co->co_flags == (CO_OPTIMIZED | CO_NEWLOCALS | CO_NOFREE)) { - if (argdefs == NULL && co->co_argcount == nargs) { - result = __Pyx_PyFunction_FastCallNoKw(co, args, nargs, globals); - goto done; - } - else if (nargs == 0 && argdefs != NULL - && co->co_argcount == Py_SIZE(argdefs)) { - /* function called with no arguments, but all parameters have - a default value: use default values as arguments .*/ - args = &PyTuple_GET_ITEM(argdefs, 0); - result =__Pyx_PyFunction_FastCallNoKw(co, args, Py_SIZE(argdefs), globals); - goto done; - } - } - if (kwargs != NULL) { - Py_ssize_t pos, i; - kwtuple = PyTuple_New(2 * nk); - if (kwtuple == NULL) { - result = NULL; - goto done; - } - k = &PyTuple_GET_ITEM(kwtuple, 0); - pos = i = 0; - while (PyDict_Next(kwargs, &pos, &k[i], &k[i+1])) { - Py_INCREF(k[i]); - Py_INCREF(k[i+1]); - i += 2; - } - nk = i / 2; - } - else { - kwtuple = NULL; - k = NULL; - } - closure = PyFunction_GET_CLOSURE(func); -#if PY_MAJOR_VERSION >= 3 - kwdefs = PyFunction_GET_KW_DEFAULTS(func); -#endif - if (argdefs != NULL) { - d = &PyTuple_GET_ITEM(argdefs, 0); - nd = Py_SIZE(argdefs); - } - else { - d = NULL; - nd = 0; - } -#if PY_MAJOR_VERSION >= 3 - result = PyEval_EvalCodeEx((PyObject*)co, globals, (PyObject *)NULL, - args, (int)nargs, - k, (int)nk, - d, (int)nd, kwdefs, closure); -#else - result = PyEval_EvalCodeEx(co, globals, (PyObject *)NULL, - args, (int)nargs, - k, (int)nk, - d, (int)nd, closure); -#endif - Py_XDECREF(kwtuple); -done: - Py_LeaveRecursiveCall(); - return result; -} -#endif -#endif - -/* PyCFunctionFastCall */ -#if CYTHON_FAST_PYCCALL -static CYTHON_INLINE PyObject * __Pyx_PyCFunction_FastCall(PyObject *func_obj, PyObject **args, Py_ssize_t nargs) { - PyCFunctionObject *func = (PyCFunctionObject*)func_obj; - PyCFunction meth = PyCFunction_GET_FUNCTION(func); - PyObject *self = PyCFunction_GET_SELF(func); - int flags = PyCFunction_GET_FLAGS(func); - assert(PyCFunction_Check(func)); - assert(METH_FASTCALL == (flags & ~(METH_CLASS | METH_STATIC | METH_COEXIST | METH_KEYWORDS | METH_STACKLESS))); - assert(nargs >= 0); - assert(nargs == 0 || args != NULL); - /* _PyCFunction_FastCallDict() must not be called with an exception set, - because it may clear it (directly or indirectly) and so the - caller loses its exception */ - assert(!PyErr_Occurred()); - if ((PY_VERSION_HEX < 0x030700A0) || unlikely(flags & METH_KEYWORDS)) { - return (*((__Pyx_PyCFunctionFastWithKeywords)(void*)meth)) (self, args, nargs, NULL); - } else { - return (*((__Pyx_PyCFunctionFast)(void*)meth)) (self, args, nargs); - } -} -#endif - -/* PyObjectCall */ -#if CYTHON_COMPILING_IN_CPYTHON -static CYTHON_INLINE PyObject* __Pyx_PyObject_Call(PyObject *func, PyObject *arg, PyObject *kw) { - PyObject *result; - ternaryfunc call = Py_TYPE(func)->tp_call; - if (unlikely(!call)) - return PyObject_Call(func, arg, kw); - if (unlikely(Py_EnterRecursiveCall((char*)" while calling a Python object"))) - return NULL; - result = (*call)(func, arg, kw); - Py_LeaveRecursiveCall(); - if (unlikely(!result) && unlikely(!PyErr_Occurred())) { - PyErr_SetString( - PyExc_SystemError, - "NULL result without error in PyObject_Call"); - } - return result; -} -#endif - -/* PyObjectSetAttrStr */ -#if CYTHON_USE_TYPE_SLOTS -static CYTHON_INLINE int __Pyx_PyObject_SetAttrStr(PyObject* obj, PyObject* 
attr_name, PyObject* value) { - PyTypeObject* tp = Py_TYPE(obj); - if (likely(tp->tp_setattro)) - return tp->tp_setattro(obj, attr_name, value); -#if PY_MAJOR_VERSION < 3 - if (likely(tp->tp_setattr)) - return tp->tp_setattr(obj, PyString_AS_STRING(attr_name), value); -#endif - return PyObject_SetAttr(obj, attr_name, value); -} -#endif - -/* PyObjectCallMethO */ -#if CYTHON_COMPILING_IN_CPYTHON -static CYTHON_INLINE PyObject* __Pyx_PyObject_CallMethO(PyObject *func, PyObject *arg) { - PyObject *self, *result; - PyCFunction cfunc; - cfunc = PyCFunction_GET_FUNCTION(func); - self = PyCFunction_GET_SELF(func); - if (unlikely(Py_EnterRecursiveCall((char*)" while calling a Python object"))) - return NULL; - result = cfunc(self, arg); - Py_LeaveRecursiveCall(); - if (unlikely(!result) && unlikely(!PyErr_Occurred())) { - PyErr_SetString( - PyExc_SystemError, - "NULL result without error in PyObject_Call"); - } - return result; -} -#endif - -/* PyObjectCallNoArg */ -#if CYTHON_COMPILING_IN_CPYTHON -static CYTHON_INLINE PyObject* __Pyx_PyObject_CallNoArg(PyObject *func) { -#if CYTHON_FAST_PYCALL - if (PyFunction_Check(func)) { - return __Pyx_PyFunction_FastCall(func, NULL, 0); - } -#endif -#if defined(__Pyx_CyFunction_USED) && defined(NDEBUG) - if (likely(PyCFunction_Check(func) || __Pyx_CyFunction_Check(func))) -#else - if (likely(PyCFunction_Check(func))) -#endif - { - if (likely(PyCFunction_GET_FLAGS(func) & METH_NOARGS)) { - return __Pyx_PyObject_CallMethO(func, NULL); - } - } - return __Pyx_PyObject_Call(func, __pyx_empty_tuple, NULL); -} -#endif - -/* PyObjectCallOneArg */ -#if CYTHON_COMPILING_IN_CPYTHON -static PyObject* __Pyx__PyObject_CallOneArg(PyObject *func, PyObject *arg) { - PyObject *result; - PyObject *args = PyTuple_New(1); - if (unlikely(!args)) return NULL; - Py_INCREF(arg); - PyTuple_SET_ITEM(args, 0, arg); - result = __Pyx_PyObject_Call(func, args, NULL); - Py_DECREF(args); - return result; -} -static CYTHON_INLINE PyObject* __Pyx_PyObject_CallOneArg(PyObject *func, PyObject *arg) { -#if CYTHON_FAST_PYCALL - if (PyFunction_Check(func)) { - return __Pyx_PyFunction_FastCall(func, &arg, 1); - } -#endif - if (likely(PyCFunction_Check(func))) { - if (likely(PyCFunction_GET_FLAGS(func) & METH_O)) { - return __Pyx_PyObject_CallMethO(func, arg); -#if CYTHON_FAST_PYCCALL - } else if (__Pyx_PyFastCFunction_Check(func)) { - return __Pyx_PyCFunction_FastCall(func, &arg, 1); -#endif - } - } - return __Pyx__PyObject_CallOneArg(func, arg); -} -#else -static CYTHON_INLINE PyObject* __Pyx_PyObject_CallOneArg(PyObject *func, PyObject *arg) { - PyObject *result; - PyObject *args = PyTuple_Pack(1, arg); - if (unlikely(!args)) return NULL; - result = __Pyx_PyObject_Call(func, args, NULL); - Py_DECREF(args); - return result; -} -#endif - -/* PyObjectCall2Args */ -static CYTHON_UNUSED PyObject* __Pyx_PyObject_Call2Args(PyObject* function, PyObject* arg1, PyObject* arg2) { - PyObject *args, *result = NULL; - #if CYTHON_FAST_PYCALL - if (PyFunction_Check(function)) { - PyObject *args[2] = {arg1, arg2}; - return __Pyx_PyFunction_FastCall(function, args, 2); - } - #endif - #if CYTHON_FAST_PYCCALL - if (__Pyx_PyFastCFunction_Check(function)) { - PyObject *args[2] = {arg1, arg2}; - return __Pyx_PyCFunction_FastCall(function, args, 2); - } - #endif - args = PyTuple_New(2); - if (unlikely(!args)) goto done; - Py_INCREF(arg1); - PyTuple_SET_ITEM(args, 0, arg1); - Py_INCREF(arg2); - PyTuple_SET_ITEM(args, 1, arg2); - Py_INCREF(function); - result = __Pyx_PyObject_Call(function, args, NULL); - Py_DECREF(args); 
- Py_DECREF(function); -done: - return result; -} - -/* PyErrFetchRestore */ -#if CYTHON_FAST_THREAD_STATE -static CYTHON_INLINE void __Pyx_ErrRestoreInState(PyThreadState *tstate, PyObject *type, PyObject *value, PyObject *tb) { - PyObject *tmp_type, *tmp_value, *tmp_tb; - tmp_type = tstate->curexc_type; - tmp_value = tstate->curexc_value; - tmp_tb = tstate->curexc_traceback; - tstate->curexc_type = type; - tstate->curexc_value = value; - tstate->curexc_traceback = tb; - Py_XDECREF(tmp_type); - Py_XDECREF(tmp_value); - Py_XDECREF(tmp_tb); -} -static CYTHON_INLINE void __Pyx_ErrFetchInState(PyThreadState *tstate, PyObject **type, PyObject **value, PyObject **tb) { - *type = tstate->curexc_type; - *value = tstate->curexc_value; - *tb = tstate->curexc_traceback; - tstate->curexc_type = 0; - tstate->curexc_value = 0; - tstate->curexc_traceback = 0; -} -#endif - -/* RaiseException */ -#if PY_MAJOR_VERSION < 3 -static void __Pyx_Raise(PyObject *type, PyObject *value, PyObject *tb, - CYTHON_UNUSED PyObject *cause) { - __Pyx_PyThreadState_declare - Py_XINCREF(type); - if (!value || value == Py_None) - value = NULL; - else - Py_INCREF(value); - if (!tb || tb == Py_None) - tb = NULL; - else { - Py_INCREF(tb); - if (!PyTraceBack_Check(tb)) { - PyErr_SetString(PyExc_TypeError, - "raise: arg 3 must be a traceback or None"); - goto raise_error; - } - } - if (PyType_Check(type)) { -#if CYTHON_COMPILING_IN_PYPY - if (!value) { - Py_INCREF(Py_None); - value = Py_None; - } -#endif - PyErr_NormalizeException(&type, &value, &tb); - } else { - if (value) { - PyErr_SetString(PyExc_TypeError, - "instance exception may not have a separate value"); - goto raise_error; - } - value = type; - type = (PyObject*) Py_TYPE(type); - Py_INCREF(type); - if (!PyType_IsSubtype((PyTypeObject *)type, (PyTypeObject *)PyExc_BaseException)) { - PyErr_SetString(PyExc_TypeError, - "raise: exception class must be a subclass of BaseException"); - goto raise_error; - } - } - __Pyx_PyThreadState_assign - __Pyx_ErrRestore(type, value, tb); - return; -raise_error: - Py_XDECREF(value); - Py_XDECREF(type); - Py_XDECREF(tb); - return; -} -#else -static void __Pyx_Raise(PyObject *type, PyObject *value, PyObject *tb, PyObject *cause) { - PyObject* owned_instance = NULL; - if (tb == Py_None) { - tb = 0; - } else if (tb && !PyTraceBack_Check(tb)) { - PyErr_SetString(PyExc_TypeError, - "raise: arg 3 must be a traceback or None"); - goto bad; - } - if (value == Py_None) - value = 0; - if (PyExceptionInstance_Check(type)) { - if (value) { - PyErr_SetString(PyExc_TypeError, - "instance exception may not have a separate value"); - goto bad; - } - value = type; - type = (PyObject*) Py_TYPE(value); - } else if (PyExceptionClass_Check(type)) { - PyObject *instance_class = NULL; - if (value && PyExceptionInstance_Check(value)) { - instance_class = (PyObject*) Py_TYPE(value); - if (instance_class != type) { - int is_subclass = PyObject_IsSubclass(instance_class, type); - if (!is_subclass) { - instance_class = NULL; - } else if (unlikely(is_subclass == -1)) { - goto bad; - } else { - type = instance_class; - } - } - } - if (!instance_class) { - PyObject *args; - if (!value) - args = PyTuple_New(0); - else if (PyTuple_Check(value)) { - Py_INCREF(value); - args = value; - } else - args = PyTuple_Pack(1, value); - if (!args) - goto bad; - owned_instance = PyObject_Call(type, args, NULL); - Py_DECREF(args); - if (!owned_instance) - goto bad; - value = owned_instance; - if (!PyExceptionInstance_Check(value)) { - PyErr_Format(PyExc_TypeError, - "calling %R 
should have returned an instance of " - "BaseException, not %R", - type, Py_TYPE(value)); - goto bad; - } - } - } else { - PyErr_SetString(PyExc_TypeError, - "raise: exception class must be a subclass of BaseException"); - goto bad; - } - if (cause) { - PyObject *fixed_cause; - if (cause == Py_None) { - fixed_cause = NULL; - } else if (PyExceptionClass_Check(cause)) { - fixed_cause = PyObject_CallObject(cause, NULL); - if (fixed_cause == NULL) - goto bad; - } else if (PyExceptionInstance_Check(cause)) { - fixed_cause = cause; - Py_INCREF(fixed_cause); - } else { - PyErr_SetString(PyExc_TypeError, - "exception causes must derive from " - "BaseException"); - goto bad; - } - PyException_SetCause(value, fixed_cause); - } - PyErr_SetObject(type, value); - if (tb) { -#if CYTHON_FAST_THREAD_STATE - PyThreadState *tstate = __Pyx_PyThreadState_Current; - PyObject* tmp_tb = tstate->curexc_traceback; - if (tb != tmp_tb) { - Py_INCREF(tb); - tstate->curexc_traceback = tb; - Py_XDECREF(tmp_tb); - } -#else - PyObject *tmp_type, *tmp_value, *tmp_tb; - PyErr_Fetch(&tmp_type, &tmp_value, &tmp_tb); - Py_INCREF(tb); - PyErr_Restore(tmp_type, tmp_value, tb); - Py_XDECREF(tmp_tb); -#endif - } -bad: - Py_XDECREF(owned_instance); - return; -} -#endif - -/* RaiseTooManyValuesToUnpack */ -static CYTHON_INLINE void __Pyx_RaiseTooManyValuesError(Py_ssize_t expected) { - PyErr_Format(PyExc_ValueError, - "too many values to unpack (expected %" CYTHON_FORMAT_SSIZE_T "d)", expected); -} - -/* RaiseNeedMoreValuesToUnpack */ -static CYTHON_INLINE void __Pyx_RaiseNeedMoreValuesError(Py_ssize_t index) { - PyErr_Format(PyExc_ValueError, - "need more than %" CYTHON_FORMAT_SSIZE_T "d value%.1s to unpack", - index, (index == 1) ? "" : "s"); -} - -/* IterFinish */ -static CYTHON_INLINE int __Pyx_IterFinish(void) { -#if CYTHON_FAST_THREAD_STATE - PyThreadState *tstate = __Pyx_PyThreadState_Current; - PyObject* exc_type = tstate->curexc_type; - if (unlikely(exc_type)) { - if (likely(__Pyx_PyErr_GivenExceptionMatches(exc_type, PyExc_StopIteration))) { - PyObject *exc_value, *exc_tb; - exc_value = tstate->curexc_value; - exc_tb = tstate->curexc_traceback; - tstate->curexc_type = 0; - tstate->curexc_value = 0; - tstate->curexc_traceback = 0; - Py_DECREF(exc_type); - Py_XDECREF(exc_value); - Py_XDECREF(exc_tb); - return 0; - } else { - return -1; - } - } - return 0; -#else - if (unlikely(PyErr_Occurred())) { - if (likely(PyErr_ExceptionMatches(PyExc_StopIteration))) { - PyErr_Clear(); - return 0; - } else { - return -1; - } - } - return 0; -#endif -} - -/* UnpackItemEndCheck */ -static int __Pyx_IternextUnpackEndCheck(PyObject *retval, Py_ssize_t expected) { - if (unlikely(retval)) { - Py_DECREF(retval); - __Pyx_RaiseTooManyValuesError(expected); - return -1; - } - return __Pyx_IterFinish(); -} - -/* Import */ -static PyObject *__Pyx_Import(PyObject *name, PyObject *from_list, int level) { - PyObject *empty_list = 0; - PyObject *module = 0; - PyObject *global_dict = 0; - PyObject *empty_dict = 0; - PyObject *list; - #if PY_MAJOR_VERSION < 3 - PyObject *py_import; - py_import = __Pyx_PyObject_GetAttrStr(__pyx_b, __pyx_n_s_import); - if (!py_import) - goto bad; - #endif - if (from_list) - list = from_list; - else { - empty_list = PyList_New(0); - if (!empty_list) - goto bad; - list = empty_list; - } - global_dict = PyModule_GetDict(__pyx_m); - if (!global_dict) - goto bad; - empty_dict = PyDict_New(); - if (!empty_dict) - goto bad; - { - #if PY_MAJOR_VERSION >= 3 - if (level == -1) { - if ((1) && (strchr(__Pyx_MODULE_NAME, '.'))) { - 
module = PyImport_ImportModuleLevelObject( - name, global_dict, empty_dict, list, 1); - if (!module) { - if (!PyErr_ExceptionMatches(PyExc_ImportError)) - goto bad; - PyErr_Clear(); - } - } - level = 0; - } - #endif - if (!module) { - #if PY_MAJOR_VERSION < 3 - PyObject *py_level = PyInt_FromLong(level); - if (!py_level) - goto bad; - module = PyObject_CallFunctionObjArgs(py_import, - name, global_dict, empty_dict, list, py_level, (PyObject *)NULL); - Py_DECREF(py_level); - #else - module = PyImport_ImportModuleLevelObject( - name, global_dict, empty_dict, list, level); - #endif - } - } -bad: - #if PY_MAJOR_VERSION < 3 - Py_XDECREF(py_import); - #endif - Py_XDECREF(empty_list); - Py_XDECREF(empty_dict); - return module; -} - -/* ImportFrom */ -static PyObject* __Pyx_ImportFrom(PyObject* module, PyObject* name) { - PyObject* value = __Pyx_PyObject_GetAttrStr(module, name); - if (unlikely(!value) && PyErr_ExceptionMatches(PyExc_AttributeError)) { - PyErr_Format(PyExc_ImportError, - #if PY_MAJOR_VERSION < 3 - "cannot import name %.230s", PyString_AS_STRING(name)); - #else - "cannot import name %S", name); - #endif - } - return value; -} - -/* GetTopmostException */ -#if CYTHON_USE_EXC_INFO_STACK -static _PyErr_StackItem * -__Pyx_PyErr_GetTopmostException(PyThreadState *tstate) -{ - _PyErr_StackItem *exc_info = tstate->exc_info; - while ((exc_info->exc_type == NULL || exc_info->exc_type == Py_None) && - exc_info->previous_item != NULL) - { - exc_info = exc_info->previous_item; - } - return exc_info; -} -#endif - -/* SaveResetException */ -#if CYTHON_FAST_THREAD_STATE -static CYTHON_INLINE void __Pyx__ExceptionSave(PyThreadState *tstate, PyObject **type, PyObject **value, PyObject **tb) { - #if CYTHON_USE_EXC_INFO_STACK - _PyErr_StackItem *exc_info = __Pyx_PyErr_GetTopmostException(tstate); - *type = exc_info->exc_type; - *value = exc_info->exc_value; - *tb = exc_info->exc_traceback; - #else - *type = tstate->exc_type; - *value = tstate->exc_value; - *tb = tstate->exc_traceback; - #endif - Py_XINCREF(*type); - Py_XINCREF(*value); - Py_XINCREF(*tb); -} -static CYTHON_INLINE void __Pyx__ExceptionReset(PyThreadState *tstate, PyObject *type, PyObject *value, PyObject *tb) { - PyObject *tmp_type, *tmp_value, *tmp_tb; - #if CYTHON_USE_EXC_INFO_STACK - _PyErr_StackItem *exc_info = tstate->exc_info; - tmp_type = exc_info->exc_type; - tmp_value = exc_info->exc_value; - tmp_tb = exc_info->exc_traceback; - exc_info->exc_type = type; - exc_info->exc_value = value; - exc_info->exc_traceback = tb; - #else - tmp_type = tstate->exc_type; - tmp_value = tstate->exc_value; - tmp_tb = tstate->exc_traceback; - tstate->exc_type = type; - tstate->exc_value = value; - tstate->exc_traceback = tb; - #endif - Py_XDECREF(tmp_type); - Py_XDECREF(tmp_value); - Py_XDECREF(tmp_tb); -} -#endif - -/* PyErrExceptionMatches */ -#if CYTHON_FAST_THREAD_STATE -static int __Pyx_PyErr_ExceptionMatchesTuple(PyObject *exc_type, PyObject *tuple) { - Py_ssize_t i, n; - n = PyTuple_GET_SIZE(tuple); -#if PY_MAJOR_VERSION >= 3 - for (i=0; i<n; i++) { - if (exc_type == PyTuple_GET_ITEM(tuple, i)) return 1; - } -#endif - for (i=0; i<n; i++) { - if (__Pyx_PyErr_GivenExceptionMatches(exc_type, PyTuple_GET_ITEM(tuple, i))) return 1; - } - return 0; -} -static CYTHON_INLINE int __Pyx_PyErr_ExceptionMatchesInState(PyThreadState* tstate, PyObject* err) { - PyObject *exc_type = tstate->curexc_type; - if (exc_type == err) return 1; - if (unlikely(!exc_type)) return 0; - if (unlikely(PyTuple_Check(err))) - return __Pyx_PyErr_ExceptionMatchesTuple(exc_type, err); - return __Pyx_PyErr_GivenExceptionMatches(exc_type, err); -} -#endif - -/* GetException */ -#if CYTHON_FAST_THREAD_STATE -static int __Pyx__GetException(PyThreadState *tstate, PyObject **type, PyObject **value, PyObject **tb) -#else -static int __Pyx_GetException(PyObject **type, PyObject **value, PyObject **tb) -#endif -{ - 
PyObject *local_type, *local_value, *local_tb; -#if CYTHON_FAST_THREAD_STATE - PyObject *tmp_type, *tmp_value, *tmp_tb; - local_type = tstate->curexc_type; - local_value = tstate->curexc_value; - local_tb = tstate->curexc_traceback; - tstate->curexc_type = 0; - tstate->curexc_value = 0; - tstate->curexc_traceback = 0; -#else - PyErr_Fetch(&local_type, &local_value, &local_tb); -#endif - PyErr_NormalizeException(&local_type, &local_value, &local_tb); -#if CYTHON_FAST_THREAD_STATE - if (unlikely(tstate->curexc_type)) -#else - if (unlikely(PyErr_Occurred())) -#endif - goto bad; - #if PY_MAJOR_VERSION >= 3 - if (local_tb) { - if (unlikely(PyException_SetTraceback(local_value, local_tb) < 0)) - goto bad; - } - #endif - Py_XINCREF(local_tb); - Py_XINCREF(local_type); - Py_XINCREF(local_value); - *type = local_type; - *value = local_value; - *tb = local_tb; -#if CYTHON_FAST_THREAD_STATE - #if CYTHON_USE_EXC_INFO_STACK - { - _PyErr_StackItem *exc_info = tstate->exc_info; - tmp_type = exc_info->exc_type; - tmp_value = exc_info->exc_value; - tmp_tb = exc_info->exc_traceback; - exc_info->exc_type = local_type; - exc_info->exc_value = local_value; - exc_info->exc_traceback = local_tb; - } - #else - tmp_type = tstate->exc_type; - tmp_value = tstate->exc_value; - tmp_tb = tstate->exc_traceback; - tstate->exc_type = local_type; - tstate->exc_value = local_value; - tstate->exc_traceback = local_tb; - #endif - Py_XDECREF(tmp_type); - Py_XDECREF(tmp_value); - Py_XDECREF(tmp_tb); -#else - PyErr_SetExcInfo(local_type, local_value, local_tb); -#endif - return 0; -bad: - *type = 0; - *value = 0; - *tb = 0; - Py_XDECREF(local_type); - Py_XDECREF(local_value); - Py_XDECREF(local_tb); - return -1; -} - -/* CalculateMetaclass */ -static PyObject *__Pyx_CalculateMetaclass(PyTypeObject *metaclass, PyObject *bases) { - Py_ssize_t i, nbases = PyTuple_GET_SIZE(bases); - for (i=0; i < nbases; i++) { - PyTypeObject *tmptype; - PyObject *tmp = PyTuple_GET_ITEM(bases, i); - tmptype = Py_TYPE(tmp); -#if PY_MAJOR_VERSION < 3 - if (tmptype == &PyClass_Type) - continue; -#endif - if (!metaclass) { - metaclass = tmptype; - continue; - } - if (PyType_IsSubtype(metaclass, tmptype)) - continue; - if (PyType_IsSubtype(tmptype, metaclass)) { - metaclass = tmptype; - continue; - } - PyErr_SetString(PyExc_TypeError, - "metaclass conflict: " - "the metaclass of a derived class " - "must be a (non-strict) subclass " - "of the metaclasses of all its bases"); - return NULL; - } - if (!metaclass) { -#if PY_MAJOR_VERSION < 3 - metaclass = &PyClass_Type; -#else - metaclass = &PyType_Type; -#endif - } - Py_INCREF((PyObject*) metaclass); - return (PyObject*) metaclass; -} - -/* FetchCommonType */ -static PyTypeObject* __Pyx_FetchCommonType(PyTypeObject* type) { - PyObject* fake_module; - PyTypeObject* cached_type = NULL; - fake_module = PyImport_AddModule((char*) "_cython_" CYTHON_ABI); - if (!fake_module) return NULL; - Py_INCREF(fake_module); - cached_type = (PyTypeObject*) PyObject_GetAttrString(fake_module, type->tp_name); - if (cached_type) { - if (!PyType_Check((PyObject*)cached_type)) { - PyErr_Format(PyExc_TypeError, - "Shared Cython type %.200s is not a type object", - type->tp_name); - goto bad; - } - if (cached_type->tp_basicsize != type->tp_basicsize) { - PyErr_Format(PyExc_TypeError, - "Shared Cython type %.200s has the wrong size, try recompiling", - type->tp_name); - goto bad; - } - } else { - if (!PyErr_ExceptionMatches(PyExc_AttributeError)) goto bad; - PyErr_Clear(); - if (PyType_Ready(type) < 0) goto bad; - if 
(PyObject_SetAttrString(fake_module, type->tp_name, (PyObject*) type) < 0) - goto bad; - Py_INCREF(type); - cached_type = type; - } -done: - Py_DECREF(fake_module); - return cached_type; -bad: - Py_XDECREF(cached_type); - cached_type = NULL; - goto done; -} - -/* CythonFunctionShared */ -#include <structmember.h> -static PyObject * -__Pyx_CyFunction_get_doc(__pyx_CyFunctionObject *op, CYTHON_UNUSED void *closure) -{ - if (unlikely(op->func_doc == NULL)) { - if (op->func.m_ml->ml_doc) { -#if PY_MAJOR_VERSION >= 3 - op->func_doc = PyUnicode_FromString(op->func.m_ml->ml_doc); -#else - op->func_doc = PyString_FromString(op->func.m_ml->ml_doc); -#endif - if (unlikely(op->func_doc == NULL)) - return NULL; - } else { - Py_INCREF(Py_None); - return Py_None; - } - } - Py_INCREF(op->func_doc); - return op->func_doc; -} -static int -__Pyx_CyFunction_set_doc(__pyx_CyFunctionObject *op, PyObject *value, CYTHON_UNUSED void *context) -{ - PyObject *tmp = op->func_doc; - if (value == NULL) { - value = Py_None; - } - Py_INCREF(value); - op->func_doc = value; - Py_XDECREF(tmp); - return 0; -} -static PyObject * -__Pyx_CyFunction_get_name(__pyx_CyFunctionObject *op, CYTHON_UNUSED void *context) -{ - if (unlikely(op->func_name == NULL)) { -#if PY_MAJOR_VERSION >= 3 - op->func_name = PyUnicode_InternFromString(op->func.m_ml->ml_name); -#else - op->func_name = PyString_InternFromString(op->func.m_ml->ml_name); -#endif - if (unlikely(op->func_name == NULL)) - return NULL; - } - Py_INCREF(op->func_name); - return op->func_name; -} -static int -__Pyx_CyFunction_set_name(__pyx_CyFunctionObject *op, PyObject *value, CYTHON_UNUSED void *context) -{ - PyObject *tmp; -#if PY_MAJOR_VERSION >= 3 - if (unlikely(value == NULL || !PyUnicode_Check(value))) -#else - if (unlikely(value == NULL || !PyString_Check(value))) -#endif - { - PyErr_SetString(PyExc_TypeError, - "__name__ must be set to a string object"); - return -1; - } - tmp = op->func_name; - Py_INCREF(value); - op->func_name = value; - Py_XDECREF(tmp); - return 0; -} -static PyObject * -__Pyx_CyFunction_get_qualname(__pyx_CyFunctionObject *op, CYTHON_UNUSED void *context) -{ - Py_INCREF(op->func_qualname); - return op->func_qualname; -} -static int -__Pyx_CyFunction_set_qualname(__pyx_CyFunctionObject *op, PyObject *value, CYTHON_UNUSED void *context) -{ - PyObject *tmp; -#if PY_MAJOR_VERSION >= 3 - if (unlikely(value == NULL || !PyUnicode_Check(value))) -#else - if (unlikely(value == NULL || !PyString_Check(value))) -#endif - { - PyErr_SetString(PyExc_TypeError, - "__qualname__ must be set to a string object"); - return -1; - } - tmp = op->func_qualname; - Py_INCREF(value); - op->func_qualname = value; - Py_XDECREF(tmp); - return 0; -} -static PyObject * -__Pyx_CyFunction_get_self(__pyx_CyFunctionObject *m, CYTHON_UNUSED void *closure) -{ - PyObject *self; - self = m->func_closure; - if (self == NULL) - self = Py_None; - Py_INCREF(self); - return self; -} -static PyObject * -__Pyx_CyFunction_get_dict(__pyx_CyFunctionObject *op, CYTHON_UNUSED void *context) -{ - if (unlikely(op->func_dict == NULL)) { - op->func_dict = PyDict_New(); - if (unlikely(op->func_dict == NULL)) - return NULL; - } - Py_INCREF(op->func_dict); - return op->func_dict; -} -static int -__Pyx_CyFunction_set_dict(__pyx_CyFunctionObject *op, PyObject *value, CYTHON_UNUSED void *context) -{ - PyObject *tmp; - if (unlikely(value == NULL)) { - PyErr_SetString(PyExc_TypeError, - "function's dictionary may not be deleted"); - return -1; - } - if (unlikely(!PyDict_Check(value))) { - PyErr_SetString(PyExc_TypeError, - 
"setting function's dictionary to a non-dict"); - return -1; - } - tmp = op->func_dict; - Py_INCREF(value); - op->func_dict = value; - Py_XDECREF(tmp); - return 0; -} -static PyObject * -__Pyx_CyFunction_get_globals(__pyx_CyFunctionObject *op, CYTHON_UNUSED void *context) -{ - Py_INCREF(op->func_globals); - return op->func_globals; -} -static PyObject * -__Pyx_CyFunction_get_closure(CYTHON_UNUSED __pyx_CyFunctionObject *op, CYTHON_UNUSED void *context) -{ - Py_INCREF(Py_None); - return Py_None; -} -static PyObject * -__Pyx_CyFunction_get_code(__pyx_CyFunctionObject *op, CYTHON_UNUSED void *context) -{ - PyObject* result = (op->func_code) ? op->func_code : Py_None; - Py_INCREF(result); - return result; -} -static int -__Pyx_CyFunction_init_defaults(__pyx_CyFunctionObject *op) { - int result = 0; - PyObject *res = op->defaults_getter((PyObject *) op); - if (unlikely(!res)) - return -1; - #if CYTHON_ASSUME_SAFE_MACROS && !CYTHON_AVOID_BORROWED_REFS - op->defaults_tuple = PyTuple_GET_ITEM(res, 0); - Py_INCREF(op->defaults_tuple); - op->defaults_kwdict = PyTuple_GET_ITEM(res, 1); - Py_INCREF(op->defaults_kwdict); - #else - op->defaults_tuple = PySequence_ITEM(res, 0); - if (unlikely(!op->defaults_tuple)) result = -1; - else { - op->defaults_kwdict = PySequence_ITEM(res, 1); - if (unlikely(!op->defaults_kwdict)) result = -1; - } - #endif - Py_DECREF(res); - return result; -} -static int -__Pyx_CyFunction_set_defaults(__pyx_CyFunctionObject *op, PyObject* value, CYTHON_UNUSED void *context) { - PyObject* tmp; - if (!value) { - value = Py_None; - } else if (value != Py_None && !PyTuple_Check(value)) { - PyErr_SetString(PyExc_TypeError, - "__defaults__ must be set to a tuple object"); - return -1; - } - Py_INCREF(value); - tmp = op->defaults_tuple; - op->defaults_tuple = value; - Py_XDECREF(tmp); - return 0; -} -static PyObject * -__Pyx_CyFunction_get_defaults(__pyx_CyFunctionObject *op, CYTHON_UNUSED void *context) { - PyObject* result = op->defaults_tuple; - if (unlikely(!result)) { - if (op->defaults_getter) { - if (__Pyx_CyFunction_init_defaults(op) < 0) return NULL; - result = op->defaults_tuple; - } else { - result = Py_None; - } - } - Py_INCREF(result); - return result; -} -static int -__Pyx_CyFunction_set_kwdefaults(__pyx_CyFunctionObject *op, PyObject* value, CYTHON_UNUSED void *context) { - PyObject* tmp; - if (!value) { - value = Py_None; - } else if (value != Py_None && !PyDict_Check(value)) { - PyErr_SetString(PyExc_TypeError, - "__kwdefaults__ must be set to a dict object"); - return -1; - } - Py_INCREF(value); - tmp = op->defaults_kwdict; - op->defaults_kwdict = value; - Py_XDECREF(tmp); - return 0; -} -static PyObject * -__Pyx_CyFunction_get_kwdefaults(__pyx_CyFunctionObject *op, CYTHON_UNUSED void *context) { - PyObject* result = op->defaults_kwdict; - if (unlikely(!result)) { - if (op->defaults_getter) { - if (__Pyx_CyFunction_init_defaults(op) < 0) return NULL; - result = op->defaults_kwdict; - } else { - result = Py_None; - } - } - Py_INCREF(result); - return result; -} -static int -__Pyx_CyFunction_set_annotations(__pyx_CyFunctionObject *op, PyObject* value, CYTHON_UNUSED void *context) { - PyObject* tmp; - if (!value || value == Py_None) { - value = NULL; - } else if (!PyDict_Check(value)) { - PyErr_SetString(PyExc_TypeError, - "__annotations__ must be set to a dict object"); - return -1; - } - Py_XINCREF(value); - tmp = op->func_annotations; - op->func_annotations = value; - Py_XDECREF(tmp); - return 0; -} -static PyObject * 
-__Pyx_CyFunction_get_annotations(__pyx_CyFunctionObject *op, CYTHON_UNUSED void *context) { - PyObject* result = op->func_annotations; - if (unlikely(!result)) { - result = PyDict_New(); - if (unlikely(!result)) return NULL; - op->func_annotations = result; - } - Py_INCREF(result); - return result; -} -static PyGetSetDef __pyx_CyFunction_getsets[] = { - {(char *) "func_doc", (getter)__Pyx_CyFunction_get_doc, (setter)__Pyx_CyFunction_set_doc, 0, 0}, - {(char *) "__doc__", (getter)__Pyx_CyFunction_get_doc, (setter)__Pyx_CyFunction_set_doc, 0, 0}, - {(char *) "func_name", (getter)__Pyx_CyFunction_get_name, (setter)__Pyx_CyFunction_set_name, 0, 0}, - {(char *) "__name__", (getter)__Pyx_CyFunction_get_name, (setter)__Pyx_CyFunction_set_name, 0, 0}, - {(char *) "__qualname__", (getter)__Pyx_CyFunction_get_qualname, (setter)__Pyx_CyFunction_set_qualname, 0, 0}, - {(char *) "__self__", (getter)__Pyx_CyFunction_get_self, 0, 0, 0}, - {(char *) "func_dict", (getter)__Pyx_CyFunction_get_dict, (setter)__Pyx_CyFunction_set_dict, 0, 0}, - {(char *) "__dict__", (getter)__Pyx_CyFunction_get_dict, (setter)__Pyx_CyFunction_set_dict, 0, 0}, - {(char *) "func_globals", (getter)__Pyx_CyFunction_get_globals, 0, 0, 0}, - {(char *) "__globals__", (getter)__Pyx_CyFunction_get_globals, 0, 0, 0}, - {(char *) "func_closure", (getter)__Pyx_CyFunction_get_closure, 0, 0, 0}, - {(char *) "__closure__", (getter)__Pyx_CyFunction_get_closure, 0, 0, 0}, - {(char *) "func_code", (getter)__Pyx_CyFunction_get_code, 0, 0, 0}, - {(char *) "__code__", (getter)__Pyx_CyFunction_get_code, 0, 0, 0}, - {(char *) "func_defaults", (getter)__Pyx_CyFunction_get_defaults, (setter)__Pyx_CyFunction_set_defaults, 0, 0}, - {(char *) "__defaults__", (getter)__Pyx_CyFunction_get_defaults, (setter)__Pyx_CyFunction_set_defaults, 0, 0}, - {(char *) "__kwdefaults__", (getter)__Pyx_CyFunction_get_kwdefaults, (setter)__Pyx_CyFunction_set_kwdefaults, 0, 0}, - {(char *) "__annotations__", (getter)__Pyx_CyFunction_get_annotations, (setter)__Pyx_CyFunction_set_annotations, 0, 0}, - {0, 0, 0, 0, 0} -}; -static PyMemberDef __pyx_CyFunction_members[] = { - {(char *) "__module__", T_OBJECT, offsetof(PyCFunctionObject, m_module), PY_WRITE_RESTRICTED, 0}, - {0, 0, 0, 0, 0} -}; -static PyObject * -__Pyx_CyFunction_reduce(__pyx_CyFunctionObject *m, CYTHON_UNUSED PyObject *args) -{ -#if PY_MAJOR_VERSION >= 3 - Py_INCREF(m->func_qualname); - return m->func_qualname; -#else - return PyString_FromString(m->func.m_ml->ml_name); -#endif -} -static PyMethodDef __pyx_CyFunction_methods[] = { - {"__reduce__", (PyCFunction)__Pyx_CyFunction_reduce, METH_VARARGS, 0}, - {0, 0, 0, 0} -}; -#if PY_VERSION_HEX < 0x030500A0 -#define __Pyx_CyFunction_weakreflist(cyfunc) ((cyfunc)->func_weakreflist) -#else -#define __Pyx_CyFunction_weakreflist(cyfunc) ((cyfunc)->func.m_weakreflist) -#endif -static PyObject *__Pyx_CyFunction_Init(__pyx_CyFunctionObject *op, PyMethodDef *ml, int flags, PyObject* qualname, - PyObject *closure, PyObject *module, PyObject* globals, PyObject* code) { - if (unlikely(op == NULL)) - return NULL; - op->flags = flags; - __Pyx_CyFunction_weakreflist(op) = NULL; - op->func.m_ml = ml; - op->func.m_self = (PyObject *) op; - Py_XINCREF(closure); - op->func_closure = closure; - Py_XINCREF(module); - op->func.m_module = module; - op->func_dict = NULL; - op->func_name = NULL; - Py_INCREF(qualname); - op->func_qualname = qualname; - op->func_doc = NULL; - op->func_classobj = NULL; - op->func_globals = globals; - Py_INCREF(op->func_globals); - Py_XINCREF(code); - 
op->func_code = code; - op->defaults_pyobjects = 0; - op->defaults_size = 0; - op->defaults = NULL; - op->defaults_tuple = NULL; - op->defaults_kwdict = NULL; - op->defaults_getter = NULL; - op->func_annotations = NULL; - return (PyObject *) op; -} -static int -__Pyx_CyFunction_clear(__pyx_CyFunctionObject *m) -{ - Py_CLEAR(m->func_closure); - Py_CLEAR(m->func.m_module); - Py_CLEAR(m->func_dict); - Py_CLEAR(m->func_name); - Py_CLEAR(m->func_qualname); - Py_CLEAR(m->func_doc); - Py_CLEAR(m->func_globals); - Py_CLEAR(m->func_code); - Py_CLEAR(m->func_classobj); - Py_CLEAR(m->defaults_tuple); - Py_CLEAR(m->defaults_kwdict); - Py_CLEAR(m->func_annotations); - if (m->defaults) { - PyObject **pydefaults = __Pyx_CyFunction_Defaults(PyObject *, m); - int i; - for (i = 0; i < m->defaults_pyobjects; i++) - Py_XDECREF(pydefaults[i]); - PyObject_Free(m->defaults); - m->defaults = NULL; - } - return 0; -} -static void __Pyx__CyFunction_dealloc(__pyx_CyFunctionObject *m) -{ - if (__Pyx_CyFunction_weakreflist(m) != NULL) - PyObject_ClearWeakRefs((PyObject *) m); - __Pyx_CyFunction_clear(m); - PyObject_GC_Del(m); -} -static void __Pyx_CyFunction_dealloc(__pyx_CyFunctionObject *m) -{ - PyObject_GC_UnTrack(m); - __Pyx__CyFunction_dealloc(m); -} -static int __Pyx_CyFunction_traverse(__pyx_CyFunctionObject *m, visitproc visit, void *arg) -{ - Py_VISIT(m->func_closure); - Py_VISIT(m->func.m_module); - Py_VISIT(m->func_dict); - Py_VISIT(m->func_name); - Py_VISIT(m->func_qualname); - Py_VISIT(m->func_doc); - Py_VISIT(m->func_globals); - Py_VISIT(m->func_code); - Py_VISIT(m->func_classobj); - Py_VISIT(m->defaults_tuple); - Py_VISIT(m->defaults_kwdict); - if (m->defaults) { - PyObject **pydefaults = __Pyx_CyFunction_Defaults(PyObject *, m); - int i; - for (i = 0; i < m->defaults_pyobjects; i++) - Py_VISIT(pydefaults[i]); - } - return 0; -} -static PyObject *__Pyx_CyFunction_descr_get(PyObject *func, PyObject *obj, PyObject *type) -{ -#if PY_MAJOR_VERSION < 3 - __pyx_CyFunctionObject *m = (__pyx_CyFunctionObject *) func; - if (m->flags & __Pyx_CYFUNCTION_STATICMETHOD) { - Py_INCREF(func); - return func; - } - if (m->flags & __Pyx_CYFUNCTION_CLASSMETHOD) { - if (type == NULL) - type = (PyObject *)(Py_TYPE(obj)); - return __Pyx_PyMethod_New(func, type, (PyObject *)(Py_TYPE(type))); - } - if (obj == Py_None) - obj = NULL; -#endif - return __Pyx_PyMethod_New(func, obj, type); -} -static PyObject* -__Pyx_CyFunction_repr(__pyx_CyFunctionObject *op) -{ -#if PY_MAJOR_VERSION >= 3 - return PyUnicode_FromFormat("<cyfunction %U at %p>", - op->func_qualname, (void *)op); -#else - return PyString_FromFormat("<cyfunction %s at %p>", - PyString_AsString(op->func_qualname), (void *)op); -#endif -} -static PyObject * __Pyx_CyFunction_CallMethod(PyObject *func, PyObject *self, PyObject *arg, PyObject *kw) { - PyCFunctionObject* f = (PyCFunctionObject*)func; - PyCFunction meth = f->m_ml->ml_meth; - Py_ssize_t size; - switch (f->m_ml->ml_flags & (METH_VARARGS | METH_KEYWORDS | METH_NOARGS | METH_O)) { - case METH_VARARGS: - if (likely(kw == NULL || PyDict_Size(kw) == 0)) - return (*meth)(self, arg); - break; - case METH_VARARGS | METH_KEYWORDS: - return (*(PyCFunctionWithKeywords)(void*)meth)(self, arg, kw); - case METH_NOARGS: - if (likely(kw == NULL || PyDict_Size(kw) == 0)) { - size = PyTuple_GET_SIZE(arg); - if (likely(size == 0)) - return (*meth)(self, NULL); - PyErr_Format(PyExc_TypeError, - "%.200s() takes no arguments (%" CYTHON_FORMAT_SSIZE_T "d given)", - f->m_ml->ml_name, size); - return NULL; - } - break; - case METH_O: - if (likely(kw == NULL || PyDict_Size(kw) 
== 0)) { - size = PyTuple_GET_SIZE(arg); - if (likely(size == 1)) { - PyObject *result, *arg0; - #if CYTHON_ASSUME_SAFE_MACROS && !CYTHON_AVOID_BORROWED_REFS - arg0 = PyTuple_GET_ITEM(arg, 0); - #else - arg0 = PySequence_ITEM(arg, 0); if (unlikely(!arg0)) return NULL; - #endif - result = (*meth)(self, arg0); - #if !(CYTHON_ASSUME_SAFE_MACROS && !CYTHON_AVOID_BORROWED_REFS) - Py_DECREF(arg0); - #endif - return result; - } - PyErr_Format(PyExc_TypeError, - "%.200s() takes exactly one argument (%" CYTHON_FORMAT_SSIZE_T "d given)", - f->m_ml->ml_name, size); - return NULL; - } - break; - default: - PyErr_SetString(PyExc_SystemError, "Bad call flags in " - "__Pyx_CyFunction_Call. METH_OLDARGS is no " - "longer supported!"); - return NULL; - } - PyErr_Format(PyExc_TypeError, "%.200s() takes no keyword arguments", - f->m_ml->ml_name); - return NULL; -} -static CYTHON_INLINE PyObject *__Pyx_CyFunction_Call(PyObject *func, PyObject *arg, PyObject *kw) { - return __Pyx_CyFunction_CallMethod(func, ((PyCFunctionObject*)func)->m_self, arg, kw); -} -static PyObject *__Pyx_CyFunction_CallAsMethod(PyObject *func, PyObject *args, PyObject *kw) { - PyObject *result; - __pyx_CyFunctionObject *cyfunc = (__pyx_CyFunctionObject *) func; - if ((cyfunc->flags & __Pyx_CYFUNCTION_CCLASS) && !(cyfunc->flags & __Pyx_CYFUNCTION_STATICMETHOD)) { - Py_ssize_t argc; - PyObject *new_args; - PyObject *self; - argc = PyTuple_GET_SIZE(args); - new_args = PyTuple_GetSlice(args, 1, argc); - if (unlikely(!new_args)) - return NULL; - self = PyTuple_GetItem(args, 0); - if (unlikely(!self)) { - Py_DECREF(new_args); -#if PY_MAJOR_VERSION > 2 - PyErr_Format(PyExc_TypeError, - "unbound method %.200S() needs an argument", - cyfunc->func_qualname); -#else - PyErr_SetString(PyExc_TypeError, - "unbound method needs an argument"); -#endif - return NULL; - } - result = __Pyx_CyFunction_CallMethod(func, self, new_args, kw); - Py_DECREF(new_args); - } else { - result = __Pyx_CyFunction_Call(func, args, kw); - } - return result; -} -static PyTypeObject __pyx_CyFunctionType_type = { - PyVarObject_HEAD_INIT(0, 0) - "cython_function_or_method", - sizeof(__pyx_CyFunctionObject), - 0, - (destructor) __Pyx_CyFunction_dealloc, - 0, - 0, - 0, -#if PY_MAJOR_VERSION < 3 - 0, -#else - 0, -#endif - (reprfunc) __Pyx_CyFunction_repr, - 0, - 0, - 0, - 0, - __Pyx_CyFunction_CallAsMethod, - 0, - 0, - 0, - 0, - Py_TPFLAGS_DEFAULT | Py_TPFLAGS_HAVE_GC, - 0, - (traverseproc) __Pyx_CyFunction_traverse, - (inquiry) __Pyx_CyFunction_clear, - 0, -#if PY_VERSION_HEX < 0x030500A0 - offsetof(__pyx_CyFunctionObject, func_weakreflist), -#else - offsetof(PyCFunctionObject, m_weakreflist), -#endif - 0, - 0, - __pyx_CyFunction_methods, - __pyx_CyFunction_members, - __pyx_CyFunction_getsets, - 0, - 0, - __Pyx_CyFunction_descr_get, - 0, - offsetof(__pyx_CyFunctionObject, func_dict), - 0, - 0, - 0, - 0, - 0, - 0, - 0, - 0, - 0, - 0, - 0, - 0, -#if PY_VERSION_HEX >= 0x030400a1 - 0, -#endif -#if PY_VERSION_HEX >= 0x030800b1 && (!CYTHON_COMPILING_IN_PYPY || PYPY_VERSION_NUM >= 0x07030800) - 0, -#endif -#if PY_VERSION_HEX >= 0x030800b4 && PY_VERSION_HEX < 0x03090000 - 0, -#endif -#if PY_VERSION_HEX >= 0x030C0000 - 0, -#endif -#if CYTHON_COMPILING_IN_PYPY && PY_VERSION_HEX >= 0x03090000 && PY_VERSION_HEX < 0x030a0000 - 0, -#endif -}; -static int __pyx_CyFunction_init(void) { - __pyx_CyFunctionType = __Pyx_FetchCommonType(&__pyx_CyFunctionType_type); - if (unlikely(__pyx_CyFunctionType == NULL)) { - return -1; - } - return 0; -} -static CYTHON_INLINE void 
*__Pyx_CyFunction_InitDefaults(PyObject *func, size_t size, int pyobjects) { - __pyx_CyFunctionObject *m = (__pyx_CyFunctionObject *) func; - m->defaults = PyObject_Malloc(size); - if (unlikely(!m->defaults)) - return PyErr_NoMemory(); - memset(m->defaults, 0, size); - m->defaults_pyobjects = pyobjects; - m->defaults_size = size; - return m->defaults; -} -static CYTHON_INLINE void __Pyx_CyFunction_SetDefaultsTuple(PyObject *func, PyObject *tuple) { - __pyx_CyFunctionObject *m = (__pyx_CyFunctionObject *) func; - m->defaults_tuple = tuple; - Py_INCREF(tuple); -} -static CYTHON_INLINE void __Pyx_CyFunction_SetDefaultsKwDict(PyObject *func, PyObject *dict) { - __pyx_CyFunctionObject *m = (__pyx_CyFunctionObject *) func; - m->defaults_kwdict = dict; - Py_INCREF(dict); -} -static CYTHON_INLINE void __Pyx_CyFunction_SetAnnotationsDict(PyObject *func, PyObject *dict) { - __pyx_CyFunctionObject *m = (__pyx_CyFunctionObject *) func; - m->func_annotations = dict; - Py_INCREF(dict); -} - -/* CythonFunction */ -static PyObject *__Pyx_CyFunction_New(PyMethodDef *ml, int flags, PyObject* qualname, - PyObject *closure, PyObject *module, PyObject* globals, PyObject* code) { - PyObject *op = __Pyx_CyFunction_Init( - PyObject_GC_New(__pyx_CyFunctionObject, __pyx_CyFunctionType), - ml, flags, qualname, closure, module, globals, code - ); - if (likely(op)) { - PyObject_GC_Track(op); - } - return op; -} - -/* Py3ClassCreate */ -static PyObject *__Pyx_Py3MetaclassPrepare(PyObject *metaclass, PyObject *bases, PyObject *name, - PyObject *qualname, PyObject *mkw, PyObject *modname, PyObject *doc) { - PyObject *ns; - if (metaclass) { - PyObject *prep = __Pyx_PyObject_GetAttrStr(metaclass, __pyx_n_s_prepare); - if (prep) { - PyObject *pargs = PyTuple_Pack(2, name, bases); - if (unlikely(!pargs)) { - Py_DECREF(prep); - return NULL; - } - ns = PyObject_Call(prep, pargs, mkw); - Py_DECREF(prep); - Py_DECREF(pargs); - } else { - if (unlikely(!PyErr_ExceptionMatches(PyExc_AttributeError))) - return NULL; - PyErr_Clear(); - ns = PyDict_New(); - } - } else { - ns = PyDict_New(); - } - if (unlikely(!ns)) - return NULL; - if (unlikely(PyObject_SetItem(ns, __pyx_n_s_module, modname) < 0)) goto bad; - if (unlikely(PyObject_SetItem(ns, __pyx_n_s_qualname, qualname) < 0)) goto bad; - if (unlikely(doc && PyObject_SetItem(ns, __pyx_n_s_doc, doc) < 0)) goto bad; - return ns; -bad: - Py_DECREF(ns); - return NULL; -} -static PyObject *__Pyx_Py3ClassCreate(PyObject *metaclass, PyObject *name, PyObject *bases, - PyObject *dict, PyObject *mkw, - int calculate_metaclass, int allow_py2_metaclass) { - PyObject *result, *margs; - PyObject *owned_metaclass = NULL; - if (allow_py2_metaclass) { - owned_metaclass = PyObject_GetItem(dict, __pyx_n_s_metaclass); - if (owned_metaclass) { - metaclass = owned_metaclass; - } else if (likely(PyErr_ExceptionMatches(PyExc_KeyError))) { - PyErr_Clear(); - } else { - return NULL; - } - } - if (calculate_metaclass && (!metaclass || PyType_Check(metaclass))) { - metaclass = __Pyx_CalculateMetaclass((PyTypeObject*) metaclass, bases); - Py_XDECREF(owned_metaclass); - if (unlikely(!metaclass)) - return NULL; - owned_metaclass = metaclass; - } - margs = PyTuple_Pack(3, name, bases, dict); - if (unlikely(!margs)) { - result = NULL; - } else { - result = PyObject_Call(metaclass, margs, mkw); - Py_DECREF(margs); - } - Py_XDECREF(owned_metaclass); - return result; -} - -/* BytesEquals */ -static CYTHON_INLINE int __Pyx_PyBytes_Equals(PyObject* s1, PyObject* s2, int equals) { -#if CYTHON_COMPILING_IN_PYPY - return 
PyObject_RichCompareBool(s1, s2, equals); -#else - if (s1 == s2) { - return (equals == Py_EQ); - } else if (PyBytes_CheckExact(s1) & PyBytes_CheckExact(s2)) { - const char *ps1, *ps2; - Py_ssize_t length = PyBytes_GET_SIZE(s1); - if (length != PyBytes_GET_SIZE(s2)) - return (equals == Py_NE); - ps1 = PyBytes_AS_STRING(s1); - ps2 = PyBytes_AS_STRING(s2); - if (ps1[0] != ps2[0]) { - return (equals == Py_NE); - } else if (length == 1) { - return (equals == Py_EQ); - } else { - int result; -#if CYTHON_USE_UNICODE_INTERNALS && (PY_VERSION_HEX < 0x030B0000) - Py_hash_t hash1, hash2; - hash1 = ((PyBytesObject*)s1)->ob_shash; - hash2 = ((PyBytesObject*)s2)->ob_shash; - if (hash1 != hash2 && hash1 != -1 && hash2 != -1) { - return (equals == Py_NE); - } -#endif - result = memcmp(ps1, ps2, (size_t)length); - return (equals == Py_EQ) ? (result == 0) : (result != 0); - } - } else if ((s1 == Py_None) & PyBytes_CheckExact(s2)) { - return (equals == Py_NE); - } else if ((s2 == Py_None) & PyBytes_CheckExact(s1)) { - return (equals == Py_NE); - } else { - int result; - PyObject* py_result = PyObject_RichCompare(s1, s2, equals); - if (!py_result) - return -1; - result = __Pyx_PyObject_IsTrue(py_result); - Py_DECREF(py_result); - return result; - } -#endif -} - -/* UnicodeEquals */ -static CYTHON_INLINE int __Pyx_PyUnicode_Equals(PyObject* s1, PyObject* s2, int equals) { -#if CYTHON_COMPILING_IN_PYPY - return PyObject_RichCompareBool(s1, s2, equals); -#else -#if PY_MAJOR_VERSION < 3 - PyObject* owned_ref = NULL; -#endif - int s1_is_unicode, s2_is_unicode; - if (s1 == s2) { - goto return_eq; - } - s1_is_unicode = PyUnicode_CheckExact(s1); - s2_is_unicode = PyUnicode_CheckExact(s2); -#if PY_MAJOR_VERSION < 3 - if ((s1_is_unicode & (!s2_is_unicode)) && PyString_CheckExact(s2)) { - owned_ref = PyUnicode_FromObject(s2); - if (unlikely(!owned_ref)) - return -1; - s2 = owned_ref; - s2_is_unicode = 1; - } else if ((s2_is_unicode & (!s1_is_unicode)) && PyString_CheckExact(s1)) { - owned_ref = PyUnicode_FromObject(s1); - if (unlikely(!owned_ref)) - return -1; - s1 = owned_ref; - s1_is_unicode = 1; - } else if (((!s2_is_unicode) & (!s1_is_unicode))) { - return __Pyx_PyBytes_Equals(s1, s2, equals); - } -#endif - if (s1_is_unicode & s2_is_unicode) { - Py_ssize_t length; - int kind; - void *data1, *data2; - if (unlikely(__Pyx_PyUnicode_READY(s1) < 0) || unlikely(__Pyx_PyUnicode_READY(s2) < 0)) - return -1; - length = __Pyx_PyUnicode_GET_LENGTH(s1); - if (length != __Pyx_PyUnicode_GET_LENGTH(s2)) { - goto return_ne; - } -#if CYTHON_USE_UNICODE_INTERNALS - { - Py_hash_t hash1, hash2; - #if CYTHON_PEP393_ENABLED - hash1 = ((PyASCIIObject*)s1)->hash; - hash2 = ((PyASCIIObject*)s2)->hash; - #else - hash1 = ((PyUnicodeObject*)s1)->hash; - hash2 = ((PyUnicodeObject*)s2)->hash; - #endif - if (hash1 != hash2 && hash1 != -1 && hash2 != -1) { - goto return_ne; - } - } -#endif - kind = __Pyx_PyUnicode_KIND(s1); - if (kind != __Pyx_PyUnicode_KIND(s2)) { - goto return_ne; - } - data1 = __Pyx_PyUnicode_DATA(s1); - data2 = __Pyx_PyUnicode_DATA(s2); - if (__Pyx_PyUnicode_READ(kind, data1, 0) != __Pyx_PyUnicode_READ(kind, data2, 0)) { - goto return_ne; - } else if (length == 1) { - goto return_eq; - } else { - int result = memcmp(data1, data2, (size_t)(length * kind)); - #if PY_MAJOR_VERSION < 3 - Py_XDECREF(owned_ref); - #endif - return (equals == Py_EQ) ? 
(result == 0) : (result != 0); - } - } else if ((s1 == Py_None) & s2_is_unicode) { - goto return_ne; - } else if ((s2 == Py_None) & s1_is_unicode) { - goto return_ne; - } else { - int result; - PyObject* py_result = PyObject_RichCompare(s1, s2, equals); - #if PY_MAJOR_VERSION < 3 - Py_XDECREF(owned_ref); - #endif - if (!py_result) - return -1; - result = __Pyx_PyObject_IsTrue(py_result); - Py_DECREF(py_result); - return result; - } -return_eq: - #if PY_MAJOR_VERSION < 3 - Py_XDECREF(owned_ref); - #endif - return (equals == Py_EQ); -return_ne: - #if PY_MAJOR_VERSION < 3 - Py_XDECREF(owned_ref); - #endif - return (equals == Py_NE); -#endif -} - -/* CLineInTraceback */ -#ifndef CYTHON_CLINE_IN_TRACEBACK -static int __Pyx_CLineForTraceback(CYTHON_UNUSED PyThreadState *tstate, int c_line) { - PyObject *use_cline; - PyObject *ptype, *pvalue, *ptraceback; -#if CYTHON_COMPILING_IN_CPYTHON - PyObject **cython_runtime_dict; -#endif - if (unlikely(!__pyx_cython_runtime)) { - return c_line; - } - __Pyx_ErrFetchInState(tstate, &ptype, &pvalue, &ptraceback); -#if CYTHON_COMPILING_IN_CPYTHON - cython_runtime_dict = _PyObject_GetDictPtr(__pyx_cython_runtime); - if (likely(cython_runtime_dict)) { - __PYX_PY_DICT_LOOKUP_IF_MODIFIED( - use_cline, *cython_runtime_dict, - __Pyx_PyDict_GetItemStr(*cython_runtime_dict, __pyx_n_s_cline_in_traceback)) - } else -#endif - { - PyObject *use_cline_obj = __Pyx_PyObject_GetAttrStr(__pyx_cython_runtime, __pyx_n_s_cline_in_traceback); - if (use_cline_obj) { - use_cline = PyObject_Not(use_cline_obj) ? Py_False : Py_True; - Py_DECREF(use_cline_obj); - } else { - PyErr_Clear(); - use_cline = NULL; - } - } - if (!use_cline) { - c_line = 0; - (void) PyObject_SetAttr(__pyx_cython_runtime, __pyx_n_s_cline_in_traceback, Py_False); - } - else if (use_cline == Py_False || (use_cline != Py_True && PyObject_Not(use_cline) != 0)) { - c_line = 0; - } - __Pyx_ErrRestoreInState(tstate, ptype, pvalue, ptraceback); - return c_line; -} -#endif - -/* CodeObjectCache */ -static int __pyx_bisect_code_objects(__Pyx_CodeObjectCacheEntry* entries, int count, int code_line) { - int start = 0, mid = 0, end = count - 1; - if (end >= 0 && code_line > entries[end].code_line) { - return count; - } - while (start < end) { - mid = start + (end - start) / 2; - if (code_line < entries[mid].code_line) { - end = mid; - } else if (code_line > entries[mid].code_line) { - start = mid + 1; - } else { - return mid; - } - } - if (code_line <= entries[mid].code_line) { - return mid; - } else { - return mid + 1; - } -} -static PyCodeObject *__pyx_find_code_object(int code_line) { - PyCodeObject* code_object; - int pos; - if (unlikely(!code_line) || unlikely(!__pyx_code_cache.entries)) { - return NULL; - } - pos = __pyx_bisect_code_objects(__pyx_code_cache.entries, __pyx_code_cache.count, code_line); - if (unlikely(pos >= __pyx_code_cache.count) || unlikely(__pyx_code_cache.entries[pos].code_line != code_line)) { - return NULL; - } - code_object = __pyx_code_cache.entries[pos].code_object; - Py_INCREF(code_object); - return code_object; -} -static void __pyx_insert_code_object(int code_line, PyCodeObject* code_object) { - int pos, i; - __Pyx_CodeObjectCacheEntry* entries = __pyx_code_cache.entries; - if (unlikely(!code_line)) { - return; - } - if (unlikely(!entries)) { - entries = (__Pyx_CodeObjectCacheEntry*)PyMem_Malloc(64*sizeof(__Pyx_CodeObjectCacheEntry)); - if (likely(entries)) { - __pyx_code_cache.entries = entries; - __pyx_code_cache.max_count = 64; - __pyx_code_cache.count = 1; - entries[0].code_line = 
code_line; - entries[0].code_object = code_object; - Py_INCREF(code_object); - } - return; - } - pos = __pyx_bisect_code_objects(__pyx_code_cache.entries, __pyx_code_cache.count, code_line); - if ((pos < __pyx_code_cache.count) && unlikely(__pyx_code_cache.entries[pos].code_line == code_line)) { - PyCodeObject* tmp = entries[pos].code_object; - entries[pos].code_object = code_object; - Py_DECREF(tmp); - return; - } - if (__pyx_code_cache.count == __pyx_code_cache.max_count) { - int new_max = __pyx_code_cache.max_count + 64; - entries = (__Pyx_CodeObjectCacheEntry*)PyMem_Realloc( - __pyx_code_cache.entries, ((size_t)new_max) * sizeof(__Pyx_CodeObjectCacheEntry)); - if (unlikely(!entries)) { - return; - } - __pyx_code_cache.entries = entries; - __pyx_code_cache.max_count = new_max; - } - for (i=__pyx_code_cache.count; i>pos; i--) { - entries[i] = entries[i-1]; - } - entries[pos].code_line = code_line; - entries[pos].code_object = code_object; - __pyx_code_cache.count++; - Py_INCREF(code_object); -} - -/* AddTraceback */ -#include "compile.h" -#include "frameobject.h" -#include "traceback.h" -#if PY_VERSION_HEX >= 0x030b00a6 - #ifndef Py_BUILD_CORE - #define Py_BUILD_CORE 1 - #endif - #include "internal/pycore_frame.h" -#endif -static PyCodeObject* __Pyx_CreateCodeObjectForTraceback( - const char *funcname, int c_line, - int py_line, const char *filename) { - PyCodeObject *py_code = NULL; - PyObject *py_funcname = NULL; - #if PY_MAJOR_VERSION < 3 - PyObject *py_srcfile = NULL; - py_srcfile = PyString_FromString(filename); - if (!py_srcfile) goto bad; - #endif - if (c_line) { - #if PY_MAJOR_VERSION < 3 - py_funcname = PyString_FromFormat( "%s (%s:%d)", funcname, __pyx_cfilenm, c_line); - if (!py_funcname) goto bad; - #else - py_funcname = PyUnicode_FromFormat( "%s (%s:%d)", funcname, __pyx_cfilenm, c_line); - if (!py_funcname) goto bad; - funcname = PyUnicode_AsUTF8(py_funcname); - if (!funcname) goto bad; - #endif - } - else { - #if PY_MAJOR_VERSION < 3 - py_funcname = PyString_FromString(funcname); - if (!py_funcname) goto bad; - #endif - } - #if PY_MAJOR_VERSION < 3 - py_code = __Pyx_PyCode_New( - 0, - 0, - 0, - 0, - 0, - __pyx_empty_bytes, /*PyObject *code,*/ - __pyx_empty_tuple, /*PyObject *consts,*/ - __pyx_empty_tuple, /*PyObject *names,*/ - __pyx_empty_tuple, /*PyObject *varnames,*/ - __pyx_empty_tuple, /*PyObject *freevars,*/ - __pyx_empty_tuple, /*PyObject *cellvars,*/ - py_srcfile, /*PyObject *filename,*/ - py_funcname, /*PyObject *name,*/ - py_line, - __pyx_empty_bytes /*PyObject *lnotab*/ - ); - Py_DECREF(py_srcfile); - #else - py_code = PyCode_NewEmpty(filename, funcname, py_line); - #endif - Py_XDECREF(py_funcname); // XDECREF since it's only set on Py3 if cline - return py_code; -bad: - Py_XDECREF(py_funcname); - #if PY_MAJOR_VERSION < 3 - Py_XDECREF(py_srcfile); - #endif - return NULL; -} -static void __Pyx_AddTraceback(const char *funcname, int c_line, - int py_line, const char *filename) { - PyCodeObject *py_code = 0; - PyFrameObject *py_frame = 0; - PyThreadState *tstate = __Pyx_PyThreadState_Current; - PyObject *ptype, *pvalue, *ptraceback; - if (c_line) { - c_line = __Pyx_CLineForTraceback(tstate, c_line); - } - py_code = __pyx_find_code_object(c_line ? 
-c_line : py_line); - if (!py_code) { - __Pyx_ErrFetchInState(tstate, &ptype, &pvalue, &ptraceback); - py_code = __Pyx_CreateCodeObjectForTraceback( - funcname, c_line, py_line, filename); - if (!py_code) { - /* If the code object creation fails, then we should clear the - fetched exception references and propagate the new exception */ - Py_XDECREF(ptype); - Py_XDECREF(pvalue); - Py_XDECREF(ptraceback); - goto bad; - } - __Pyx_ErrRestoreInState(tstate, ptype, pvalue, ptraceback); - __pyx_insert_code_object(c_line ? -c_line : py_line, py_code); - } - py_frame = PyFrame_New( - tstate, /*PyThreadState *tstate,*/ - py_code, /*PyCodeObject *code,*/ - __pyx_d, /*PyObject *globals,*/ - 0 /*PyObject *locals*/ - ); - if (!py_frame) goto bad; - __Pyx_PyFrame_SetLineNumber(py_frame, py_line); - PyTraceBack_Here(py_frame); -bad: - Py_XDECREF(py_code); - Py_XDECREF(py_frame); -} - -/* CIntToPy */ -static CYTHON_INLINE PyObject* __Pyx_PyInt_From_long(long value) { -#ifdef __Pyx_HAS_GCC_DIAGNOSTIC -#pragma GCC diagnostic push -#pragma GCC diagnostic ignored "-Wconversion" -#endif - const long neg_one = (long) -1, const_zero = (long) 0; -#ifdef __Pyx_HAS_GCC_DIAGNOSTIC -#pragma GCC diagnostic pop -#endif - const int is_unsigned = neg_one > const_zero; - if (is_unsigned) { - if (sizeof(long) < sizeof(long)) { - return PyInt_FromLong((long) value); - } else if (sizeof(long) <= sizeof(unsigned long)) { - return PyLong_FromUnsignedLong((unsigned long) value); -#ifdef HAVE_LONG_LONG - } else if (sizeof(long) <= sizeof(unsigned PY_LONG_LONG)) { - return PyLong_FromUnsignedLongLong((unsigned PY_LONG_LONG) value); -#endif - } - } else { - if (sizeof(long) <= sizeof(long)) { - return PyInt_FromLong((long) value); -#ifdef HAVE_LONG_LONG - } else if (sizeof(long) <= sizeof(PY_LONG_LONG)) { - return PyLong_FromLongLong((PY_LONG_LONG) value); -#endif - } - } - { - int one = 1; int little = (int)*(unsigned char *)&one; - unsigned char *bytes = (unsigned char *)&value; - return _PyLong_FromByteArray(bytes, sizeof(long), - little, !is_unsigned); - } -} - -/* CIntFromPyVerify */ -#define __PYX_VERIFY_RETURN_INT(target_type, func_type, func_value)\ - __PYX__VERIFY_RETURN_INT(target_type, func_type, func_value, 0) -#define __PYX_VERIFY_RETURN_INT_EXC(target_type, func_type, func_value)\ - __PYX__VERIFY_RETURN_INT(target_type, func_type, func_value, 1) -#define __PYX__VERIFY_RETURN_INT(target_type, func_type, func_value, exc)\ - {\ - func_type value = func_value;\ - if (sizeof(target_type) < sizeof(func_type)) {\ - if (unlikely(value != (func_type) (target_type) value)) {\ - func_type zero = 0;\ - if (exc && unlikely(value == (func_type)-1 && PyErr_Occurred()))\ - return (target_type) -1;\ - if (is_unsigned && unlikely(value < zero))\ - goto raise_neg_overflow;\ - else\ - goto raise_overflow;\ - }\ - }\ - return (target_type) value;\ - } - -/* CIntFromPy */ -static CYTHON_INLINE long __Pyx_PyInt_As_long(PyObject *x) { -#ifdef __Pyx_HAS_GCC_DIAGNOSTIC -#pragma GCC diagnostic push -#pragma GCC diagnostic ignored "-Wconversion" -#endif - const long neg_one = (long) -1, const_zero = (long) 0; -#ifdef __Pyx_HAS_GCC_DIAGNOSTIC -#pragma GCC diagnostic pop -#endif - const int is_unsigned = neg_one > const_zero; -#if PY_MAJOR_VERSION < 3 - if (likely(PyInt_Check(x))) { - if (sizeof(long) < sizeof(long)) { - __PYX_VERIFY_RETURN_INT(long, long, PyInt_AS_LONG(x)) - } else { - long val = PyInt_AS_LONG(x); - if (is_unsigned && unlikely(val < 0)) { - goto raise_neg_overflow; - } - return (long) val; - } - } else -#endif - if 
(likely(PyLong_Check(x))) { - if (is_unsigned) { -#if CYTHON_USE_PYLONG_INTERNALS - const digit* digits = ((PyLongObject*)x)->ob_digit; - switch (Py_SIZE(x)) { - case 0: return (long) 0; - case 1: __PYX_VERIFY_RETURN_INT(long, digit, digits[0]) - case 2: - if (8 * sizeof(long) > 1 * PyLong_SHIFT) { - if (8 * sizeof(unsigned long) > 2 * PyLong_SHIFT) { - __PYX_VERIFY_RETURN_INT(long, unsigned long, (((((unsigned long)digits[1]) << PyLong_SHIFT) | (unsigned long)digits[0]))) - } else if (8 * sizeof(long) >= 2 * PyLong_SHIFT) { - return (long) (((((long)digits[1]) << PyLong_SHIFT) | (long)digits[0])); - } - } - break; - case 3: - if (8 * sizeof(long) > 2 * PyLong_SHIFT) { - if (8 * sizeof(unsigned long) > 3 * PyLong_SHIFT) { - __PYX_VERIFY_RETURN_INT(long, unsigned long, (((((((unsigned long)digits[2]) << PyLong_SHIFT) | (unsigned long)digits[1]) << PyLong_SHIFT) | (unsigned long)digits[0]))) - } else if (8 * sizeof(long) >= 3 * PyLong_SHIFT) { - return (long) (((((((long)digits[2]) << PyLong_SHIFT) | (long)digits[1]) << PyLong_SHIFT) | (long)digits[0])); - } - } - break; - case 4: - if (8 * sizeof(long) > 3 * PyLong_SHIFT) { - if (8 * sizeof(unsigned long) > 4 * PyLong_SHIFT) { - __PYX_VERIFY_RETURN_INT(long, unsigned long, (((((((((unsigned long)digits[3]) << PyLong_SHIFT) | (unsigned long)digits[2]) << PyLong_SHIFT) | (unsigned long)digits[1]) << PyLong_SHIFT) | (unsigned long)digits[0]))) - } else if (8 * sizeof(long) >= 4 * PyLong_SHIFT) { - return (long) (((((((((long)digits[3]) << PyLong_SHIFT) | (long)digits[2]) << PyLong_SHIFT) | (long)digits[1]) << PyLong_SHIFT) | (long)digits[0])); - } - } - break; - } -#endif -#if CYTHON_COMPILING_IN_CPYTHON && PY_VERSION_HEX < 0x030C00A7 - if (unlikely(Py_SIZE(x) < 0)) { - goto raise_neg_overflow; - } -#else - { - int result = PyObject_RichCompareBool(x, Py_False, Py_LT); - if (unlikely(result < 0)) - return (long) -1; - if (unlikely(result == 1)) - goto raise_neg_overflow; - } -#endif - if (sizeof(long) <= sizeof(unsigned long)) { - __PYX_VERIFY_RETURN_INT_EXC(long, unsigned long, PyLong_AsUnsignedLong(x)) -#ifdef HAVE_LONG_LONG - } else if (sizeof(long) <= sizeof(unsigned PY_LONG_LONG)) { - __PYX_VERIFY_RETURN_INT_EXC(long, unsigned PY_LONG_LONG, PyLong_AsUnsignedLongLong(x)) -#endif - } - } else { -#if CYTHON_USE_PYLONG_INTERNALS - const digit* digits = ((PyLongObject*)x)->ob_digit; - switch (Py_SIZE(x)) { - case 0: return (long) 0; - case -1: __PYX_VERIFY_RETURN_INT(long, sdigit, (sdigit) (-(sdigit)digits[0])) - case 1: __PYX_VERIFY_RETURN_INT(long, digit, +digits[0]) - case -2: - if (8 * sizeof(long) - 1 > 1 * PyLong_SHIFT) { - if (8 * sizeof(unsigned long) > 2 * PyLong_SHIFT) { - __PYX_VERIFY_RETURN_INT(long, long, -(long) (((((unsigned long)digits[1]) << PyLong_SHIFT) | (unsigned long)digits[0]))) - } else if (8 * sizeof(long) - 1 > 2 * PyLong_SHIFT) { - return (long) (((long)-1)*(((((long)digits[1]) << PyLong_SHIFT) | (long)digits[0]))); - } - } - break; - case 2: - if (8 * sizeof(long) > 1 * PyLong_SHIFT) { - if (8 * sizeof(unsigned long) > 2 * PyLong_SHIFT) { - __PYX_VERIFY_RETURN_INT(long, unsigned long, (((((unsigned long)digits[1]) << PyLong_SHIFT) | (unsigned long)digits[0]))) - } else if (8 * sizeof(long) - 1 > 2 * PyLong_SHIFT) { - return (long) ((((((long)digits[1]) << PyLong_SHIFT) | (long)digits[0]))); - } - } - break; - case -3: - if (8 * sizeof(long) - 1 > 2 * PyLong_SHIFT) { - if (8 * sizeof(unsigned long) > 3 * PyLong_SHIFT) { - __PYX_VERIFY_RETURN_INT(long, long, -(long) (((((((unsigned long)digits[2]) << PyLong_SHIFT) | 
(unsigned long)digits[1]) << PyLong_SHIFT) | (unsigned long)digits[0]))) - } else if (8 * sizeof(long) - 1 > 3 * PyLong_SHIFT) { - return (long) (((long)-1)*(((((((long)digits[2]) << PyLong_SHIFT) | (long)digits[1]) << PyLong_SHIFT) | (long)digits[0]))); - } - } - break; - case 3: - if (8 * sizeof(long) > 2 * PyLong_SHIFT) { - if (8 * sizeof(unsigned long) > 3 * PyLong_SHIFT) { - __PYX_VERIFY_RETURN_INT(long, unsigned long, (((((((unsigned long)digits[2]) << PyLong_SHIFT) | (unsigned long)digits[1]) << PyLong_SHIFT) | (unsigned long)digits[0]))) - } else if (8 * sizeof(long) - 1 > 3 * PyLong_SHIFT) { - return (long) ((((((((long)digits[2]) << PyLong_SHIFT) | (long)digits[1]) << PyLong_SHIFT) | (long)digits[0]))); - } - } - break; - case -4: - if (8 * sizeof(long) - 1 > 3 * PyLong_SHIFT) { - if (8 * sizeof(unsigned long) > 4 * PyLong_SHIFT) { - __PYX_VERIFY_RETURN_INT(long, long, -(long) (((((((((unsigned long)digits[3]) << PyLong_SHIFT) | (unsigned long)digits[2]) << PyLong_SHIFT) | (unsigned long)digits[1]) << PyLong_SHIFT) | (unsigned long)digits[0]))) - } else if (8 * sizeof(long) - 1 > 4 * PyLong_SHIFT) { - return (long) (((long)-1)*(((((((((long)digits[3]) << PyLong_SHIFT) | (long)digits[2]) << PyLong_SHIFT) | (long)digits[1]) << PyLong_SHIFT) | (long)digits[0]))); - } - } - break; - case 4: - if (8 * sizeof(long) > 3 * PyLong_SHIFT) { - if (8 * sizeof(unsigned long) > 4 * PyLong_SHIFT) { - __PYX_VERIFY_RETURN_INT(long, unsigned long, (((((((((unsigned long)digits[3]) << PyLong_SHIFT) | (unsigned long)digits[2]) << PyLong_SHIFT) | (unsigned long)digits[1]) << PyLong_SHIFT) | (unsigned long)digits[0]))) - } else if (8 * sizeof(long) - 1 > 4 * PyLong_SHIFT) { - return (long) ((((((((((long)digits[3]) << PyLong_SHIFT) | (long)digits[2]) << PyLong_SHIFT) | (long)digits[1]) << PyLong_SHIFT) | (long)digits[0]))); - } - } - break; - } -#endif - if (sizeof(long) <= sizeof(long)) { - __PYX_VERIFY_RETURN_INT_EXC(long, long, PyLong_AsLong(x)) -#ifdef HAVE_LONG_LONG - } else if (sizeof(long) <= sizeof(PY_LONG_LONG)) { - __PYX_VERIFY_RETURN_INT_EXC(long, PY_LONG_LONG, PyLong_AsLongLong(x)) -#endif - } - } - { -#if CYTHON_COMPILING_IN_PYPY && !defined(_PyLong_AsByteArray) - PyErr_SetString(PyExc_RuntimeError, - "_PyLong_AsByteArray() not available in PyPy, cannot convert large numbers"); -#else - long val; - PyObject *v = __Pyx_PyNumber_IntOrLong(x); - #if PY_MAJOR_VERSION < 3 - if (likely(v) && !PyLong_Check(v)) { - PyObject *tmp = v; - v = PyNumber_Long(tmp); - Py_DECREF(tmp); - } - #endif - if (likely(v)) { - int one = 1; int is_little = (int)*(unsigned char *)&one; - unsigned char *bytes = (unsigned char *)&val; - int ret = _PyLong_AsByteArray((PyLongObject *)v, - bytes, sizeof(val), - is_little, !is_unsigned); - Py_DECREF(v); - if (likely(!ret)) - return val; - } -#endif - return (long) -1; - } - } else { - long val; - PyObject *tmp = __Pyx_PyNumber_IntOrLong(x); - if (!tmp) return (long) -1; - val = __Pyx_PyInt_As_long(tmp); - Py_DECREF(tmp); - return val; - } -raise_overflow: - PyErr_SetString(PyExc_OverflowError, - "value too large to convert to long"); - return (long) -1; -raise_neg_overflow: - PyErr_SetString(PyExc_OverflowError, - "can't convert negative value to long"); - return (long) -1; -} - -/* CIntFromPy */ -static CYTHON_INLINE int __Pyx_PyInt_As_int(PyObject *x) { -#ifdef __Pyx_HAS_GCC_DIAGNOSTIC -#pragma GCC diagnostic push -#pragma GCC diagnostic ignored "-Wconversion" -#endif - const int neg_one = (int) -1, const_zero = (int) 0; -#ifdef __Pyx_HAS_GCC_DIAGNOSTIC -#pragma GCC 
diagnostic pop -#endif - const int is_unsigned = neg_one > const_zero; -#if PY_MAJOR_VERSION < 3 - if (likely(PyInt_Check(x))) { - if (sizeof(int) < sizeof(long)) { - __PYX_VERIFY_RETURN_INT(int, long, PyInt_AS_LONG(x)) - } else { - long val = PyInt_AS_LONG(x); - if (is_unsigned && unlikely(val < 0)) { - goto raise_neg_overflow; - } - return (int) val; - } - } else -#endif - if (likely(PyLong_Check(x))) { - if (is_unsigned) { -#if CYTHON_USE_PYLONG_INTERNALS - const digit* digits = ((PyLongObject*)x)->ob_digit; - switch (Py_SIZE(x)) { - case 0: return (int) 0; - case 1: __PYX_VERIFY_RETURN_INT(int, digit, digits[0]) - case 2: - if (8 * sizeof(int) > 1 * PyLong_SHIFT) { - if (8 * sizeof(unsigned long) > 2 * PyLong_SHIFT) { - __PYX_VERIFY_RETURN_INT(int, unsigned long, (((((unsigned long)digits[1]) << PyLong_SHIFT) | (unsigned long)digits[0]))) - } else if (8 * sizeof(int) >= 2 * PyLong_SHIFT) { - return (int) (((((int)digits[1]) << PyLong_SHIFT) | (int)digits[0])); - } - } - break; - case 3: - if (8 * sizeof(int) > 2 * PyLong_SHIFT) { - if (8 * sizeof(unsigned long) > 3 * PyLong_SHIFT) { - __PYX_VERIFY_RETURN_INT(int, unsigned long, (((((((unsigned long)digits[2]) << PyLong_SHIFT) | (unsigned long)digits[1]) << PyLong_SHIFT) | (unsigned long)digits[0]))) - } else if (8 * sizeof(int) >= 3 * PyLong_SHIFT) { - return (int) (((((((int)digits[2]) << PyLong_SHIFT) | (int)digits[1]) << PyLong_SHIFT) | (int)digits[0])); - } - } - break; - case 4: - if (8 * sizeof(int) > 3 * PyLong_SHIFT) { - if (8 * sizeof(unsigned long) > 4 * PyLong_SHIFT) { - __PYX_VERIFY_RETURN_INT(int, unsigned long, (((((((((unsigned long)digits[3]) << PyLong_SHIFT) | (unsigned long)digits[2]) << PyLong_SHIFT) | (unsigned long)digits[1]) << PyLong_SHIFT) | (unsigned long)digits[0]))) - } else if (8 * sizeof(int) >= 4 * PyLong_SHIFT) { - return (int) (((((((((int)digits[3]) << PyLong_SHIFT) | (int)digits[2]) << PyLong_SHIFT) | (int)digits[1]) << PyLong_SHIFT) | (int)digits[0])); - } - } - break; - } -#endif -#if CYTHON_COMPILING_IN_CPYTHON && PY_VERSION_HEX < 0x030C00A7 - if (unlikely(Py_SIZE(x) < 0)) { - goto raise_neg_overflow; - } -#else - { - int result = PyObject_RichCompareBool(x, Py_False, Py_LT); - if (unlikely(result < 0)) - return (int) -1; - if (unlikely(result == 1)) - goto raise_neg_overflow; - } -#endif - if (sizeof(int) <= sizeof(unsigned long)) { - __PYX_VERIFY_RETURN_INT_EXC(int, unsigned long, PyLong_AsUnsignedLong(x)) -#ifdef HAVE_LONG_LONG - } else if (sizeof(int) <= sizeof(unsigned PY_LONG_LONG)) { - __PYX_VERIFY_RETURN_INT_EXC(int, unsigned PY_LONG_LONG, PyLong_AsUnsignedLongLong(x)) -#endif - } - } else { -#if CYTHON_USE_PYLONG_INTERNALS - const digit* digits = ((PyLongObject*)x)->ob_digit; - switch (Py_SIZE(x)) { - case 0: return (int) 0; - case -1: __PYX_VERIFY_RETURN_INT(int, sdigit, (sdigit) (-(sdigit)digits[0])) - case 1: __PYX_VERIFY_RETURN_INT(int, digit, +digits[0]) - case -2: - if (8 * sizeof(int) - 1 > 1 * PyLong_SHIFT) { - if (8 * sizeof(unsigned long) > 2 * PyLong_SHIFT) { - __PYX_VERIFY_RETURN_INT(int, long, -(long) (((((unsigned long)digits[1]) << PyLong_SHIFT) | (unsigned long)digits[0]))) - } else if (8 * sizeof(int) - 1 > 2 * PyLong_SHIFT) { - return (int) (((int)-1)*(((((int)digits[1]) << PyLong_SHIFT) | (int)digits[0]))); - } - } - break; - case 2: - if (8 * sizeof(int) > 1 * PyLong_SHIFT) { - if (8 * sizeof(unsigned long) > 2 * PyLong_SHIFT) { - __PYX_VERIFY_RETURN_INT(int, unsigned long, (((((unsigned long)digits[1]) << PyLong_SHIFT) | (unsigned long)digits[0]))) - } else if (8 * 
sizeof(int) - 1 > 2 * PyLong_SHIFT) { - return (int) ((((((int)digits[1]) << PyLong_SHIFT) | (int)digits[0]))); - } - } - break; - case -3: - if (8 * sizeof(int) - 1 > 2 * PyLong_SHIFT) { - if (8 * sizeof(unsigned long) > 3 * PyLong_SHIFT) { - __PYX_VERIFY_RETURN_INT(int, long, -(long) (((((((unsigned long)digits[2]) << PyLong_SHIFT) | (unsigned long)digits[1]) << PyLong_SHIFT) | (unsigned long)digits[0]))) - } else if (8 * sizeof(int) - 1 > 3 * PyLong_SHIFT) { - return (int) (((int)-1)*(((((((int)digits[2]) << PyLong_SHIFT) | (int)digits[1]) << PyLong_SHIFT) | (int)digits[0]))); - } - } - break; - case 3: - if (8 * sizeof(int) > 2 * PyLong_SHIFT) { - if (8 * sizeof(unsigned long) > 3 * PyLong_SHIFT) { - __PYX_VERIFY_RETURN_INT(int, unsigned long, (((((((unsigned long)digits[2]) << PyLong_SHIFT) | (unsigned long)digits[1]) << PyLong_SHIFT) | (unsigned long)digits[0]))) - } else if (8 * sizeof(int) - 1 > 3 * PyLong_SHIFT) { - return (int) ((((((((int)digits[2]) << PyLong_SHIFT) | (int)digits[1]) << PyLong_SHIFT) | (int)digits[0]))); - } - } - break; - case -4: - if (8 * sizeof(int) - 1 > 3 * PyLong_SHIFT) { - if (8 * sizeof(unsigned long) > 4 * PyLong_SHIFT) { - __PYX_VERIFY_RETURN_INT(int, long, -(long) (((((((((unsigned long)digits[3]) << PyLong_SHIFT) | (unsigned long)digits[2]) << PyLong_SHIFT) | (unsigned long)digits[1]) << PyLong_SHIFT) | (unsigned long)digits[0]))) - } else if (8 * sizeof(int) - 1 > 4 * PyLong_SHIFT) { - return (int) (((int)-1)*(((((((((int)digits[3]) << PyLong_SHIFT) | (int)digits[2]) << PyLong_SHIFT) | (int)digits[1]) << PyLong_SHIFT) | (int)digits[0]))); - } - } - break; - case 4: - if (8 * sizeof(int) > 3 * PyLong_SHIFT) { - if (8 * sizeof(unsigned long) > 4 * PyLong_SHIFT) { - __PYX_VERIFY_RETURN_INT(int, unsigned long, (((((((((unsigned long)digits[3]) << PyLong_SHIFT) | (unsigned long)digits[2]) << PyLong_SHIFT) | (unsigned long)digits[1]) << PyLong_SHIFT) | (unsigned long)digits[0]))) - } else if (8 * sizeof(int) - 1 > 4 * PyLong_SHIFT) { - return (int) ((((((((((int)digits[3]) << PyLong_SHIFT) | (int)digits[2]) << PyLong_SHIFT) | (int)digits[1]) << PyLong_SHIFT) | (int)digits[0]))); - } - } - break; - } -#endif - if (sizeof(int) <= sizeof(long)) { - __PYX_VERIFY_RETURN_INT_EXC(int, long, PyLong_AsLong(x)) -#ifdef HAVE_LONG_LONG - } else if (sizeof(int) <= sizeof(PY_LONG_LONG)) { - __PYX_VERIFY_RETURN_INT_EXC(int, PY_LONG_LONG, PyLong_AsLongLong(x)) -#endif - } - } - { -#if CYTHON_COMPILING_IN_PYPY && !defined(_PyLong_AsByteArray) - PyErr_SetString(PyExc_RuntimeError, - "_PyLong_AsByteArray() not available in PyPy, cannot convert large numbers"); -#else - int val; - PyObject *v = __Pyx_PyNumber_IntOrLong(x); - #if PY_MAJOR_VERSION < 3 - if (likely(v) && !PyLong_Check(v)) { - PyObject *tmp = v; - v = PyNumber_Long(tmp); - Py_DECREF(tmp); - } - #endif - if (likely(v)) { - int one = 1; int is_little = (int)*(unsigned char *)&one; - unsigned char *bytes = (unsigned char *)&val; - int ret = _PyLong_AsByteArray((PyLongObject *)v, - bytes, sizeof(val), - is_little, !is_unsigned); - Py_DECREF(v); - if (likely(!ret)) - return val; - } -#endif - return (int) -1; - } - } else { - int val; - PyObject *tmp = __Pyx_PyNumber_IntOrLong(x); - if (!tmp) return (int) -1; - val = __Pyx_PyInt_As_int(tmp); - Py_DECREF(tmp); - return val; - } -raise_overflow: - PyErr_SetString(PyExc_OverflowError, - "value too large to convert to int"); - return (int) -1; -raise_neg_overflow: - PyErr_SetString(PyExc_OverflowError, - "can't convert negative value to int"); - return (int) -1; -} - 
-/* FastTypeChecks */ -#if CYTHON_COMPILING_IN_CPYTHON -static int __Pyx_InBases(PyTypeObject *a, PyTypeObject *b) { - while (a) { - a = a->tp_base; - if (a == b) - return 1; - } - return b == &PyBaseObject_Type; -} -static CYTHON_INLINE int __Pyx_IsSubtype(PyTypeObject *a, PyTypeObject *b) { - PyObject *mro; - if (a == b) return 1; - mro = a->tp_mro; - if (likely(mro)) { - Py_ssize_t i, n; - n = PyTuple_GET_SIZE(mro); - for (i = 0; i < n; i++) { - if (PyTuple_GET_ITEM(mro, i) == (PyObject *)b) - return 1; - } - return 0; - } - return __Pyx_InBases(a, b); -} -#if PY_MAJOR_VERSION == 2 -static int __Pyx_inner_PyErr_GivenExceptionMatches2(PyObject *err, PyObject* exc_type1, PyObject* exc_type2) { - PyObject *exception, *value, *tb; - int res; - __Pyx_PyThreadState_declare - __Pyx_PyThreadState_assign - __Pyx_ErrFetch(&exception, &value, &tb); - res = exc_type1 ? PyObject_IsSubclass(err, exc_type1) : 0; - if (unlikely(res == -1)) { - PyErr_WriteUnraisable(err); - res = 0; - } - if (!res) { - res = PyObject_IsSubclass(err, exc_type2); - if (unlikely(res == -1)) { - PyErr_WriteUnraisable(err); - res = 0; - } - } - __Pyx_ErrRestore(exception, value, tb); - return res; -} -#else -static CYTHON_INLINE int __Pyx_inner_PyErr_GivenExceptionMatches2(PyObject *err, PyObject* exc_type1, PyObject *exc_type2) { - int res = exc_type1 ? __Pyx_IsSubtype((PyTypeObject*)err, (PyTypeObject*)exc_type1) : 0; - if (!res) { - res = __Pyx_IsSubtype((PyTypeObject*)err, (PyTypeObject*)exc_type2); - } - return res; -} -#endif -static int __Pyx_PyErr_GivenExceptionMatchesTuple(PyObject *exc_type, PyObject *tuple) { - Py_ssize_t i, n; - assert(PyExceptionClass_Check(exc_type)); - n = PyTuple_GET_SIZE(tuple); -#if PY_MAJOR_VERSION >= 3 - for (i=0; i<n; i++) { - if (exc_type == PyTuple_GET_ITEM(tuple, i)) return 1; - } -#endif - for (i=0; i<n; i++) { - PyObject *t = PyTuple_GET_ITEM(tuple, i); - #if PY_MAJOR_VERSION < 3 - if (likely(exc_type == t)) return 1; - #endif - if (likely(PyExceptionClass_Check(t))) { - if (__Pyx_inner_PyErr_GivenExceptionMatches2(exc_type, NULL, t)) return 1; - } else { - } - } - return 0; -} -static int __Pyx_PyErr_GivenExceptionMatches(PyObject *err, PyObject* exc_type) { - if (likely(err == exc_type)) return 1; - if (likely(PyExceptionClass_Check(err))) { - if (likely(PyExceptionClass_Check(exc_type))) { - return __Pyx_inner_PyErr_GivenExceptionMatches2(err, NULL, exc_type); - } else if (likely(PyTuple_Check(exc_type))) { - return __Pyx_PyErr_GivenExceptionMatchesTuple(err, exc_type); - } else { - } - } - return PyObject_IsSubclass(err, exc_type); -} -static int __Pyx_PyErr_GivenExceptionMatches2(PyObject *err, PyObject *exc_type1, PyObject *exc_type2) { - assert(PyExceptionClass_Check(exc_type1)); - assert(PyExceptionClass_Check(exc_type2)); - if (likely(err == exc_type1 || err == exc_type2)) return 1; - if (likely(PyExceptionClass_Check(err))) { - return __Pyx_inner_PyErr_GivenExceptionMatches2(err, exc_type1, exc_type2); - } - return (PyErr_GivenExceptionMatches(err, exc_type1) || PyErr_GivenExceptionMatches(err, exc_type2)); -} -#endif - -/* CheckBinaryVersion */ -static int __Pyx_check_binary_version(void) { - char ctversion[5]; - int same=1, i, found_dot; - const char* rt_from_call = Py_GetVersion(); - PyOS_snprintf(ctversion, 5, "%d.%d", PY_MAJOR_VERSION, PY_MINOR_VERSION); - found_dot = 0; - for (i = 0; i < 4; i++) { - if (!ctversion[i]) { - same = (rt_from_call[i] < '0' || rt_from_call[i] > '9'); - break; - } - if (rt_from_call[i] != ctversion[i]) { - same = 0; - break; - } - } - if (!same) { - char rtversion[5] = {'\0'}; - char message[200]; - for (i=0; i<4; ++i) { - if (rt_from_call[i] == '.') { - if (found_dot) break; - found_dot = 1; - } else if (rt_from_call[i] < '0' || rt_from_call[i] > '9') { - break; - } - rtversion[i] = rt_from_call[i]; - } - PyOS_snprintf(message, sizeof(message), - "compiletime version %s of module '%.100s' " - "does not match runtime version %s", - ctversion, __Pyx_MODULE_NAME, rtversion); - return PyErr_WarnEx(NULL, message, 1); - } - return 0; -} - -/* InitStrings */ -static int __Pyx_InitStrings(__Pyx_StringTabEntry *t) { - while (t->p) { - #if PY_MAJOR_VERSION < 3 - if (t->is_unicode) { - *t->p = PyUnicode_DecodeUTF8(t->s, t->n - 1, NULL); - } else if (t->intern) { - *t->p = PyString_InternFromString(t->s); - } else { - *t->p = PyString_FromStringAndSize(t->s, t->n - 1); - } - #else - if (t->is_unicode | t->is_str) { - if (t->intern) { - *t->p = PyUnicode_InternFromString(t->s); - } else if (t->encoding) { - *t->p = PyUnicode_Decode(t->s, t->n - 1, t->encoding, NULL); - } else { - *t->p = PyUnicode_FromStringAndSize(t->s, t->n - 1); - } - } else { - *t->p = PyBytes_FromStringAndSize(t->s, t->n - 1); - } - #endif - if (!*t->p) - return -1; - if (PyObject_Hash(*t->p) == -1) - return -1; - ++t; - } - return 0; -} - -static CYTHON_INLINE PyObject* __Pyx_PyUnicode_FromString(const char* c_str) { - return __Pyx_PyUnicode_FromStringAndSize(c_str, (Py_ssize_t)strlen(c_str)); -} -static CYTHON_INLINE const char* __Pyx_PyObject_AsString(PyObject* o) { - Py_ssize_t ignore; - return __Pyx_PyObject_AsStringAndSize(o, &ignore); -} -#if __PYX_DEFAULT_STRING_ENCODING_IS_ASCII || __PYX_DEFAULT_STRING_ENCODING_IS_DEFAULT -#if !CYTHON_PEP393_ENABLED 
-static const char* __Pyx_PyUnicode_AsStringAndSize(PyObject* o, Py_ssize_t *length) { - char* defenc_c; - PyObject* defenc = _PyUnicode_AsDefaultEncodedString(o, NULL); - if (!defenc) return NULL; - defenc_c = PyBytes_AS_STRING(defenc); -#if __PYX_DEFAULT_STRING_ENCODING_IS_ASCII - { - char* end = defenc_c + PyBytes_GET_SIZE(defenc); - char* c; - for (c = defenc_c; c < end; c++) { - if ((unsigned char) (*c) >= 128) { - PyUnicode_AsASCIIString(o); - return NULL; - } - } - } -#endif - *length = PyBytes_GET_SIZE(defenc); - return defenc_c; -} -#else -static CYTHON_INLINE const char* __Pyx_PyUnicode_AsStringAndSize(PyObject* o, Py_ssize_t *length) { - if (unlikely(__Pyx_PyUnicode_READY(o) == -1)) return NULL; -#if __PYX_DEFAULT_STRING_ENCODING_IS_ASCII - if (likely(PyUnicode_IS_ASCII(o))) { - *length = PyUnicode_GET_LENGTH(o); - return PyUnicode_AsUTF8(o); - } else { - PyUnicode_AsASCIIString(o); - return NULL; - } -#else - return PyUnicode_AsUTF8AndSize(o, length); -#endif -} -#endif -#endif -static CYTHON_INLINE const char* __Pyx_PyObject_AsStringAndSize(PyObject* o, Py_ssize_t *length) { -#if __PYX_DEFAULT_STRING_ENCODING_IS_ASCII || __PYX_DEFAULT_STRING_ENCODING_IS_DEFAULT - if ( -#if PY_MAJOR_VERSION < 3 && __PYX_DEFAULT_STRING_ENCODING_IS_ASCII - __Pyx_sys_getdefaultencoding_not_ascii && -#endif - PyUnicode_Check(o)) { - return __Pyx_PyUnicode_AsStringAndSize(o, length); - } else -#endif -#if (!CYTHON_COMPILING_IN_PYPY) || (defined(PyByteArray_AS_STRING) && defined(PyByteArray_GET_SIZE)) - if (PyByteArray_Check(o)) { - *length = PyByteArray_GET_SIZE(o); - return PyByteArray_AS_STRING(o); - } else -#endif - { - char* result; - int r = PyBytes_AsStringAndSize(o, &result, length); - if (unlikely(r < 0)) { - return NULL; - } else { - return result; - } - } -} -static CYTHON_INLINE int __Pyx_PyObject_IsTrue(PyObject* x) { - int is_true = x == Py_True; - if (is_true | (x == Py_False) | (x == Py_None)) return is_true; - else return PyObject_IsTrue(x); -} -static CYTHON_INLINE int __Pyx_PyObject_IsTrueAndDecref(PyObject* x) { - int retval; - if (unlikely(!x)) return -1; - retval = __Pyx_PyObject_IsTrue(x); - Py_DECREF(x); - return retval; -} -static PyObject* __Pyx_PyNumber_IntOrLongWrongResultType(PyObject* result, const char* type_name) { -#if PY_MAJOR_VERSION >= 3 - if (PyLong_Check(result)) { - if (PyErr_WarnFormat(PyExc_DeprecationWarning, 1, - "__int__ returned non-int (type %.200s). 
" - "The ability to return an instance of a strict subclass of int " - "is deprecated, and may be removed in a future version of Python.", - Py_TYPE(result)->tp_name)) { - Py_DECREF(result); - return NULL; - } - return result; - } -#endif - PyErr_Format(PyExc_TypeError, - "__%.4s__ returned non-%.4s (type %.200s)", - type_name, type_name, Py_TYPE(result)->tp_name); - Py_DECREF(result); - return NULL; -} -static CYTHON_INLINE PyObject* __Pyx_PyNumber_IntOrLong(PyObject* x) { -#if CYTHON_USE_TYPE_SLOTS - PyNumberMethods *m; -#endif - const char *name = NULL; - PyObject *res = NULL; -#if PY_MAJOR_VERSION < 3 - if (likely(PyInt_Check(x) || PyLong_Check(x))) -#else - if (likely(PyLong_Check(x))) -#endif - return __Pyx_NewRef(x); -#if CYTHON_USE_TYPE_SLOTS - m = Py_TYPE(x)->tp_as_number; - #if PY_MAJOR_VERSION < 3 - if (m && m->nb_int) { - name = "int"; - res = m->nb_int(x); - } - else if (m && m->nb_long) { - name = "long"; - res = m->nb_long(x); - } - #else - if (likely(m && m->nb_int)) { - name = "int"; - res = m->nb_int(x); - } - #endif -#else - if (!PyBytes_CheckExact(x) && !PyUnicode_CheckExact(x)) { - res = PyNumber_Int(x); - } -#endif - if (likely(res)) { -#if PY_MAJOR_VERSION < 3 - if (unlikely(!PyInt_Check(res) && !PyLong_Check(res))) { -#else - if (unlikely(!PyLong_CheckExact(res))) { -#endif - return __Pyx_PyNumber_IntOrLongWrongResultType(res, name); - } - } - else if (!PyErr_Occurred()) { - PyErr_SetString(PyExc_TypeError, - "an integer is required"); - } - return res; -} -static CYTHON_INLINE Py_ssize_t __Pyx_PyIndex_AsSsize_t(PyObject* b) { - Py_ssize_t ival; - PyObject *x; -#if PY_MAJOR_VERSION < 3 - if (likely(PyInt_CheckExact(b))) { - if (sizeof(Py_ssize_t) >= sizeof(long)) - return PyInt_AS_LONG(b); - else - return PyInt_AsSsize_t(b); - } -#endif - if (likely(PyLong_CheckExact(b))) { - #if CYTHON_USE_PYLONG_INTERNALS - const digit* digits = ((PyLongObject*)b)->ob_digit; - const Py_ssize_t size = Py_SIZE(b); - if (likely(__Pyx_sst_abs(size) <= 1)) { - ival = likely(size) ? 
digits[0] : 0; - if (size == -1) ival = -ival; - return ival; - } else { - switch (size) { - case 2: - if (8 * sizeof(Py_ssize_t) > 2 * PyLong_SHIFT) { - return (Py_ssize_t) (((((size_t)digits[1]) << PyLong_SHIFT) | (size_t)digits[0])); - } - break; - case -2: - if (8 * sizeof(Py_ssize_t) > 2 * PyLong_SHIFT) { - return -(Py_ssize_t) (((((size_t)digits[1]) << PyLong_SHIFT) | (size_t)digits[0])); - } - break; - case 3: - if (8 * sizeof(Py_ssize_t) > 3 * PyLong_SHIFT) { - return (Py_ssize_t) (((((((size_t)digits[2]) << PyLong_SHIFT) | (size_t)digits[1]) << PyLong_SHIFT) | (size_t)digits[0])); - } - break; - case -3: - if (8 * sizeof(Py_ssize_t) > 3 * PyLong_SHIFT) { - return -(Py_ssize_t) (((((((size_t)digits[2]) << PyLong_SHIFT) | (size_t)digits[1]) << PyLong_SHIFT) | (size_t)digits[0])); - } - break; - case 4: - if (8 * sizeof(Py_ssize_t) > 4 * PyLong_SHIFT) { - return (Py_ssize_t) (((((((((size_t)digits[3]) << PyLong_SHIFT) | (size_t)digits[2]) << PyLong_SHIFT) | (size_t)digits[1]) << PyLong_SHIFT) | (size_t)digits[0])); - } - break; - case -4: - if (8 * sizeof(Py_ssize_t) > 4 * PyLong_SHIFT) { - return -(Py_ssize_t) (((((((((size_t)digits[3]) << PyLong_SHIFT) | (size_t)digits[2]) << PyLong_SHIFT) | (size_t)digits[1]) << PyLong_SHIFT) | (size_t)digits[0])); - } - break; - } - } - #endif - return PyLong_AsSsize_t(b); - } - x = PyNumber_Index(b); - if (!x) return -1; - ival = PyInt_AsSsize_t(x); - Py_DECREF(x); - return ival; -} -static CYTHON_INLINE Py_hash_t __Pyx_PyIndex_AsHash_t(PyObject* o) { - if (sizeof(Py_hash_t) == sizeof(Py_ssize_t)) { - return (Py_hash_t) __Pyx_PyIndex_AsSsize_t(o); -#if PY_MAJOR_VERSION < 3 - } else if (likely(PyInt_CheckExact(o))) { - return PyInt_AS_LONG(o); -#endif - } else { - Py_ssize_t ival; - PyObject *x; - x = PyNumber_Index(o); - if (!x) return -1; - ival = PyInt_AsLong(x); - Py_DECREF(x); - return ival; - } -} -static CYTHON_INLINE PyObject * __Pyx_PyBool_FromLong(long b) { - return b ? __Pyx_NewRef(Py_True) : __Pyx_NewRef(Py_False); -} -static CYTHON_INLINE PyObject * __Pyx_PyInt_FromSize_t(size_t ival) { - return PyInt_FromSize_t(ival); -} - - -#endif /* Py_PYTHON_H */ diff --git a/spaces/DQChoi/gpt-demo/venv/lib/python3.11/site-packages/fontTools/ttLib/tables/_p_r_e_p.py b/spaces/DQChoi/gpt-demo/venv/lib/python3.11/site-packages/fontTools/ttLib/tables/_p_r_e_p.py deleted file mode 100644 index b4b92f3e924ba2f20ade9a6cca45ce78284ffe21..0000000000000000000000000000000000000000 --- a/spaces/DQChoi/gpt-demo/venv/lib/python3.11/site-packages/fontTools/ttLib/tables/_p_r_e_p.py +++ /dev/null @@ -1,7 +0,0 @@ -from fontTools import ttLib - -superclass = ttLib.getTableClass("fpgm") - - -class table__p_r_e_p(superclass): - pass diff --git a/spaces/DQChoi/gpt-demo/venv/lib/python3.11/site-packages/fsspec/archive.py b/spaces/DQChoi/gpt-demo/venv/lib/python3.11/site-packages/fsspec/archive.py deleted file mode 100644 index dc5c1490b972c592fd3eb9aaeb30b589e384ccb7..0000000000000000000000000000000000000000 --- a/spaces/DQChoi/gpt-demo/venv/lib/python3.11/site-packages/fsspec/archive.py +++ /dev/null @@ -1,73 +0,0 @@ -from fsspec import AbstractFileSystem -from fsspec.utils import tokenize - - -class AbstractArchiveFileSystem(AbstractFileSystem): - """ - A generic superclass for implementing Archive-based filesystems. - - Currently, it is shared amongst - :class:`~fsspec.implementations.zip.ZipFileSystem`, - :class:`~fsspec.implementations.libarchive.LibArchiveFileSystem` and - :class:`~fsspec.implementations.tar.TarFileSystem`. 
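- - Subclasses fill ``self.dir_cache`` with the archive's member listing; ``info`` and ``ls`` answer queries from that cache.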
- """ - - def __str__(self): - return "" % (type(self).__name__, id(self)) - - __repr__ = __str__ - - def ukey(self, path): - return tokenize(path, self.fo, self.protocol) - - def _all_dirnames(self, paths): - """Returns *all* directory names for each path in paths, including intermediate - ones. - - Parameters - ---------- - paths: Iterable of path strings - """ - if len(paths) == 0: - return set() - - dirnames = {self._parent(path) for path in paths} - {self.root_marker} - return dirnames | self._all_dirnames(dirnames) - - def info(self, path, **kwargs): - self._get_dirs() - path = self._strip_protocol(path) - if path in {"", "/"} and self.dir_cache: - return {"name": "/", "type": "directory", "size": 0} - if path in self.dir_cache: - return self.dir_cache[path] - elif path + "/" in self.dir_cache: - return self.dir_cache[path + "/"] - else: - raise FileNotFoundError(path) - - def ls(self, path, detail=True, **kwargs): - self._get_dirs() - paths = {} - for p, f in self.dir_cache.items(): - p = p.rstrip("/") - if "/" in p: - root = p.rsplit("/", 1)[0] - else: - root = "" - if root == path.rstrip("/"): - paths[p] = f - elif all( - (a == b) - for a, b in zip(path.split("/"), [""] + p.strip("/").split("/")) - ): - # root directory entry - ppath = p.rstrip("/").split("/", 1)[0] - if ppath not in paths: - out = {"name": ppath + "/", "size": 0, "type": "directory"} - paths[ppath] = out - out = sorted(paths.values(), key=lambda _: _["name"]) - if detail: - return out - else: - return [f["name"] for f in out] diff --git a/spaces/DQChoi/gpt-demo/venv/lib/python3.11/site-packages/gradio/templates/cdn/assets/index-3ca142e0.css b/spaces/DQChoi/gpt-demo/venv/lib/python3.11/site-packages/gradio/templates/cdn/assets/index-3ca142e0.css deleted file mode 100644 index 77ebe6c1fea2e3557f76088bb9f5c30e2cfdb72a..0000000000000000000000000000000000000000 --- a/spaces/DQChoi/gpt-demo/venv/lib/python3.11/site-packages/gradio/templates/cdn/assets/index-3ca142e0.css +++ /dev/null @@ -1 +0,0 @@ -.spacer.svelte-1kspdo{display:inline-block;width:0;height:0}.json-node.svelte-1kspdo{display:inline;color:var(--body-text-color);line-height:var(--line-sm);font-family:var(--font-mono)}.expand-array.svelte-1kspdo{border:1px solid var(--border-color-primary);border-radius:var(--radius-sm);background:var(--background-fill-secondary);padding:0 var(--size-1);color:var(--body-text-color)}.expand-array.svelte-1kspdo:hover{background:var(--background-fill-primary)}.children.svelte-1kspdo{padding-left:var(--size-4)}.json-item.svelte-1kspdo{display:inline}.null.svelte-1kspdo{color:var(--body-text-color-subdued)}.string.svelte-1kspdo{color:var(--color-green-500)}.number.svelte-1kspdo{color:var(--color-blue-500)}.bool.svelte-1kspdo{color:var(--color-red-500)}.json-holder.svelte-1trjy9a{padding:var(--size-2)}button.svelte-1trjy9a{display:flex;position:absolute;top:var(--block-label-margin);right:var(--block-label-margin);align-items:center;box-shadow:var(--shadow-drop);border:1px solid var(--border-color-primary);border-top:none;border-right:none;border-radius:var(--block-label-right-radius);background:var(--block-label-background-fill);padding:5px;width:22px;height:22px;overflow:hidden;color:var(--block-label-text-color);font:var(--font);font-size:var(--button-small-text-size)} diff --git a/spaces/DQChoi/gpt-demo/venv/lib/python3.11/site-packages/gradio/templates/frontend/assets/index-ff630227.js b/spaces/DQChoi/gpt-demo/venv/lib/python3.11/site-packages/gradio/templates/frontend/assets/index-ff630227.js deleted file mode 100644 index 
b751b8d21ad15166f14450721149a6e971887d91..0000000000000000000000000000000000000000 --- a/spaces/DQChoi/gpt-demo/venv/lib/python3.11/site-packages/gradio/templates/frontend/assets/index-ff630227.js +++ /dev/null @@ -1,3 +0,0 @@ -import{S as I,e as J,s as K,J as U,K as u,p as j,M as y,n as P,A as E,N as R,O as V,P as D,L as F,Z as Le,ar as je,R as G,G as T,m as Z,V as Y,B as be,C as Ee,av as Q,aj as Ae,X as Ce,k as O,o as X,z as B,v as S,x as q,E as Me,ae as ze,q as Te,r as Be,u as pe,y as ke}from"./index-3370be2a.js";import{U as Se}from"./Upload-f29b2460.js";import{M as Ue}from"./ModifyUpload-d8fc50ab.js";import{B as Ne}from"./Button-89624748.js";import{B as Fe}from"./BlockLabel-56db415e.js";import{E as Oe}from"./Empty-585389a4.js";import{g as Xe}from"./color-baaf9df5.js";import{a as qe}from"./csv-b0b7514a.js";import{Z as x,_ as $,l as ee}from"./linear-58a44b5e.js";import{U as He}from"./UploadText-28892309.js";import"./Blocks-f0129fcd.js";import"./ModifyUpload.svelte_svelte_type_style_lang-d2acacf0.js";import"./IconButton-abe5ede9.js";import"./dsv-576afacd.js";function Pe(l){let e,n,t;return{c(){e=U("svg"),n=U("path"),t=U("path"),u(n,"d","M28.828 3.172a4.094 4.094 0 0 0-5.656 0L4.05 22.292A6.954 6.954 0 0 0 2 27.242V30h2.756a6.952 6.952 0 0 0 4.95-2.05L28.828 8.829a3.999 3.999 0 0 0 0-5.657zM10.91 18.26l2.829 2.829l-2.122 2.121l-2.828-2.828zm-2.619 8.276A4.966 4.966 0 0 1 4.756 28H4v-.759a4.967 4.967 0 0 1 1.464-3.535l1.91-1.91l2.829 2.828zM27.415 7.414l-12.261 12.26l-2.829-2.828l12.262-12.26a2.047 2.047 0 0 1 2.828 0a2 2 0 0 1 0 2.828z"),u(n,"fill","currentColor"),u(t,"d","M6.5 15a3.5 3.5 0 0 1-2.475-5.974l3.5-3.5a1.502 1.502 0 0 0 0-2.121a1.537 1.537 0 0 0-2.121 0L3.415 5.394L2 3.98l1.99-1.988a3.585 3.585 0 0 1 4.95 0a3.504 3.504 0 0 1 0 4.949L5.439 10.44a1.502 1.502 0 0 0 0 2.121a1.537 1.537 0 0 0 2.122 0l4.024-4.024L13 9.95l-4.025 4.024A3.475 3.475 0 0 1 6.5 15z"),u(t,"fill","currentColor"),u(e,"width","1em"),u(e,"height","1em"),u(e,"viewBox","0 0 32 32")},m(a,s){j(a,e,s),y(e,n),y(e,t)},p:P,i:P,o:P,d(a){a&&E(e)}}}let ye=class extends I{constructor(e){super(),J(this,e,null,Pe,K,{})}};function le(l){let e;return Array.isArray(l)?e=l.reduce((n,{values:t})=>[...n,...t.map(({y:a})=>a)],[]):e=l.values,[Math.min(...e),Math.max(...e)]}function te(l,e,n){const t=Object.entries(l[0]).reduce((a,s,o)=>(!e&&o===0||e&&s[0]===e?a.x.name=s[0]:(!n||n&&n.includes(s[0]))&&a.y.push({name:s[0],values:[]}),a),{x:{name:"",values:[]},y:[]});for(let a=0;al[6].call(e))},m(o,_){j(o,e,_),y(e,n),y(e,t),y(e,a),s=je(e,l[6].bind(e))},p(o,[_]){_&8&&F(n,"background",o[3]),_&1&&G(a,o[0]),_&36&&F(e,"top",o[2]-o[5]/2+"px"),_&18&&F(e,"left",o[1]-o[4]-7+"px")},i:P,o:P,d(o){o&&E(e),s()}}}function Ve(l,e,n){let{text:t}=e,{x:a}=e,{y:s}=e,{color:o}=e,_,i;function v(){_=this.offsetWidth,i=this.offsetHeight,n(4,_),n(5,i)}return l.$$set=g=>{"text"in g&&n(0,t=g.text),"x"in g&&n(1,a=g.x),"y"in g&&n(2,s=g.y),"color"in g&&n(3,o=g.color)},[t,a,s,o,_,i,v]}class Ye extends I{constructor(e){super(),J(this,e,Ve,Re,K,{text:0,x:1,y:2,color:3})}}function Ze(l,{color:e,text:n}){let t;function a(i){return t=new Ye({props:{text:n,x:i.pageX,y:i.pageY,color:e},target:document.body}),i}function s(i){t.$set({x:i.pageX,y:i.pageY})}function o(){t.$destroy()}const _=l;return _.addEventListener("mouseover",a),_.addEventListener("mouseleave",o),_.addEventListener("mousemove",s),{destroy(){_.removeEventListener("mouseover",a),_.removeEventListener("mouseleave",o),_.removeEventListener("mousemove",s)}}}function ne(l,e,n){const 
t=l.slice();t[16]=e[n].name,t[17]=e[n].values;const a=t[8][t[16]];return t[18]=a,t}function ae(l,e,n){const t=l.slice();return t[0]=e[n].x,t[1]=e[n].y,t}function oe(l,e,n){const t=l.slice();t[16]=e[n].name,t[17]=e[n].values;const a=t[8][t[16]];return t[18]=a,t}function se(l,e,n){const t=l.slice();return t[0]=e[n].x,t[1]=e[n].y,t}function re(l,e,n){const t=l.slice();return t[27]=e[n],t}function ie(l,e,n){const t=l.slice();return t[27]=e[n],t}function fe(l,e,n){const t=l.slice();return t[16]=e[n].name,t}function _e(l){let e,n,t,a=l[16]+"",s,o;return{c(){e=R("div"),n=R("span"),t=V(),s=D(a),o=V(),u(n,"class","legend-box svelte-1mjxput"),F(n,"background-color",l[8][l[16]]),u(e,"class","legend-item svelte-1mjxput")},m(_,i){j(_,e,i),y(e,n),y(e,t),y(e,s),y(e,o)},p(_,i){i[0]&260&&F(n,"background-color",_[8][_[16]]),i[0]&4&&a!==(a=_[16]+"")&&G(s,a)},d(_){_&&E(e)}}}function ue(l){let e,n,t,a,s,o,_=l[27]+"",i,v,g;return{c(){e=U("line"),o=U("text"),i=D(_),u(e,"stroke-width","0.5"),u(e,"x1",n=l[5](l[27])),u(e,"x2",t=l[5](l[27])),u(e,"y1",a=l[4](l[9][0]l[9][l[9].length-1]?l[6][1]:l[9][l[9].length-1])),u(e,"stroke","#aaa"),u(o,"class","label-text svelte-1mjxput"),u(o,"text-anchor","middle"),u(o,"x",v=l[5](l[27])),u(o,"y",g=l[4](l[9][0])+30)},m(f,h){j(f,e,h),j(f,o,h),y(o,i)},p(f,h){h[0]&1056&&n!==(n=f[5](f[27]))&&u(e,"x1",n),h[0]&1056&&t!==(t=f[5](f[27]))&&u(e,"x2",t),h[0]&592&&a!==(a=f[4](f[9][0]f[9][f[9].length-1]?f[6][1]:f[9][f[9].length-1]))&&u(e,"y2",s),h[0]&1024&&_!==(_=f[27]+"")&&G(i,_),h[0]&1056&&v!==(v=f[5](f[27]))&&u(o,"x",v),h[0]&528&&g!==(g=f[4](f[9][0])+30)&&u(o,"y",g)},d(f){f&&(E(e),E(o))}}}function ce(l){let e,n,t,a,s,o,_=l[27]+"",i,v,g;return{c(){e=U("line"),o=U("text"),i=D(_),u(e,"stroke-width","0.5"),u(e,"y1",n=l[4](l[27])),u(e,"y2",t=l[4](l[27])),u(e,"x1",a=l[5](l[10][0]l[10][l[10].length-1]?l[7][1]:l[10][l[10].length-1])),u(e,"stroke","#aaa"),u(o,"class","label-text svelte-1mjxput"),u(o,"text-anchor","end"),u(o,"y",v=l[4](l[27])+4),u(o,"x",g=l[5](l[10][0])-20)},m(f,h){j(f,e,h),j(f,o,h),y(o,i)},p(f,h){h[0]&528&&n!==(n=f[4](f[27]))&&u(e,"y1",n),h[0]&528&&t!==(t=f[4](f[27]))&&u(e,"y2",t),h[0]&1184&&a!==(a=f[5](f[10][0]f[10][f[10].length-1]?f[7][1]:f[10][f[10].length-1]))&&u(e,"x2",s),h[0]&512&&_!==(_=f[27]+"")&&G(i,_),h[0]&528&&v!==(v=f[4](f[27])+4)&&u(o,"y",v),h[0]&1056&&g!==(g=f[5](f[10][0])-20)&&u(o,"x",g)},d(f){f&&(E(e),E(o))}}}function me(l){let e,n,t,a,s,o,_=l[6][1]+"",i,v,g;return{c(){e=U("line"),o=U("text"),i=D(_),u(e,"stroke-width","0.5"),u(e,"y1",n=l[4](l[6][1])),u(e,"y2",t=l[4](l[6][1])),u(e,"x1",a=l[5](l[10][0])),u(e,"x2",s=l[5](l[7][1])),u(e,"stroke","#aaa"),u(o,"class","label-text svelte-1mjxput"),u(o,"text-anchor","end"),u(o,"y",v=l[4](l[6][1])+4),u(o,"x",g=l[5](l[10][0])-20)},m(f,h){j(f,e,h),j(f,o,h),y(o,i)},p(f,h){h[0]&80&&n!==(n=f[4](f[6][1]))&&u(e,"y1",n),h[0]&80&&t!==(t=f[4](f[6][1]))&&u(e,"y2",t),h[0]&1056&&a!==(a=f[5](f[10][0]))&&u(e,"x1",a),h[0]&160&&s!==(s=f[5](f[7][1]))&&u(e,"x2",s),h[0]&64&&_!==(_=f[6][1]+"")&&G(i,_),h[0]&80&&v!==(v=f[4](f[6][1])+4)&&u(o,"y",v),h[0]&1056&&g!==(g=f[5](f[10][0])-20)&&u(o,"x",g)},d(f){f&&(E(e),E(o))}}}function he(l){let e,n,t,a;return{c(){e=U("circle"),u(e,"r","3.5"),u(e,"cx",n=l[5](l[0])),u(e,"cy",t=l[4](l[1])),u(e,"stroke-width","1.5"),u(e,"stroke",a=l[18]),u(e,"fill","none")},m(s,o){j(s,e,o)},p(s,o){o[0]&36&&n!==(n=s[5](s[0]))&&u(e,"cx",n),o[0]&20&&t!==(t=s[4](s[1]))&&u(e,"cy",t),o[0]&260&&a!==(a=s[18])&&u(e,"stroke",a)},d(s){s&&E(e)}}}function ge(l){let e,n,t,a=T(l[17]),s=[];for(let 
o=0;ol[9][l[9].length-1]&&me(l),C=T(l[2]),L=[];for(let c=0;cc[9][c[9].length-1]?d?d.p(c,z):(d=me(c),d.c(),d.m(s,null)):d&&(d.d(1),d=null),z[0]&308){C=T(c[2]);let r;for(r=0;r{b("process",{x:t,y:a})});const k=({x:d,y:C})=>[_(d),i(C)];return l.$$set=d=>{"value"in d&&n(11,f=d.value),"x"in d&&n(0,h=d.x),"y"in d&&n(1,A=d.y),"colors"in d&&n(12,m=d.colors)},l.$$.update=()=>{l.$$.dirty[0]&2051&&n(3,{x:t,y:a}=te(typeof f=="string"?qe(f):f,h,A),t,(n(2,a),n(11,f),n(0,h),n(1,A))),l.$$.dirty[0]&8&&n(7,s=le(t)),l.$$.dirty[0]&4&&n(6,o=le(a)),l.$$.dirty[0]&128&&n(5,_=x(s,[0,600]).nice()),l.$$.dirty[0]&64&&n(4,i=x(o,[350,0]).nice()),l.$$.dirty[0]&32&&n(10,v=_.ticks(8)),l.$$.dirty[0]&16&&n(9,g=i.ticks(8)),l.$$.dirty[0]&4&&n(8,p=a.reduce((d,C,L)=>({...d,[C.name]:N(L)}),{}))},[h,A,a,t,i,_,o,s,p,g,v,f,m,k]}class we extends I{constructor(e){super(),J(this,e,Ge,De,K,{value:11,x:0,y:1,colors:12},null,[-1,-1])}}function Ie(l){let e,n;return e=new Se({props:{filetype:"text/csv",include_file_metadata:!1,$$slots:{default:[We]},$$scope:{ctx:l}}}),e.$on("load",l[19]),{c(){O(e.$$.fragment)},m(t,a){X(e,t,a),n=!0},p(t,a){const s={};a&8388608&&(s.$$scope={dirty:a,ctx:t}),e.$set(s)},i(t){n||(B(e.$$.fragment,t),n=!0)},o(t){S(e.$$.fragment,t),n=!1},d(t){q(e,t)}}}function Je(l){let e,n,t,a,s;return n=new Ue({}),n.$on("clear",l[17]),a=new we({props:{value:l[14],y:l[4],x:l[5],colors:l[9]}}),a.$on("process",l[18]),{c(){e=R("div"),O(n.$$.fragment),t=V(),O(a.$$.fragment),u(e,"class","chart svelte-etmurc")},m(o,_){j(o,e,_),X(n,e,null),y(e,t),X(a,e,null),s=!0},p(o,_){const i={};_&16384&&(i.value=o[14]),_&16&&(i.y=o[4]),_&32&&(i.x=o[5]),_&512&&(i.colors=o[9]),a.$set(i)},i(o){s||(B(n.$$.fragment,o),B(a.$$.fragment,o),s=!0)},o(o){S(n.$$.fragment,o),S(a.$$.fragment,o),s=!1},d(o){o&&E(e),q(n),q(a)}}}function Ke(l){let e,n,t,a;const s=[xe,Qe],o=[];function _(i,v){return i[15]?0:1}return e=_(l),n=o[e]=s[e](l),{c(){n.c(),t=Z()},m(i,v){o[e].m(i,v),j(i,t,v),a=!0},p(i,v){let g=e;e=_(i),e===g?o[e].p(i,v):(pe(),S(o[g],1,1,()=>{o[g]=null}),ke(),n=o[e],n?n.p(i,v):(n=o[e]=s[e](i),n.c()),B(n,1),n.m(t.parentNode,t))},i(i){a||(B(n),a=!0)},o(i){S(n),a=!1},d(i){i&&E(t),o[e].d(i)}}}function We(l){let e,n;return e=new He({props:{type:"csv"}}),{c(){O(e.$$.fragment)},m(t,a){X(e,t,a),n=!0},p:P,i(t){n||(B(e.$$.fragment,t),n=!0)},o(t){S(e.$$.fragment,t),n=!1},d(t){q(e,t)}}}function Qe(l){let e,n;return e=new Oe({props:{unpadded_box:!0,size:"large",$$slots:{default:[$e]},$$scope:{ctx:l}}}),{c(){O(e.$$.fragment)},m(t,a){X(e,t,a),n=!0},p(t,a){const s={};a&8388608&&(s.$$scope={dirty:a,ctx:t}),e.$set(s)},i(t){n||(B(e.$$.fragment,t),n=!0)},o(t){S(e.$$.fragment,t),n=!1},d(t){q(e,t)}}}function xe(l){let e,n;return e=new we({props:{value:l[15],colors:l[9]}}),{c(){O(e.$$.fragment)},m(t,a){X(e,t,a),n=!0},p(t,a){const s={};a&32768&&(s.value=t[15]),a&512&&(s.colors=t[9]),e.$set(s)},i(t){n||(B(e.$$.fragment,t),n=!0)},o(t){S(e.$$.fragment,t),n=!1},d(t){q(e,t)}}}function $e(l){let e,n;return e=new ye({}),{c(){O(e.$$.fragment)},m(t,a){X(e,t,a),n=!0},i(t){n||(B(e.$$.fragment,t),n=!0)},o(t){S(e.$$.fragment,t),n=!1},d(t){q(e,t)}}}function el(l){let e,n,t,a,s,o,_,i;e=new Fe({props:{show_label:l[8],Icon:ye,label:l[7]||"TimeSeries"}});const v=[l[13]];let g={};for(let m=0;m{h[k]=null}),ke()),~s?(o=h[s],o?o.p(m,b):(o=h[s]=f[s](m),o.c()),B(o,1),o.m(_.parentNode,_)):o=null)},i(m){i||(B(e.$$.fragment,m),B(t.$$.fragment,m),B(o),i=!0)},o(m){S(e.$$.fragment,m),S(t.$$.fragment,m),S(o),i=!1},d(m){m&&(E(n),E(a),E(_)),q(e,m),q(t,m),~s&&h[s].d(m)}}}function ll(l){let e,n;return e=new 
Ne({props:{visible:l[3],variant:l[6]==="dynamic"&&!l[14]?"dashed":"solid",padding:!1,elem_id:l[1],elem_classes:l[2],container:l[10],scale:l[11],min_width:l[12],$$slots:{default:[el]},$$scope:{ctx:l}}}),{c(){O(e.$$.fragment)},m(t,a){X(e,t,a),n=!0},p(t,[a]){const s={};a&8&&(s.visible=t[3]),a&16448&&(s.variant=t[6]==="dynamic"&&!t[14]?"dashed":"solid"),a&2&&(s.elem_id=t[1]),a&4&&(s.elem_classes=t[2]),a&1024&&(s.container=t[10]),a&2048&&(s.scale=t[11]),a&4096&&(s.min_width=t[12]),a&8446961&&(s.$$scope={dirty:a,ctx:t}),e.$set(s)},i(t){n||(B(e.$$.fragment,t),n=!0)},o(t){S(e.$$.fragment,t),n=!1},d(t){q(e,t)}}}function tl(l){return l.data.map(e=>e.reduce((n,t,a)=>({...n,[l.headers[a]]:t}),{}))}function nl(l){const e=atob(l.split(",")[1]),n=l.split(",")[0].split(":")[1].split(";")[0],t=new ArrayBuffer(e.length),a=new Uint8Array(t);for(let s=0;sn.push(a));for(let a=0;as.push(o[a].y)),t.push(s)}return{headers:n,data:t}}function ol(l,e,n){let t;const a=be();let{elem_id:s=""}=e,{elem_classes:o=[]}=e,{visible:_=!0}=e,{value:i}=e,{y:v}=e,{x:g}=e,{mode:f}=e,{label:h}=e,{show_label:A}=e,{colors:m}=e,{container:b=!0}=e,{scale:p=null}=e,{min_width:N=void 0}=e,{loading_status:k}=e,d;function C(r){const w=new FileReader;w.addEventListener("loadend",W=>{n(14,d=W.srcElement.result)}),w.readAsText(r)}function L(r){r.headers&&n(14,d=r.headers.join(",")),r.data.forEach(W=>{n(14,d=d+` -`),n(14,d=d+W.join(","))})}function H(r){return n(0,i={data:r}),r}function M({detail:r}){n(0,i=null),a("change"),a("clear")}const c=({detail:{x:r,y:w}})=>n(0,i=al(r,w)),z=({detail:r})=>H(r);return l.$$set=r=>{"elem_id"in r&&n(1,s=r.elem_id),"elem_classes"in r&&n(2,o=r.elem_classes),"visible"in r&&n(3,_=r.visible),"value"in r&&n(0,i=r.value),"y"in r&&n(4,v=r.y),"x"in r&&n(5,g=r.x),"mode"in r&&n(6,f=r.mode),"label"in r&&n(7,h=r.label),"show_label"in r&&n(8,A=r.show_label),"colors"in r&&n(9,m=r.colors),"container"in r&&n(10,b=r.container),"scale"in r&&n(11,p=r.scale),"min_width"in r&&n(12,N=r.min_width),"loading_status"in r&&n(13,k=r.loading_status)},l.$$.update=()=>{l.$$.dirty&1&&(i&&i.data&&typeof i.data=="string"?i?C(nl(i.data)):n(14,d=null):i&&i.data&&typeof i.data!="string"&&(i||n(14,d=null),L(i))),l.$$.dirty&16385&&n(14,d=i==null?null:d),l.$$.dirty&65&&n(15,t=f==="static"&&i&&tl(i)),l.$$.dirty&1&&a("change")},[i,s,o,_,v,g,f,h,A,m,b,p,N,k,d,t,H,M,c,z]}class sl extends I{constructor(e){super(),J(this,e,ol,ll,K,{elem_id:1,elem_classes:2,visible:3,value:0,y:4,x:5,mode:6,label:7,show_label:8,colors:9,container:10,scale:11,min_width:12,loading_status:13})}}const wl=sl,Ll=["static","dynamic"],jl=l=>({type:{payload:"{data: Array> | string; headers?: Array;}"},description:{payload:"dataset of series"}});export{wl as Component,jl as document,Ll as modes}; -//# sourceMappingURL=index-ff630227.js.map diff --git a/spaces/DataScienceGuild/ARIMA_test/README.md b/spaces/DataScienceGuild/ARIMA_test/README.md deleted file mode 100644 index 4f31bf941a3542f1c0fb652b9b3bd22e3c64b4c4..0000000000000000000000000000000000000000 --- a/spaces/DataScienceGuild/ARIMA_test/README.md +++ /dev/null @@ -1,13 +0,0 @@ ---- -title: ARIMA Test -emoji: 🌖 -colorFrom: green -colorTo: pink -sdk: streamlit -sdk_version: 1.25.0 -app_file: app.py -pinned: false -license: mit ---- - -Check out the configuration reference at https://huggingface.co/docs/hub/spaces-config-reference diff --git a/spaces/Datasculptor/3D-Room-Layout-Estimation_LGT-Net/models/modules/swg_transformer.py b/spaces/Datasculptor/3D-Room-Layout-Estimation_LGT-Net/models/modules/swg_transformer.py deleted 
file mode 100644 index aa368e3616058b30419cc6249862a816f7252fed..0000000000000000000000000000000000000000 --- a/spaces/Datasculptor/3D-Room-Layout-Estimation_LGT-Net/models/modules/swg_transformer.py +++ /dev/null @@ -1,49 +0,0 @@ -from models.modules.transformer_modules import * - - -class SWG_Transformer(nn.Module): - def __init__(self, dim, depth, heads, win_size, dim_head, mlp_dim, - dropout=0., patch_num=None, ape=None, rpe=None, rpe_pos=1): - super().__init__() - self.absolute_pos_embed = None if patch_num is None or ape is None else AbsolutePosition(dim, dropout, - patch_num, ape) - self.pos_dropout = nn.Dropout(dropout) - self.layers = nn.ModuleList([]) - for i in range(depth): - if i % 2 == 0: - attention = WinAttention(dim, win_size=win_size, shift=0 if (i % 3 == 0) else win_size // 2, - heads=heads, dim_head=dim_head, dropout=dropout, rpe=rpe, rpe_pos=rpe_pos) - else: - attention = Attention(dim, heads=heads, dim_head=dim_head, dropout=dropout, - patch_num=patch_num, rpe=rpe, rpe_pos=rpe_pos) - - self.layers.append(nn.ModuleList([ - PreNorm(dim, attention), - PreNorm(dim, FeedForward(dim, mlp_dim, dropout=dropout)), - ])) - - def forward(self, x): - if self.absolute_pos_embed is not None: - x = self.absolute_pos_embed(x) - x = self.pos_dropout(x) - for attn, ff in self.layers: - x = attn(x) + x - x = ff(x) + x - return x - - -if __name__ == '__main__': - token_dim = 1024 - token_len = 256 - - transformer = SWG_Transformer(dim=token_dim, - depth=6, - heads=16, - win_size=8, - dim_head=64, - mlp_dim=2048, - dropout=0.1) - - input = torch.randn(1, token_len, token_dim) - output = transformer(input) - print(output.shape) diff --git a/spaces/Dorado607/ChuanhuChatGPT/run_Linux.sh b/spaces/Dorado607/ChuanhuChatGPT/run_Linux.sh deleted file mode 100644 index 2d26597ae47519f42336ccffc16646713a192ae1..0000000000000000000000000000000000000000 --- a/spaces/Dorado607/ChuanhuChatGPT/run_Linux.sh +++ /dev/null @@ -1,31 +0,0 @@ -#!/bin/bash - -# Get the directory the script lives in -script_dir=$(dirname "$(readlink -f "$0")") - -# Change the working directory to the script's directory -cd "$script_dir" || exit - -# Check whether the Git repository has updates -git remote update -pwd - -if ! git status -uno | grep 'up to date' > /dev/null; then - # If there are updates, stop the currently running server - pkill -f ChuanhuChatbot.py - - # Pull the latest changes - git pull - - # Install dependencies - pip3 install -r requirements.txt - - # Restart the server - nohup python3 ChuanhuChatbot.py & -fi - -# Check whether ChuanhuChatbot.py is running -if ! pgrep -f ChuanhuChatbot.py > /dev/null; then - # If it is not running, start the server - nohup python3 ChuanhuChatbot.py & -fi diff --git a/spaces/DragGan/DragGan-Inversion/PTI/torch_utils/__init__.py b/spaces/DragGan/DragGan-Inversion/PTI/torch_utils/__init__.py deleted file mode 100644 index ece0ea08fe2e939cc260a1dafc0ab5b391b773d9..0000000000000000000000000000000000000000 --- a/spaces/DragGan/DragGan-Inversion/PTI/torch_utils/__init__.py +++ /dev/null @@ -1,9 +0,0 @@ -# Copyright (c) 2021, NVIDIA CORPORATION. All rights reserved. -# -# NVIDIA CORPORATION and its licensors retain all intellectual property -# and proprietary rights in and to this software, related documentation -# and any modifications thereto. Any use, reproduction, disclosure or -# distribution of this software and related documentation without an express -# license agreement from NVIDIA CORPORATION is strictly prohibited. 
- -# empty diff --git a/spaces/DragGan/DragGan-Inversion/PTI/training/__init__.py b/spaces/DragGan/DragGan-Inversion/PTI/training/__init__.py deleted file mode 100644 index e69de29bb2d1d6434b8b29ae775ad8c2e48c5391..0000000000000000000000000000000000000000 diff --git a/spaces/EAraid12/LoRA-DreamBooth-Training-UI/README.md b/spaces/EAraid12/LoRA-DreamBooth-Training-UI/README.md deleted file mode 100644 index b61f96a3f0f5df541bd4e0dfba3a468ceb1c54e9..0000000000000000000000000000000000000000 --- a/spaces/EAraid12/LoRA-DreamBooth-Training-UI/README.md +++ /dev/null @@ -1,15 +0,0 @@ ---- -title: LoRA DreamBooth Training UI -emoji: ⚡ -colorFrom: red -colorTo: purple -sdk: gradio -sdk_version: 3.16.2 -python_version: 3.10.9 -app_file: app.py -pinned: false -license: mit -duplicated_from: lora-library/LoRA-DreamBooth-Training-UI ---- - -Check out the configuration reference at https://huggingface.co/docs/hub/spaces-config-reference diff --git a/spaces/EXPOSUREEE/Ai-Image-Enhancer/realesrgan/models/realesrgan_model.py b/spaces/EXPOSUREEE/Ai-Image-Enhancer/realesrgan/models/realesrgan_model.py deleted file mode 100644 index c298a09c42433177f90001a0a31d029576072ccd..0000000000000000000000000000000000000000 --- a/spaces/EXPOSUREEE/Ai-Image-Enhancer/realesrgan/models/realesrgan_model.py +++ /dev/null @@ -1,258 +0,0 @@ -import numpy as np -import random -import torch -from basicsr.data.degradations import random_add_gaussian_noise_pt, random_add_poisson_noise_pt -from basicsr.data.transforms import paired_random_crop -from basicsr.models.srgan_model import SRGANModel -from basicsr.utils import DiffJPEG, USMSharp -from basicsr.utils.img_process_util import filter2D -from basicsr.utils.registry import MODEL_REGISTRY -from collections import OrderedDict -from torch.nn import functional as F - - -@MODEL_REGISTRY.register() -class RealESRGANModel(SRGANModel): - """RealESRGAN Model for Real-ESRGAN: Training Real-World Blind Super-Resolution with Pure Synthetic Data. - - It mainly performs: - 1. randomly synthesize LQ images in GPU tensors - 2. optimize the networks with GAN training. - """ - - def __init__(self, opt): - super(RealESRGANModel, self).__init__(opt) - self.jpeger = DiffJPEG(differentiable=False).cuda() # simulate JPEG compression artifacts - self.usm_sharpener = USMSharp().cuda() # do usm sharpening - self.queue_size = opt.get('queue_size', 180) - - @torch.no_grad() - def _dequeue_and_enqueue(self): - """It is the training pair pool for increasing the diversity in a batch. - - Batch processing limits the diversity of synthetic degradations in a batch. For example, samples in a - batch could not have different resize scaling factors. Therefore, we employ this training pair pool - to increase the degradation diversity in a batch. 
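- - Once the pool of queue_size samples is full, each call shuffles the pool, swaps the current batch in, and trains on a batch drawn from earlier iterations.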
- """ - # initialize - b, c, h, w = self.lq.size() - if not hasattr(self, 'queue_lr'): - assert self.queue_size % b == 0, f'queue size {self.queue_size} should be divisible by batch size {b}' - self.queue_lr = torch.zeros(self.queue_size, c, h, w).cuda() - _, c, h, w = self.gt.size() - self.queue_gt = torch.zeros(self.queue_size, c, h, w).cuda() - self.queue_ptr = 0 - if self.queue_ptr == self.queue_size: # the pool is full - # do dequeue and enqueue - # shuffle - idx = torch.randperm(self.queue_size) - self.queue_lr = self.queue_lr[idx] - self.queue_gt = self.queue_gt[idx] - # get first b samples - lq_dequeue = self.queue_lr[0:b, :, :, :].clone() - gt_dequeue = self.queue_gt[0:b, :, :, :].clone() - # update the queue - self.queue_lr[0:b, :, :, :] = self.lq.clone() - self.queue_gt[0:b, :, :, :] = self.gt.clone() - - self.lq = lq_dequeue - self.gt = gt_dequeue - else: - # only do enqueue - self.queue_lr[self.queue_ptr:self.queue_ptr + b, :, :, :] = self.lq.clone() - self.queue_gt[self.queue_ptr:self.queue_ptr + b, :, :, :] = self.gt.clone() - self.queue_ptr = self.queue_ptr + b - - @torch.no_grad() - def feed_data(self, data): - """Accept data from dataloader, and then add two-order degradations to obtain LQ images. - """ - if self.is_train and self.opt.get('high_order_degradation', True): - # training data synthesis - self.gt = data['gt'].to(self.device) - self.gt_usm = self.usm_sharpener(self.gt) - - self.kernel1 = data['kernel1'].to(self.device) - self.kernel2 = data['kernel2'].to(self.device) - self.sinc_kernel = data['sinc_kernel'].to(self.device) - - ori_h, ori_w = self.gt.size()[2:4] - - # ----------------------- The first degradation process ----------------------- # - # blur - out = filter2D(self.gt_usm, self.kernel1) - # random resize - updown_type = random.choices(['up', 'down', 'keep'], self.opt['resize_prob'])[0] - if updown_type == 'up': - scale = np.random.uniform(1, self.opt['resize_range'][1]) - elif updown_type == 'down': - scale = np.random.uniform(self.opt['resize_range'][0], 1) - else: - scale = 1 - mode = random.choice(['area', 'bilinear', 'bicubic']) - out = F.interpolate(out, scale_factor=scale, mode=mode) - # add noise - gray_noise_prob = self.opt['gray_noise_prob'] - if np.random.uniform() < self.opt['gaussian_noise_prob']: - out = random_add_gaussian_noise_pt( - out, sigma_range=self.opt['noise_range'], clip=True, rounds=False, gray_prob=gray_noise_prob) - else: - out = random_add_poisson_noise_pt( - out, - scale_range=self.opt['poisson_scale_range'], - gray_prob=gray_noise_prob, - clip=True, - rounds=False) - # JPEG compression - jpeg_p = out.new_zeros(out.size(0)).uniform_(*self.opt['jpeg_range']) - out = torch.clamp(out, 0, 1) # clamp to [0, 1], otherwise JPEGer will result in unpleasant artifacts - out = self.jpeger(out, quality=jpeg_p) - - # ----------------------- The second degradation process ----------------------- # - # blur - if np.random.uniform() < self.opt['second_blur_prob']: - out = filter2D(out, self.kernel2) - # random resize - updown_type = random.choices(['up', 'down', 'keep'], self.opt['resize_prob2'])[0] - if updown_type == 'up': - scale = np.random.uniform(1, self.opt['resize_range2'][1]) - elif updown_type == 'down': - scale = np.random.uniform(self.opt['resize_range2'][0], 1) - else: - scale = 1 - mode = random.choice(['area', 'bilinear', 'bicubic']) - out = F.interpolate( - out, size=(int(ori_h / self.opt['scale'] * scale), int(ori_w / self.opt['scale'] * scale)), mode=mode) - # add noise - gray_noise_prob = self.opt['gray_noise_prob2'] - 
if np.random.uniform() < self.opt['gaussian_noise_prob2']: - out = random_add_gaussian_noise_pt( - out, sigma_range=self.opt['noise_range2'], clip=True, rounds=False, gray_prob=gray_noise_prob) - else: - out = random_add_poisson_noise_pt( - out, - scale_range=self.opt['poisson_scale_range2'], - gray_prob=gray_noise_prob, - clip=True, - rounds=False) - - # JPEG compression + the final sinc filter - # We also need to resize images to desired sizes. We group [resize back + sinc filter] together - # as one operation. - # We consider two orders: - # 1. [resize back + sinc filter] + JPEG compression - # 2. JPEG compression + [resize back + sinc filter] - # Empirically, we find other combinations (sinc + JPEG + Resize) will introduce twisted lines. - if np.random.uniform() < 0.5: - # resize back + the final sinc filter - mode = random.choice(['area', 'bilinear', 'bicubic']) - out = F.interpolate(out, size=(ori_h // self.opt['scale'], ori_w // self.opt['scale']), mode=mode) - out = filter2D(out, self.sinc_kernel) - # JPEG compression - jpeg_p = out.new_zeros(out.size(0)).uniform_(*self.opt['jpeg_range2']) - out = torch.clamp(out, 0, 1) - out = self.jpeger(out, quality=jpeg_p) - else: - # JPEG compression - jpeg_p = out.new_zeros(out.size(0)).uniform_(*self.opt['jpeg_range2']) - out = torch.clamp(out, 0, 1) - out = self.jpeger(out, quality=jpeg_p) - # resize back + the final sinc filter - mode = random.choice(['area', 'bilinear', 'bicubic']) - out = F.interpolate(out, size=(ori_h // self.opt['scale'], ori_w // self.opt['scale']), mode=mode) - out = filter2D(out, self.sinc_kernel) - - # clamp and round - self.lq = torch.clamp((out * 255.0).round(), 0, 255) / 255. - - # random crop - gt_size = self.opt['gt_size'] - (self.gt, self.gt_usm), self.lq = paired_random_crop([self.gt, self.gt_usm], self.lq, gt_size, - self.opt['scale']) - - # training pair pool - self._dequeue_and_enqueue() - # sharpen self.gt again, as we have changed the self.gt with self._dequeue_and_enqueue - self.gt_usm = self.usm_sharpener(self.gt) - self.lq = self.lq.contiguous() # for the warning: grad and param do not obey the gradient layout contract - else: - # for paired training or validation - self.lq = data['lq'].to(self.device) - if 'gt' in data: - self.gt = data['gt'].to(self.device) - self.gt_usm = self.usm_sharpener(self.gt) - - def nondist_validation(self, dataloader, current_iter, tb_logger, save_img): - # do not use the synthetic process during validation - self.is_train = False - super(RealESRGANModel, self).nondist_validation(dataloader, current_iter, tb_logger, save_img) - self.is_train = True - - def optimize_parameters(self, current_iter): - # usm sharpening - l1_gt = self.gt_usm - percep_gt = self.gt_usm - gan_gt = self.gt_usm - if self.opt['l1_gt_usm'] is False: - l1_gt = self.gt - if self.opt['percep_gt_usm'] is False: - percep_gt = self.gt - if self.opt['gan_gt_usm'] is False: - gan_gt = self.gt - - # optimize net_g - for p in self.net_d.parameters(): - p.requires_grad = False - - self.optimizer_g.zero_grad() - self.output = self.net_g(self.lq) - - l_g_total = 0 - loss_dict = OrderedDict() - if (current_iter % self.net_d_iters == 0 and current_iter > self.net_d_init_iters): - # pixel loss - if self.cri_pix: - l_g_pix = self.cri_pix(self.output, l1_gt) - l_g_total += l_g_pix - loss_dict['l_g_pix'] = l_g_pix - # perceptual loss - if self.cri_perceptual: - l_g_percep, l_g_style = self.cri_perceptual(self.output, percep_gt) - if l_g_percep is not None: - l_g_total += l_g_percep - loss_dict['l_g_percep'] = 
l_g_percep - if l_g_style is not None: - l_g_total += l_g_style - loss_dict['l_g_style'] = l_g_style - # gan loss - fake_g_pred = self.net_d(self.output) - l_g_gan = self.cri_gan(fake_g_pred, True, is_disc=False) - l_g_total += l_g_gan - loss_dict['l_g_gan'] = l_g_gan - - l_g_total.backward() - self.optimizer_g.step() - - # optimize net_d - for p in self.net_d.parameters(): - p.requires_grad = True - - self.optimizer_d.zero_grad() - # real - real_d_pred = self.net_d(gan_gt) - l_d_real = self.cri_gan(real_d_pred, True, is_disc=True) - loss_dict['l_d_real'] = l_d_real - loss_dict['out_d_real'] = torch.mean(real_d_pred.detach()) - l_d_real.backward() - # fake - fake_d_pred = self.net_d(self.output.detach().clone()) # clone for pt1.9 - l_d_fake = self.cri_gan(fake_d_pred, False, is_disc=True) - loss_dict['l_d_fake'] = l_d_fake - loss_dict['out_d_fake'] = torch.mean(fake_d_pred.detach()) - l_d_fake.backward() - self.optimizer_d.step() - - if self.ema_decay > 0: - self.model_ema(decay=self.ema_decay) - - self.log_dict = self.reduce_loss_dict(loss_dict) diff --git a/spaces/Eddycrack864/Applio-Inference/infer/lib/uvr5_pack/lib_v5/layers_new.py b/spaces/Eddycrack864/Applio-Inference/infer/lib/uvr5_pack/lib_v5/layers_new.py deleted file mode 100644 index 44153b6a23399c6938affc61c71919eaa172bcee..0000000000000000000000000000000000000000 --- a/spaces/Eddycrack864/Applio-Inference/infer/lib/uvr5_pack/lib_v5/layers_new.py +++ /dev/null @@ -1,125 +0,0 @@ -import torch -import torch.nn.functional as F -from torch import nn - -from . import spec_utils - - -class Conv2DBNActiv(nn.Module): - def __init__(self, nin, nout, ksize=3, stride=1, pad=1, dilation=1, activ=nn.ReLU): - super(Conv2DBNActiv, self).__init__() - self.conv = nn.Sequential( - nn.Conv2d( - nin, - nout, - kernel_size=ksize, - stride=stride, - padding=pad, - dilation=dilation, - bias=False, - ), - nn.BatchNorm2d(nout), - activ(), - ) - - def __call__(self, x): - return self.conv(x) - - -class Encoder(nn.Module): - def __init__(self, nin, nout, ksize=3, stride=1, pad=1, activ=nn.LeakyReLU): - super(Encoder, self).__init__() - self.conv1 = Conv2DBNActiv(nin, nout, ksize, stride, pad, activ=activ) - self.conv2 = Conv2DBNActiv(nout, nout, ksize, 1, pad, activ=activ) - - def __call__(self, x): - h = self.conv1(x) - h = self.conv2(h) - - return h - - -class Decoder(nn.Module): - def __init__( - self, nin, nout, ksize=3, stride=1, pad=1, activ=nn.ReLU, dropout=False - ): - super(Decoder, self).__init__() - self.conv1 = Conv2DBNActiv(nin, nout, ksize, 1, pad, activ=activ) - # self.conv2 = Conv2DBNActiv(nout, nout, ksize, 1, pad, activ=activ) - self.dropout = nn.Dropout2d(0.1) if dropout else None - - def __call__(self, x, skip=None): - x = F.interpolate(x, scale_factor=2, mode="bilinear", align_corners=True) - - if skip is not None: - skip = spec_utils.crop_center(skip, x) - x = torch.cat([x, skip], dim=1) - - h = self.conv1(x) - # h = self.conv2(h) - - if self.dropout is not None: - h = self.dropout(h) - - return h - - -class ASPPModule(nn.Module): - def __init__(self, nin, nout, dilations=(4, 8, 12), activ=nn.ReLU, dropout=False): - super(ASPPModule, self).__init__() - self.conv1 = nn.Sequential( - nn.AdaptiveAvgPool2d((1, None)), - Conv2DBNActiv(nin, nout, 1, 1, 0, activ=activ), - ) - self.conv2 = Conv2DBNActiv(nin, nout, 1, 1, 0, activ=activ) - self.conv3 = Conv2DBNActiv( - nin, nout, 3, 1, dilations[0], dilations[0], activ=activ - ) - self.conv4 = Conv2DBNActiv( - nin, nout, 3, 1, dilations[1], dilations[1], activ=activ - ) - self.conv5 = 
Conv2DBNActiv( - nin, nout, 3, 1, dilations[2], dilations[2], activ=activ - ) - self.bottleneck = Conv2DBNActiv(nout * 5, nout, 1, 1, 0, activ=activ) - self.dropout = nn.Dropout2d(0.1) if dropout else None - - def forward(self, x): - _, _, h, w = x.size() - feat1 = F.interpolate( - self.conv1(x), size=(h, w), mode="bilinear", align_corners=True - ) - feat2 = self.conv2(x) - feat3 = self.conv3(x) - feat4 = self.conv4(x) - feat5 = self.conv5(x) - out = torch.cat((feat1, feat2, feat3, feat4, feat5), dim=1) - out = self.bottleneck(out) - - if self.dropout is not None: - out = self.dropout(out) - - return out - - -class LSTMModule(nn.Module): - def __init__(self, nin_conv, nin_lstm, nout_lstm): - super(LSTMModule, self).__init__() - self.conv = Conv2DBNActiv(nin_conv, 1, 1, 1, 0) - self.lstm = nn.LSTM( - input_size=nin_lstm, hidden_size=nout_lstm // 2, bidirectional=True - ) - self.dense = nn.Sequential( - nn.Linear(nout_lstm, nin_lstm), nn.BatchNorm1d(nin_lstm), nn.ReLU() - ) - - def forward(self, x): - N, _, nbins, nframes = x.size() - h = self.conv(x)[:, 0] # N, nbins, nframes - h = h.permute(2, 0, 1) # nframes, N, nbins - h, _ = self.lstm(h) - h = self.dense(h.reshape(-1, h.size()[-1])) # nframes * N, nbins - h = h.reshape(nframes, N, 1, nbins) - h = h.permute(1, 2, 3, 0) - - return h diff --git a/spaces/EronSamez/RVC_HFmeu/demucs/audio.py b/spaces/EronSamez/RVC_HFmeu/demucs/audio.py deleted file mode 100644 index b29f156e4afb5fbda32c35777022caeadf50d711..0000000000000000000000000000000000000000 --- a/spaces/EronSamez/RVC_HFmeu/demucs/audio.py +++ /dev/null @@ -1,172 +0,0 @@ -# Copyright (c) Facebook, Inc. and its affiliates. -# All rights reserved. -# -# This source code is licensed under the license found in the -# LICENSE file in the root directory of this source tree. -import json -import subprocess as sp -from pathlib import Path - -import julius -import numpy as np -import torch - -from .utils import temp_filenames - - -def _read_info(path): - stdout_data = sp.check_output([ - 'ffprobe', "-loglevel", "panic", - str(path), '-print_format', 'json', '-show_format', '-show_streams' - ]) - return json.loads(stdout_data.decode('utf-8')) - - -class AudioFile: - """ - Allows to read audio from any format supported by ffmpeg, as well as resampling or - converting to mono on the fly. See :method:`read` for more details. 
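- - Requires the ffmpeg and ffprobe binaries to be available on the PATH.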
- """ - def __init__(self, path: Path): - self.path = Path(path) - self._info = None - - def __repr__(self): - features = [("path", self.path)] - features.append(("samplerate", self.samplerate())) - features.append(("channels", self.channels())) - features.append(("streams", len(self))) - features_str = ", ".join(f"{name}={value}" for name, value in features) - return f"AudioFile({features_str})" - - @property - def info(self): - if self._info is None: - self._info = _read_info(self.path) - return self._info - - @property - def duration(self): - return float(self.info['format']['duration']) - - @property - def _audio_streams(self): - return [ - index for index, stream in enumerate(self.info["streams"]) - if stream["codec_type"] == "audio" - ] - - def __len__(self): - return len(self._audio_streams) - - def channels(self, stream=0): - return int(self.info['streams'][self._audio_streams[stream]]['channels']) - - def samplerate(self, stream=0): - return int(self.info['streams'][self._audio_streams[stream]]['sample_rate']) - - def read(self, - seek_time=None, - duration=None, - streams=slice(None), - samplerate=None, - channels=None, - temp_folder=None): - """ - Slightly more efficient implementation than stempeg, - in particular, this will extract all stems at once - rather than having to loop over one file multiple times - for each stream. - - Args: - seek_time (float): seek time in seconds or None if no seeking is needed. - duration (float): duration in seconds to extract or None to extract until the end. - streams (slice, int or list): streams to extract, can be a single int, a list or - a slice. If it is a slice or list, the output will be of size [S, C, T] - with S the number of streams, C the number of channels and T the number of samples. - If it is an int, the output will be [C, T]. - samplerate (int): if provided, will resample on the fly. If None, no resampling will - be done. Original sampling rate can be obtained with :method:`samplerate`. - channels (int): if 1, will convert to mono. We do not rely on ffmpeg for that - as ffmpeg automatically scale by +3dB to conserve volume when playing on speakers. - See https://sound.stackexchange.com/a/42710. - Our definition of mono is simply the average of the two channels. Any other - value will be ignored. - temp_folder (str or Path or None): temporary folder to use for decoding. 
- - - """ - streams = np.array(range(len(self)))[streams] - single = not isinstance(streams, np.ndarray) - if single: - streams = [streams] - - if duration is None: - target_size = None - query_duration = None - else: - target_size = int((samplerate or self.samplerate()) * duration) - query_duration = float((target_size + 1) / (samplerate or self.samplerate())) - - with temp_filenames(len(streams)) as filenames: - command = ['ffmpeg', '-y'] - command += ['-loglevel', 'panic'] - if seek_time: - command += ['-ss', str(seek_time)] - command += ['-i', str(self.path)] - for stream, filename in zip(streams, filenames): - command += ['-map', f'0:{self._audio_streams[stream]}'] - if query_duration is not None: - command += ['-t', str(query_duration)] - command += ['-threads', '1'] - command += ['-f', 'f32le'] - if samplerate is not None: - command += ['-ar', str(samplerate)] - command += [filename] - - sp.run(command, check=True) - wavs = [] - for filename in filenames: - wav = np.fromfile(filename, dtype=np.float32) - wav = torch.from_numpy(wav) - wav = wav.view(-1, self.channels()).t() - if channels is not None: - wav = convert_audio_channels(wav, channels) - if target_size is not None: - wav = wav[..., :target_size] - wavs.append(wav) - wav = torch.stack(wavs, dim=0) - if single: - wav = wav[0] - return wav - - -def convert_audio_channels(wav, channels=2): - """Convert audio to the given number of channels.""" - *shape, src_channels, length = wav.shape - if src_channels == channels: - pass - elif channels == 1: - # Case 1: - # The caller asked 1-channel audio, but the stream have multiple - # channels, downmix all channels. - wav = wav.mean(dim=-2, keepdim=True) - elif src_channels == 1: - # Case 2: - # The caller asked for multiple channels, but the input file have - # one single channel, replicate the audio over all channels. - wav = wav.expand(*shape, channels, length) - elif src_channels >= channels: - # Case 3: - # The caller asked for multiple channels, and the input file have - # more channels than requested. In that case return the first channels. - wav = wav[..., :channels, :] - else: - # Case 4: What is a reasonable choice here? 
- raise ValueError('The audio file has fewer channels than requested but is not mono.') - return wav - - -def convert_audio(wav, from_samplerate, to_samplerate, channels): - wav = convert_audio_channels(wav, channels) - return julius.resample_frac(wav, from_samplerate, to_samplerate) diff --git a/spaces/EuroPython2022/pyro-vision/README.md b/spaces/EuroPython2022/pyro-vision/README.md deleted file mode 100644 index c7e4d5bc1a24491d4d007000c316cb8fd77d14be..0000000000000000000000000000000000000000 --- a/spaces/EuroPython2022/pyro-vision/README.md +++ /dev/null @@ -1,13 +0,0 @@ ---- -title: PyroVision -emoji: 🔥 -colorFrom: green -colorTo: brown -sdk: gradio -sdk_version: 3.0.24 -app_file: app.py -pinned: false -license: apache-2.0 ---- - -Check out the configuration reference at https://huggingface.co/docs/hub/spaces-config-reference diff --git a/spaces/FYP-23-S1-21/Refineverse_Plugin/static/ProjectBreakdown.css b/spaces/FYP-23-S1-21/Refineverse_Plugin/static/ProjectBreakdown.css deleted file mode 100644 index 25092d21a6ff0bf16f386ad715cd4b35b1542782..0000000000000000000000000000000000000000 --- a/spaces/FYP-23-S1-21/Refineverse_Plugin/static/ProjectBreakdown.css +++ /dev/null @@ -1,190 +0,0 @@ -body{ - background-image:url("../static/Images/Background.jpg"); - background-repeat: no-repeat; -background-size: cover; -} -* { - margin: 0; - padding: 0; - box-sizing: border-box; -} - -header { - display: flex; - align-items: center; - zoom: 130%; - padding: 15px; -} - -header img { - /* Original width & height is 70px */ - width: 200px; - height: 200px; - margin-left: 300px; -} - -header h1 { - margin-left: 50px; - font-size: 40px; - color: rgb(26, 25, 25); -} - -main { - display: flex; - justify-content: space-between; - margin-top: 20px; - margin-bottom: 20px; -} - -.user-story { - flex-basis: 30%; - margin-left: 50px; -} - -.user-story h2 { - margin-bottom: 10px; -} - -textarea { - width: 900px; - height: 400px; - padding: 10px; - border: 1px solid #ccc; - border-radius: 5px; - resize: none !important; - margin-bottom: 10px; - -} - -table { - flex-basis: 60%; - margin-right: 20px; - border: 1px solid #ccc; - border-radius: 5px; - overflow-y: scroll; /* To make the table scrollable */ - height: 200px; /* To match text area size */ - display: block; - border-collapse: separate; /* Added to separate the border between the headers and the data cells */ - border-spacing: 0; /* Added to remove the extra space between the border */ -} - -#breakdown-table { - width: 900px; - height: 400px; - margin-left: 20px; -} - -#breakdown-table th, -#breakdown-table td { - border: 1px solid #ddd; - padding: 8px; - text-align: left; -} - -#breakdown-table th:first-child { - border-left: none; -} - -#breakdown-table th:last-child { - border-right: none; -} - -#breakdown-table th:not(:first-child) { - border-left: none; - border-right: none; -} - -#breakdown-table th div { - border-bottom: 1px solid #ddd; - padding: 8px; -} - -#breakdown-table td div { - padding: 8px; -} - -#breakdown-table thead th { - background-color: #f2f2f2; -} - -#breakdown-table tbody tr:nth-child(even) { - background-color: #f2f2f2; -} - -#breakdown-table tbody tr:hover { - background-color: #ddd; -} - - -#clear-btn { - background-color: #d3d5d6; - color: rgb(32, 31, 31); - border: 2px; - border-radius: 5px; - padding: 10px; - cursor: pointer; -} - -#breakdown-btn { - background-color: #2f3030; - color: white; - border: none; - border-radius: 5px; - padding: 10px; - cursor: pointer; - /* Added these 2 lines to make button appear at bottom-right 
of user story contents. - Not sure if this is the correct way to do it, but it looks alright to me. */ - position: absolute; - left: 1750px; -} - -.user-story-list { - flex-basis: 60%; - margin-right: 20px; -} - -.user-story-list h2 { - margin-bottom: 10px; -} - -.scrollable-box { - height: 200px; - overflow-y: auto; - border: 1px solid #ccc; - border-radius: 5px; - resize: none; -} - -#user-story-ul { - list-style: none; - padding: 10px; -} - -.back-Btn-Container { - display: flex; - justify-content: end; - align-items: end; - padding: 0 20px; - margin-top: 20px; -} - -.buttons-container { - display: flex; - justify-content: space-between; - align-items: center; - padding: 0 20px; - margin-top: 20px; -} - -.back-btn { - background-color: #555; - color: #fff; - padding: 10px 20px; - border: none; - border-radius: 5px; - font-size: 16px; - cursor: pointer; - margin-right: 150px; - width: 110px; - height: 40px; -} \ No newline at end of file diff --git a/spaces/Faridmaruf/rvc-Blue-archives/lib/infer_pack/commons.py b/spaces/Faridmaruf/rvc-Blue-archives/lib/infer_pack/commons.py deleted file mode 100644 index 54470986f37825b35d90d7efa7437d1c26b87215..0000000000000000000000000000000000000000 --- a/spaces/Faridmaruf/rvc-Blue-archives/lib/infer_pack/commons.py +++ /dev/null @@ -1,166 +0,0 @@ -import math -import numpy as np -import torch -from torch import nn -from torch.nn import functional as F - - -def init_weights(m, mean=0.0, std=0.01): - classname = m.__class__.__name__ - if classname.find("Conv") != -1: - m.weight.data.normal_(mean, std) - - -def get_padding(kernel_size, dilation=1): - return int((kernel_size * dilation - dilation) / 2) - - -def convert_pad_shape(pad_shape): - l = pad_shape[::-1] - pad_shape = [item for sublist in l for item in sublist] - return pad_shape - - -def kl_divergence(m_p, logs_p, m_q, logs_q): - """KL(P||Q)""" - kl = (logs_q - logs_p) - 0.5 - kl += ( - 0.5 * (torch.exp(2.0 * logs_p) + ((m_p - m_q) ** 2)) * torch.exp(-2.0 * logs_q) - ) - return kl - - -def rand_gumbel(shape): - """Sample from the Gumbel distribution, protect from overflows.""" - uniform_samples = torch.rand(shape) * 0.99998 + 0.00001 - return -torch.log(-torch.log(uniform_samples)) - - -def rand_gumbel_like(x): - g = rand_gumbel(x.size()).to(dtype=x.dtype, device=x.device) - return g - - -def slice_segments(x, ids_str, segment_size=4): - ret = torch.zeros_like(x[:, :, :segment_size]) - for i in range(x.size(0)): - idx_str = ids_str[i] - idx_end = idx_str + segment_size - ret[i] = x[i, :, idx_str:idx_end] - return ret - - -def slice_segments2(x, ids_str, segment_size=4): - ret = torch.zeros_like(x[:, :segment_size]) - for i in range(x.size(0)): - idx_str = ids_str[i] - idx_end = idx_str + segment_size - ret[i] = x[i, idx_str:idx_end] - return ret - - -def rand_slice_segments(x, x_lengths=None, segment_size=4): - b, d, t = x.size() - if x_lengths is None: - x_lengths = t - ids_str_max = x_lengths - segment_size + 1 - ids_str = (torch.rand([b]).to(device=x.device) * ids_str_max).to(dtype=torch.long) - ret = slice_segments(x, ids_str, segment_size) - return ret, ids_str - - -def get_timing_signal_1d(length, channels, min_timescale=1.0, max_timescale=1.0e4): - position = torch.arange(length, dtype=torch.float) - num_timescales = channels // 2 - log_timescale_increment = math.log(float(max_timescale) / float(min_timescale)) / ( - num_timescales - 1 - ) - inv_timescales = min_timescale * torch.exp( - torch.arange(num_timescales, dtype=torch.float) * -log_timescale_increment - ) - scaled_time = 
position.unsqueeze(0) * inv_timescales.unsqueeze(1) - signal = torch.cat([torch.sin(scaled_time), torch.cos(scaled_time)], 0) - signal = F.pad(signal, [0, 0, 0, channels % 2]) - signal = signal.view(1, channels, length) - return signal - - -def add_timing_signal_1d(x, min_timescale=1.0, max_timescale=1.0e4): - b, channels, length = x.size() - signal = get_timing_signal_1d(length, channels, min_timescale, max_timescale) - return x + signal.to(dtype=x.dtype, device=x.device) - - -def cat_timing_signal_1d(x, min_timescale=1.0, max_timescale=1.0e4, axis=1): - b, channels, length = x.size() - signal = get_timing_signal_1d(length, channels, min_timescale, max_timescale) - return torch.cat([x, signal.to(dtype=x.dtype, device=x.device)], axis) - - -def subsequent_mask(length): - mask = torch.tril(torch.ones(length, length)).unsqueeze(0).unsqueeze(0) - return mask - - -@torch.jit.script -def fused_add_tanh_sigmoid_multiply(input_a, input_b, n_channels): - n_channels_int = n_channels[0] - in_act = input_a + input_b - t_act = torch.tanh(in_act[:, :n_channels_int, :]) - s_act = torch.sigmoid(in_act[:, n_channels_int:, :]) - acts = t_act * s_act - return acts - - -def convert_pad_shape(pad_shape): - l = pad_shape[::-1] - pad_shape = [item for sublist in l for item in sublist] - return pad_shape - - -def shift_1d(x): - x = F.pad(x, convert_pad_shape([[0, 0], [0, 0], [1, 0]]))[:, :, :-1] - return x - - -def sequence_mask(length, max_length=None): - if max_length is None: - max_length = length.max() - x = torch.arange(max_length, dtype=length.dtype, device=length.device) - return x.unsqueeze(0) < length.unsqueeze(1) - - -def generate_path(duration, mask): - """ - duration: [b, 1, t_x] - mask: [b, 1, t_y, t_x] - """ - device = duration.device - - b, _, t_y, t_x = mask.shape - cum_duration = torch.cumsum(duration, -1) - - cum_duration_flat = cum_duration.view(b * t_x) - path = sequence_mask(cum_duration_flat, t_y).to(mask.dtype) - path = path.view(b, t_x, t_y) - path = path - F.pad(path, convert_pad_shape([[0, 0], [1, 0], [0, 0]]))[:, :-1] - path = path.unsqueeze(1).transpose(2, 3) * mask - return path - - -def clip_grad_value_(parameters, clip_value, norm_type=2): - if isinstance(parameters, torch.Tensor): - parameters = [parameters] - parameters = list(filter(lambda p: p.grad is not None, parameters)) - norm_type = float(norm_type) - if clip_value is not None: - clip_value = float(clip_value) - - total_norm = 0 - for p in parameters: - param_norm = p.grad.data.norm(norm_type) - total_norm += param_norm.item() ** norm_type - if clip_value is not None: - p.grad.data.clamp_(min=-clip_value, max=clip_value) - total_norm = total_norm ** (1.0 / norm_type) - return total_norm diff --git a/spaces/Fatima990/text_generator1/app.py b/spaces/Fatima990/text_generator1/app.py deleted file mode 100644 index f1d4beb0a8f3cee27903f527b6bf8daa485a75a0..0000000000000000000000000000000000000000 --- a/spaces/Fatima990/text_generator1/app.py +++ /dev/null @@ -1,3 +0,0 @@ -import gradio as gr - -gr.Interface.load("huggingface/gpt2").launch() \ No newline at end of file diff --git a/spaces/FrankZxShen/vits-fast-finetuning-umamusume/text/sanskrit.py b/spaces/FrankZxShen/vits-fast-finetuning-umamusume/text/sanskrit.py deleted file mode 100644 index 0223aaac384a2f850f5bc20651fc18eb964607d0..0000000000000000000000000000000000000000 --- a/spaces/FrankZxShen/vits-fast-finetuning-umamusume/text/sanskrit.py +++ /dev/null @@ -1,62 +0,0 @@ -import re -from indic_transliteration import sanscript - - -# List of (iast, ipa) pairs: 
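-# They are applied in order as regex substitutions, so earlier outputs feed later patterns (e.g. 'k' -> 'k⁼' before 'k⁼h' -> 'kʰ').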
-_iast_to_ipa = [(re.compile('%s' % x[0]), x[1]) for x in [ - ('a', 'ə'), - ('ā', 'aː'), - ('ī', 'iː'), - ('ū', 'uː'), - ('ṛ', 'ɹ`'), - ('ṝ', 'ɹ`ː'), - ('ḷ', 'l`'), - ('ḹ', 'l`ː'), - ('e', 'eː'), - ('o', 'oː'), - ('k', 'k⁼'), - ('k⁼h', 'kʰ'), - ('g', 'g⁼'), - ('g⁼h', 'gʰ'), - ('ṅ', 'ŋ'), - ('c', 'ʧ⁼'), - ('ʧ⁼h', 'ʧʰ'), - ('j', 'ʥ⁼'), - ('ʥ⁼h', 'ʥʰ'), - ('ñ', 'n^'), - ('ṭ', 't`⁼'), - ('t`⁼h', 't`ʰ'), - ('ḍ', 'd`⁼'), - ('d`⁼h', 'd`ʰ'), - ('ṇ', 'n`'), - ('t', 't⁼'), - ('t⁼h', 'tʰ'), - ('d', 'd⁼'), - ('d⁼h', 'dʰ'), - ('p', 'p⁼'), - ('p⁼h', 'pʰ'), - ('b', 'b⁼'), - ('b⁼h', 'bʰ'), - ('y', 'j'), - ('ś', 'ʃ'), - ('ṣ', 's`'), - ('r', 'ɾ'), - ('l̤', 'l`'), - ('h', 'ɦ'), - ("'", ''), - ('~', '^'), - ('ṃ', '^') -]] - - -def devanagari_to_ipa(text): - text = text.replace('ॐ', 'ओम्') - text = re.sub(r'\s*।\s*$', '.', text) - text = re.sub(r'\s*।\s*', ', ', text) - text = re.sub(r'\s*॥', '.', text) - text = sanscript.transliterate(text, sanscript.DEVANAGARI, sanscript.IAST) - for regex, replacement in _iast_to_ipa: - text = re.sub(regex, replacement, text) - text = re.sub('(.)[`ː]*ḥ', lambda x: x.group(0) - [:-1]+'h'+x.group(1)+'*', text) - return text diff --git a/spaces/GeorgeOrville/bingo/src/components/chat-history.tsx b/spaces/GeorgeOrville/bingo/src/components/chat-history.tsx deleted file mode 100644 index feb81de66562edda8f40d3c0cc717202c92b6509..0000000000000000000000000000000000000000 --- a/spaces/GeorgeOrville/bingo/src/components/chat-history.tsx +++ /dev/null @@ -1,48 +0,0 @@ -import { IconEdit, IconTrash, IconMore, IconDownload } from "./ui/icons" - -export function ChatHistory() { - return ( -
-    <div>
-      <div>历史记录</div>
-      <div>
-        <div>无标题的聊天</div>
-        <div>上午1:42</div>
-        <div>
-          <IconEdit />
-          <IconTrash />
-          <IconMore />
-          <IconDownload />
-        </div>
-      </div>
-    </div>
-  )
-}
diff --git a/spaces/Gigabot/ostris-ikea-instructions-lora-sdxl/app.py b/spaces/Gigabot/ostris-ikea-instructions-lora-sdxl/app.py
deleted file mode 100644
index 1d6c504f95564cc6ee4e570f16198f96378d0a09..0000000000000000000000000000000000000000
--- a/spaces/Gigabot/ostris-ikea-instructions-lora-sdxl/app.py
+++ /dev/null
@@ -1,3 +0,0 @@
-import gradio as gr
-
-gr.Interface.load("models/ostris/ikea-instructions-lora-sdxl").launch()
\ No newline at end of file
diff --git a/spaces/GolDNenex/Super-Resolution-Anime-Diffusion/RealESRGANv030/docs/CONTRIBUTING.md b/spaces/GolDNenex/Super-Resolution-Anime-Diffusion/RealESRGANv030/docs/CONTRIBUTING.md
deleted file mode 100644
index 75990c2ce7545b72fb6ebad8295ca4895f437205..0000000000000000000000000000000000000000
--- a/spaces/GolDNenex/Super-Resolution-Anime-Diffusion/RealESRGANv030/docs/CONTRIBUTING.md
+++ /dev/null
@@ -1,44 +0,0 @@
-# Contributing to Real-ESRGAN
-
-:art: Real-ESRGAN needs your contributions. Any contributions are welcome, such as new features/models/typo fixes/suggestions/maintenance, *etc*. See [CONTRIBUTING.md](docs/CONTRIBUTING.md). All contributors are listed [here](README.md#hugs-acknowledgement).
-
-We like open-source and want to develop practical algorithms for general image restoration. However, individual strength is limited. So, any kind of contribution is welcome, such as:
-
-- New features
-- New models (your fine-tuned models)
-- Bug fixes
-- Typo fixes
-- Suggestions
-- Maintenance
-- Documents
-- *etc*
-
-## Workflow
-
-1. Fork and pull the latest Real-ESRGAN repository
-1. Check out a new branch (do not use the master branch for PRs)
-1. Commit your changes
-1. Create a PR
-
-**Note**:
-
-1. Please check the code style and linting
    1. The style configuration is specified in [setup.cfg](setup.cfg)
    1. If you use VSCode, the settings are configured in [.vscode/settings.json](.vscode/settings.json)
-1. We strongly recommend using the `pre-commit` hook. It will check your code style and linting before each commit.
    1. In the root path of the project folder, run `pre-commit install`
    1. The pre-commit configuration is listed in [.pre-commit-config.yaml](.pre-commit-config.yaml)
-1. It is better to [open a discussion](https://github.com/xinntao/Real-ESRGAN/discussions) before making large changes.
    1. Discussions are welcome :sunglasses:. I will try my best to join them.
-
-## TODO List
-
-:zero: The most straightforward way of improving model performance is to fine-tune it on specific datasets.
-
-Here are some TODOs:
-
-- [ ] optimize for human faces
-- [ ] optimize for texts
-- [ ] support controllable restoration strength
-
-:one: There are also [several issues](https://github.com/xinntao/Real-ESRGAN/issues) that need help from contributors.
If you can help, please let me know :smile: diff --git a/spaces/GolDNenex/Super-Resolution-Anime-Diffusion/RealESRGANv030/docs/Training_CN.md b/spaces/GolDNenex/Super-Resolution-Anime-Diffusion/RealESRGANv030/docs/Training_CN.md deleted file mode 100644 index dabc3c5d97e134a2d551157c2dd03a629ec661bc..0000000000000000000000000000000000000000 --- a/spaces/GolDNenex/Super-Resolution-Anime-Diffusion/RealESRGANv030/docs/Training_CN.md +++ /dev/null @@ -1,271 +0,0 @@ -# :computer: 如何训练/微调 Real-ESRGAN - -- [训练 Real-ESRGAN](#训练-real-esrgan) - - [概述](#概述) - - [准备数据集](#准备数据集) - - [训练 Real-ESRNet 模型](#训练-real-esrnet-模型) - - [训练 Real-ESRGAN 模型](#训练-real-esrgan-模型) -- [用自己的数据集微调 Real-ESRGAN](#用自己的数据集微调-real-esrgan) - - [动态生成降级图像](#动态生成降级图像) - - [使用已配对的数据](#使用已配对的数据) - -[English](Training.md) **|** [简体中文](Training_CN.md) - -## 训练 Real-ESRGAN - -### 概述 - -训练分为两个步骤。除了 loss 函数外,这两个步骤拥有相同数据合成以及训练的一条龙流程。具体点说: - -1. 首先使用 L1 loss 训练 Real-ESRNet 模型,其中 L1 loss 来自预先训练的 ESRGAN 模型。 - -2. 然后我们将 Real-ESRNet 模型作为生成器初始化,结合L1 loss、感知 loss、GAN loss 三者的参数对 Real-ESRGAN 进行训练。 - -### 准备数据集 - -我们使用 DF2K ( DIV2K 和 Flickr2K ) + OST 数据集进行训练。只需要HR图像!
      -下面是网站链接: -1. DIV2K: http://data.vision.ee.ethz.ch/cvl/DIV2K/DIV2K_train_HR.zip -2. Flickr2K: https://cv.snu.ac.kr/research/EDSR/Flickr2K.tar -3. OST: https://openmmlab.oss-cn-hangzhou.aliyuncs.com/datasets/OST_dataset.zip - -以下是数据的准备步骤。 - -#### 第1步:【可选】生成多尺寸图片 - -针对 DF2K 数据集,我们使用多尺寸缩放策略,*换言之*,我们对 HR 图像进行下采样,就能获得多尺寸的标准参考(Ground-Truth)图像。
      -您可以使用这个 [scripts/generate_multiscale_DF2K.py](scripts/generate_multiscale_DF2K.py) 脚本快速生成多尺寸的图像。
      -注意:如果您只想简单试试,那么可以跳过此步骤。 - -```bash -python scripts/generate_multiscale_DF2K.py --input datasets/DF2K/DF2K_HR --output datasets/DF2K/DF2K_multiscale -``` - -#### 第2步:【可选】裁切为子图像 - -我们可以将 DF2K 图像裁切为子图像,以加快 IO 和处理速度。
      -如果你的 IO 够好或储存空间有限,那么此步骤是可选的。
      - -您可以使用脚本 [scripts/extract_subimages.py](scripts/extract_subimages.py)。这是使用示例: - -```bash - python scripts/extract_subimages.py --input datasets/DF2K/DF2K_multiscale --output datasets/DF2K/DF2K_multiscale_sub --crop_size 400 --step 200 -``` - -#### 第3步:准备元信息 txt - -您需要准备一个包含图像路径的 txt 文件。下面是 `meta_info_DF2Kmultiscale+OST_sub.txt` 中的部分展示(由于各个用户可能有截然不同的子图像划分,这个文件不适合你的需求,你得准备自己的 txt 文件): - -```txt -DF2K_HR_sub/000001_s001.png -DF2K_HR_sub/000001_s002.png -DF2K_HR_sub/000001_s003.png -... -``` - -你可以使用该脚本 [scripts/generate_meta_info.py](scripts/generate_meta_info.py) 生成包含图像路径的 txt 文件。
      -你还可以合并多个文件夹的图像路径到一个元信息(meta_info)txt。这是使用示例: - -```bash - python scripts/generate_meta_info.py --input datasets/DF2K/DF2K_HR, datasets/DF2K/DF2K_multiscale --root datasets/DF2K, datasets/DF2K --meta_info datasets/DF2K/meta_info/meta_info_DF2Kmultiscale.txt -``` - -### 训练 Real-ESRNet 模型 - -1. 下载预先训练的模型 [ESRGAN](https://github.com/xinntao/Real-ESRGAN/releases/download/v0.1.1/ESRGAN_SRx4_DF2KOST_official-ff704c30.pth),放到 `experiments/pretrained_models`目录下。 - ```bash - wget https://github.com/xinntao/Real-ESRGAN/releases/download/v0.1.1/ESRGAN_SRx4_DF2KOST_official-ff704c30.pth -P experiments/pretrained_models - ``` -2. 相应地修改选项文件 `options/train_realesrnet_x4plus.yml` 中的内容: - ```yml - train: - name: DF2K+OST - type: RealESRGANDataset - dataroot_gt: datasets/DF2K # 修改为你的数据集文件夹根目录 - meta_info: realesrgan/meta_info/meta_info_DF2Kmultiscale+OST_sub.txt # 修改为你自己生成的元信息txt - io_backend: - type: disk - ``` -3. 如果你想在训练过程中执行验证,就取消注释这些内容并进行相应的修改: - ```yml - # 取消注释这些以进行验证 - # val: - # name: validation - # type: PairedImageDataset - # dataroot_gt: path_to_gt - # dataroot_lq: path_to_lq - # io_backend: - # type: disk - - ... - - # 取消注释这些以进行验证 - # 验证设置 - # val: - # val_freq: !!float 5e3 - # save_img: True - - # metrics: - # psnr: # 指标名称,可以是任意的 - # type: calculate_psnr - # crop_border: 4 - # test_y_channel: false - ``` -4. 正式训练之前,你可以用 `--debug` 模式检查是否正常运行。我们用了4个GPU进行训练: - ```bash - CUDA_VISIBLE_DEVICES=0,1,2,3 \ - python -m torch.distributed.launch --nproc_per_node=4 --master_port=4321 realesrgan/train.py -opt options/train_realesrnet_x4plus.yml --launcher pytorch --debug - ``` - - 用 **1个GPU** 训练的 debug 模式示例: - ```bash - python realesrgan/train.py -opt options/train_realesrnet_x4plus.yml --debug - ``` -5. 正式训练开始。我们用了4个GPU进行训练。还可以使用参数 `--auto_resume` 在必要时自动恢复训练。 - ```bash - CUDA_VISIBLE_DEVICES=0,1,2,3 \ - python -m torch.distributed.launch --nproc_per_node=4 --master_port=4321 realesrgan/train.py -opt options/train_realesrnet_x4plus.yml --launcher pytorch --auto_resume - ``` - - 用 **1个GPU** 训练: - ```bash - python realesrgan/train.py -opt options/train_realesrnet_x4plus.yml --auto_resume - ``` - -### 训练 Real-ESRGAN 模型 - -1. 训练 Real-ESRNet 模型后,您得到了这个 `experiments/train_RealESRNetx4plus_1000k_B12G4_fromESRGAN/model/net_g_1000000.pth` 文件。如果需要指定预训练路径到其他文件,请修改选项文件 `train_realesrgan_x4plus.yml` 中 `pretrain_network_g` 的值。 -1. 修改选项文件 `train_realesrgan_x4plus.yml` 的内容。大多数修改与上节提到的类似。 -1. 正式训练之前,你可以以 `--debug` 模式检查是否正常运行。我们使用了4个GPU进行训练: - ```bash - CUDA_VISIBLE_DEVICES=0,1,2,3 \ - python -m torch.distributed.launch --nproc_per_node=4 --master_port=4321 realesrgan/train.py -opt options/train_realesrgan_x4plus.yml --launcher pytorch --debug - ``` - - 用 **1个GPU** 训练的 debug 模式示例: - ```bash - python realesrgan/train.py -opt options/train_realesrgan_x4plus.yml --debug - ``` -1. 正式训练开始。我们使用4个GPU进行训练。还可以使用参数 `--auto_resume` 在必要时自动恢复训练。 - ```bash - CUDA_VISIBLE_DEVICES=0,1,2,3 \ - python -m torch.distributed.launch --nproc_per_node=4 --master_port=4321 realesrgan/train.py -opt options/train_realesrgan_x4plus.yml --launcher pytorch --auto_resume - ``` - - 用 **1个GPU** 训练: - ```bash - python realesrgan/train.py -opt options/train_realesrgan_x4plus.yml --auto_resume - ``` - -## 用自己的数据集微调 Real-ESRGAN - -你可以用自己的数据集微调 Real-ESRGAN。一般地,微调(Fine-Tune)程序可以分为两种类型: - -1. [动态生成降级图像](#动态生成降级图像) -2. [使用**已配对**的数据](#使用已配对的数据) - -### 动态生成降级图像 - -只需要高分辨率图像。在训练过程中,使用 Real-ESRGAN 描述的降级模型生成低质量图像。 - -**1. 准备数据集** - -完整信息请参见[本节](#准备数据集)。 - -**2. 
下载预训练模型** - -下载预先训练的模型到 `experiments/pretrained_models` 目录下。 - -- *RealESRGAN_x4plus.pth*: - ```bash - wget https://github.com/xinntao/Real-ESRGAN/releases/download/v0.1.0/RealESRGAN_x4plus.pth -P experiments/pretrained_models - ``` - -- *RealESRGAN_x4plus_netD.pth*: - ```bash - wget https://github.com/xinntao/Real-ESRGAN/releases/download/v0.2.2.3/RealESRGAN_x4plus_netD.pth -P experiments/pretrained_models - ``` - -**3. 微调** - -修改选项文件 [options/finetune_realesrgan_x4plus.yml](options/finetune_realesrgan_x4plus.yml) ,特别是 `datasets` 部分: - -```yml -train: - name: DF2K+OST - type: RealESRGANDataset - dataroot_gt: datasets/DF2K # 修改为你的数据集文件夹根目录 - meta_info: realesrgan/meta_info/meta_info_DF2Kmultiscale+OST_sub.txt # 修改为你自己生成的元信息txt - io_backend: - type: disk -``` - -我们使用4个GPU进行训练。还可以使用参数 `--auto_resume` 在必要时自动恢复训练。 - -```bash -CUDA_VISIBLE_DEVICES=0,1,2,3 \ -python -m torch.distributed.launch --nproc_per_node=4 --master_port=4321 realesrgan/train.py -opt options/finetune_realesrgan_x4plus.yml --launcher pytorch --auto_resume -``` - -用 **1个GPU** 训练: -```bash -python realesrgan/train.py -opt options/finetune_realesrgan_x4plus.yml --auto_resume -``` - -### 使用已配对的数据 - -你还可以用自己已经配对的数据微调 RealESRGAN。这个过程更类似于微调 ESRGAN。 - -**1. 准备数据集** - -假设你已经有两个文件夹(folder): - -- **gt folder**(标准参考,高分辨率图像):*datasets/DF2K/DIV2K_train_HR_sub* -- **lq folder**(低质量,低分辨率图像):*datasets/DF2K/DIV2K_train_LR_bicubic_X4_sub* - -然后,您可以使用脚本 [scripts/generate_meta_info_pairdata.py](scripts/generate_meta_info_pairdata.py) 生成元信息(meta_info)txt 文件。 - -```bash -python scripts/generate_meta_info_pairdata.py --input datasets/DF2K/DIV2K_train_HR_sub datasets/DF2K/DIV2K_train_LR_bicubic_X4_sub --meta_info datasets/DF2K/meta_info/meta_info_DIV2K_sub_pair.txt -``` - -**2. 下载预训练模型** - -下载预先训练的模型到 `experiments/pretrained_models` 目录下。 - -- *RealESRGAN_x4plus.pth*: - ```bash - wget https://github.com/xinntao/Real-ESRGAN/releases/download/v0.1.0/RealESRGAN_x4plus.pth -P experiments/pretrained_models - ``` - -- *RealESRGAN_x4plus_netD.pth*: - ```bash - wget https://github.com/xinntao/Real-ESRGAN/releases/download/v0.2.2.3/RealESRGAN_x4plus_netD.pth -P experiments/pretrained_models - ``` - -**3. 
微调** - -修改选项文件 [options/finetune_realesrgan_x4plus_pairdata.yml](options/finetune_realesrgan_x4plus_pairdata.yml) ,特别是 `datasets` 部分: - -```yml -train: - name: DIV2K - type: RealESRGANPairedDataset - dataroot_gt: datasets/DF2K # 修改为你的 gt folder 文件夹根目录 - dataroot_lq: datasets/DF2K # 修改为你的 lq folder 文件夹根目录 - meta_info: datasets/DF2K/meta_info/meta_info_DIV2K_sub_pair.txt # 修改为你自己生成的元信息txt - io_backend: - type: disk -``` - -我们使用4个GPU进行训练。还可以使用参数 `--auto_resume` 在必要时自动恢复训练。 - -```bash -CUDA_VISIBLE_DEVICES=0,1,2,3 \ -python -m torch.distributed.launch --nproc_per_node=4 --master_port=4321 realesrgan/train.py -opt options/finetune_realesrgan_x4plus_pairdata.yml --launcher pytorch --auto_resume -``` - -用 **1个GPU** 训练: -```bash -python realesrgan/train.py -opt options/finetune_realesrgan_x4plus_pairdata.yml --auto_resume -``` diff --git a/spaces/Goutam982/RVC_V2_voice_clone/lib/infer_pack/modules/F0Predictor/__init__.py b/spaces/Goutam982/RVC_V2_voice_clone/lib/infer_pack/modules/F0Predictor/__init__.py deleted file mode 100644 index e69de29bb2d1d6434b8b29ae775ad8c2e48c5391..0000000000000000000000000000000000000000 diff --git a/spaces/GrandaddyShmax/AudioCraft_Plus/audiocraft/models/audiogen.py b/spaces/GrandaddyShmax/AudioCraft_Plus/audiocraft/models/audiogen.py deleted file mode 100644 index 6adefb97401c10422c9711d222c0857f5593dceb..0000000000000000000000000000000000000000 --- a/spaces/GrandaddyShmax/AudioCraft_Plus/audiocraft/models/audiogen.py +++ /dev/null @@ -1,276 +0,0 @@ -# Copyright (c) Meta Platforms, Inc. and affiliates. -# All rights reserved. -# -# This source code is licensed under the license found in the -# LICENSE file in the root directory of this source tree. - -""" -Main model for using AudioGen. This will combine all the required components -and provide easy access to the generation API. -""" - -import typing as tp - -import torch - -from .encodec import CompressionModel -from .lm import LMModel -from .builders import get_debug_compression_model, get_debug_lm_model -from .loaders import load_compression_model, load_lm_model -from ..data.audio_utils import convert_audio -from ..modules.conditioners import ConditioningAttributes -from ..utils.autocast import TorchAutocast - - -class AudioGen: - """AudioGen main model with convenient generation API. - - Args: - name (str): name of the model. - compression_model (CompressionModel): Compression model - used to map audio to invertible discrete representations. - lm (LMModel): Language model over discrete representations. - max_duration (float, optional): maximum duration the model can produce, - otherwise, inferred from the training params. 
- """ - def __init__(self, name: str, compression_model: CompressionModel, lm: LMModel, - max_duration: tp.Optional[float] = None): - self.name = name - self.compression_model = compression_model - self.lm = lm - if max_duration is None: - if hasattr(lm, 'cfg'): - max_duration = lm.cfg.dataset.segment_duration # type: ignore - else: - raise ValueError("You must provide max_duration when building directly AudioGen") - assert max_duration is not None - self.max_duration: float = max_duration - self.device = next(iter(lm.parameters())).device - self.generation_params: dict = {} - self.set_generation_params(duration=5) # 5 seconds by default - self._progress_callback: tp.Optional[tp.Callable[[int, int], None]] = None - if self.device.type == 'cpu': - self.autocast = TorchAutocast(enabled=False) - else: - self.autocast = TorchAutocast( - enabled=True, device_type=self.device.type, dtype=torch.float16) - - @property - def frame_rate(self) -> float: - """Roughly the number of AR steps per seconds.""" - return self.compression_model.frame_rate - - @property - def sample_rate(self) -> int: - """Sample rate of the generated audio.""" - return self.compression_model.sample_rate - - @property - def audio_channels(self) -> int: - """Audio channels of the generated audio.""" - return self.compression_model.channels - - @staticmethod - def get_pretrained(name: str = 'facebook/audiogen-medium', device=None): - """Return pretrained model, we provide a single model for now: - - facebook/audiogen-medium (1.5B), text to sound, - # see: https://huggingface.co/facebook/audiogen-medium - """ - if device is None: - if torch.cuda.device_count(): - device = 'cuda' - else: - device = 'cpu' - - if name == 'debug': - # used only for unit tests - compression_model = get_debug_compression_model(device, sample_rate=16000) - lm = get_debug_lm_model(device) - return AudioGen(name, compression_model, lm, max_duration=10) - - compression_model = load_compression_model(name, device=device) - lm = load_lm_model(name, device=device) - assert 'self_wav' not in lm.condition_provider.conditioners, \ - "AudioGen do not support waveform conditioning for now" - return AudioGen(name, compression_model, lm) - - def set_generation_params(self, use_sampling: bool = True, top_k: int = 250, - top_p: float = 0.0, temperature: float = 1.0, - duration: float = 10.0, cfg_coef: float = 3.0, - two_step_cfg: bool = False, extend_stride: float = 2): - """Set the generation parameters for AudioGen. - - Args: - use_sampling (bool, optional): Use sampling if True, else do argmax decoding. Defaults to True. - top_k (int, optional): top_k used for sampling. Defaults to 250. - top_p (float, optional): top_p used for sampling, when set to 0 top_k is used. Defaults to 0.0. - temperature (float, optional): Softmax temperature parameter. Defaults to 1.0. - duration (float, optional): Duration of the generated waveform. Defaults to 10.0. - cfg_coef (float, optional): Coefficient used for classifier free guidance. Defaults to 3.0. - two_step_cfg (bool, optional): If True, performs 2 forward for Classifier Free Guidance, - instead of batching together the two. This has some impact on how things - are padded but seems to have little impact in practice. - extend_stride: when doing extended generation (i.e. more than 10 seconds), by how much - should we extend the audio each time. Larger values will mean less context is - preserved, and shorter value will require extra computations. 
- """ - assert extend_stride < self.max_duration, "Cannot stride by more than max generation duration." - self.extend_stride = extend_stride - self.duration = duration - self.generation_params = { - 'use_sampling': use_sampling, - 'temp': temperature, - 'top_k': top_k, - 'top_p': top_p, - 'cfg_coef': cfg_coef, - 'two_step_cfg': two_step_cfg, - } - - def set_custom_progress_callback(self, progress_callback: tp.Optional[tp.Callable[[int, int], None]] = None): - """Override the default progress callback.""" - self._progress_callback = progress_callback - - def generate(self, descriptions: tp.List[str], progress: bool = False) -> torch.Tensor: - """Generate samples conditioned on text. - - Args: - descriptions (list of str): A list of strings used as text conditioning. - progress (bool, optional): Flag to display progress of the generation process. Defaults to False. - """ - attributes, prompt_tokens = self._prepare_tokens_and_attributes(descriptions, None) - assert prompt_tokens is None - return self._generate_tokens(attributes, prompt_tokens, progress) - - def generate_continuation(self, prompt: torch.Tensor, prompt_sample_rate: int, - descriptions: tp.Optional[tp.List[tp.Optional[str]]] = None, - progress: bool = False) -> torch.Tensor: - """Generate samples conditioned on audio prompts. - - Args: - prompt (torch.Tensor): A batch of waveforms used for continuation. - Prompt should be [B, C, T], or [C, T] if only one sample is generated. - prompt_sample_rate (int): Sampling rate of the given audio waveforms. - descriptions (list of str, optional): A list of strings used as text conditioning. Defaults to None. - progress (bool, optional): Flag to display progress of the generation process. Defaults to False. - """ - if prompt.dim() == 2: - prompt = prompt[None] - if prompt.dim() != 3: - raise ValueError("prompt should have 3 dimensions: [B, C, T] (C = 1).") - prompt = convert_audio(prompt, prompt_sample_rate, self.sample_rate, self.audio_channels) - if descriptions is None: - descriptions = [None] * len(prompt) - attributes, prompt_tokens = self._prepare_tokens_and_attributes(descriptions, prompt) - assert prompt_tokens is not None - return self._generate_tokens(attributes, prompt_tokens, progress) - - @torch.no_grad() - def _prepare_tokens_and_attributes( - self, - descriptions: tp.Sequence[tp.Optional[str]], - prompt: tp.Optional[torch.Tensor], - ) -> tp.Tuple[tp.List[ConditioningAttributes], tp.Optional[torch.Tensor]]: - """Prepare model inputs. - - Args: - descriptions (list of str): A list of strings used as text conditioning. - prompt (torch.Tensor): A batch of waveforms used for continuation. - """ - attributes = [ - ConditioningAttributes(text={'description': description}) - for description in descriptions] - - if prompt is not None: - if descriptions is not None: - assert len(descriptions) == len(prompt), "Prompt and nb. descriptions doesn't match" - prompt = prompt.to(self.device) - prompt_tokens, scale = self.compression_model.encode(prompt) - assert scale is None - else: - prompt_tokens = None - return attributes, prompt_tokens - - def _generate_tokens(self, attributes: tp.List[ConditioningAttributes], - prompt_tokens: tp.Optional[torch.Tensor], progress: bool = False) -> torch.Tensor: - """Generate discrete audio tokens given audio prompt and/or conditions. - - Args: - attributes (list of ConditioningAttributes): Conditions used for generation (here text). - prompt_tokens (torch.Tensor, optional): Audio prompt used for continuation. 
- progress (bool, optional): Flag to display progress of the generation process. Defaults to False. - Returns: - torch.Tensor: Generated audio, of shape [B, C, T], T is defined by the generation params. - """ - i = 0 - prompt_list = attributes[0].text['description'] - total_gen_len = int(self.duration * self.frame_rate) - max_prompt_len = int(min(self.duration, self.max_duration) * self.frame_rate) - current_gen_offset: int = 0 - - def _progress_callback(generated_tokens: int, tokens_to_generate: int): - generated_tokens += current_gen_offset - if self._progress_callback is not None: - # Note that total_gen_len might be quite wrong depending on the - # codebook pattern used, but with delay it is almost accurate. - self._progress_callback(generated_tokens, total_gen_len) - else: - print(f'{generated_tokens: 6d} / {total_gen_len: 6d}', end='\r') - - if prompt_tokens is not None: - assert max_prompt_len >= prompt_tokens.shape[-1], \ - "Prompt is longer than audio to generate" - - callback = None - if progress: - callback = _progress_callback - - if self.duration <= self.max_duration: - # generate by sampling from LM, simple case. - with self.autocast: - attributes[0].text['description'] = prompt_list[0] - gen_tokens = self.lm.generate( - prompt_tokens, attributes, - callback=callback, max_gen_len=total_gen_len, **self.generation_params) - - else: - all_tokens = [] - if prompt_tokens is None: - prompt_length = 0 - else: - all_tokens.append(prompt_tokens) - prompt_length = prompt_tokens.shape[-1] - - stride_tokens = int(self.frame_rate * self.extend_stride) - - while current_gen_offset + prompt_length < total_gen_len: - time_offset = current_gen_offset / self.frame_rate - chunk_duration = min(self.duration - time_offset, self.max_duration) - max_gen_len = int(chunk_duration * self.frame_rate) - with self.autocast: - if i >= len(prompt_list): - i = len(prompt_list) - 1 - attributes[0].text['description'] = prompt_list[i] - gen_tokens = self.lm.generate( - prompt_tokens, attributes, - callback=callback, max_gen_len=max_gen_len, **self.generation_params) - i = i + 1 - if prompt_tokens is None: - all_tokens.append(gen_tokens) - else: - all_tokens.append(gen_tokens[:, :, prompt_tokens.shape[-1]:]) - prompt_tokens = gen_tokens[:, :, stride_tokens:] - prompt_length = prompt_tokens.shape[-1] - current_gen_offset += stride_tokens - - gen_tokens = torch.cat(all_tokens, dim=-1) - - # generate audio - assert gen_tokens.dim() == 3 - with torch.no_grad(): - gen_audio = self.compression_model.decode(gen_tokens, None) - return gen_audio - - def to(self, device: str): - self.compression_model.to(device) - self.lm.to(device) - return self \ No newline at end of file diff --git a/spaces/GrandaddyShmax/AudioCraft_Plus/audiocraft/models/builders.py b/spaces/GrandaddyShmax/AudioCraft_Plus/audiocraft/models/builders.py deleted file mode 100644 index 038bf99c3d0fbbb86005683d5a2a1b4edcac4298..0000000000000000000000000000000000000000 --- a/spaces/GrandaddyShmax/AudioCraft_Plus/audiocraft/models/builders.py +++ /dev/null @@ -1,252 +0,0 @@ -# Copyright (c) Meta Platforms, Inc. and affiliates. -# All rights reserved. -# -# This source code is licensed under the license found in the -# LICENSE file in the root directory of this source tree. - -""" -All the functions to build the relevant models and modules -from the Hydra config. 
-""" - -import typing as tp - -import audiocraft -import omegaconf -import torch - -from .encodec import CompressionModel, EncodecModel -from .lm import LMModel -from ..modules.codebooks_patterns import ( - CodebooksPatternProvider, - DelayedPatternProvider, - MusicLMPattern, - ParallelPatternProvider, - UnrolledPatternProvider, - VALLEPattern, -) -from ..modules.conditioners import ( - BaseConditioner, - ChromaStemConditioner, - CLAPEmbeddingConditioner, - ConditionFuser, - ConditioningProvider, - LUTConditioner, - T5Conditioner, -) -from .unet import DiffusionUnet -from .. import quantization as qt -from ..utils.utils import dict_from_config -from ..modules.diffusion_schedule import MultiBandProcessor, SampleProcessor - - -def get_quantizer(quantizer: str, cfg: omegaconf.DictConfig, dimension: int) -> qt.BaseQuantizer: - klass = { - 'no_quant': qt.DummyQuantizer, - 'rvq': qt.ResidualVectorQuantizer - }[quantizer] - kwargs = dict_from_config(getattr(cfg, quantizer)) - if quantizer != 'no_quant': - kwargs['dimension'] = dimension - return klass(**kwargs) - - -def get_encodec_autoencoder(encoder_name: str, cfg: omegaconf.DictConfig): - if encoder_name == 'seanet': - kwargs = dict_from_config(getattr(cfg, 'seanet')) - encoder_override_kwargs = kwargs.pop('encoder') - decoder_override_kwargs = kwargs.pop('decoder') - encoder_kwargs = {**kwargs, **encoder_override_kwargs} - decoder_kwargs = {**kwargs, **decoder_override_kwargs} - encoder = audiocraft.modules.SEANetEncoder(**encoder_kwargs) - decoder = audiocraft.modules.SEANetDecoder(**decoder_kwargs) - return encoder, decoder - else: - raise KeyError(f"Unexpected compression model {cfg.compression_model}") - - -def get_compression_model(cfg: omegaconf.DictConfig) -> CompressionModel: - """Instantiate a compression model.""" - if cfg.compression_model == 'encodec': - kwargs = dict_from_config(getattr(cfg, 'encodec')) - encoder_name = kwargs.pop('autoencoder') - quantizer_name = kwargs.pop('quantizer') - encoder, decoder = get_encodec_autoencoder(encoder_name, cfg) - quantizer = get_quantizer(quantizer_name, cfg, encoder.dimension) - frame_rate = kwargs['sample_rate'] // encoder.hop_length - renormalize = kwargs.pop('renormalize', False) - # deprecated params - kwargs.pop('renorm', None) - return EncodecModel(encoder, decoder, quantizer, - frame_rate=frame_rate, renormalize=renormalize, **kwargs).to(cfg.device) - else: - raise KeyError(f"Unexpected compression model {cfg.compression_model}") - - -def get_lm_model(cfg: omegaconf.DictConfig) -> LMModel: - """Instantiate a transformer LM.""" - if cfg.lm_model == 'transformer_lm': - kwargs = dict_from_config(getattr(cfg, 'transformer_lm')) - n_q = kwargs['n_q'] - q_modeling = kwargs.pop('q_modeling', None) - codebooks_pattern_cfg = getattr(cfg, 'codebooks_pattern') - attribute_dropout = dict_from_config(getattr(cfg, 'attribute_dropout')) - cls_free_guidance = dict_from_config(getattr(cfg, 'classifier_free_guidance')) - cfg_prob, cfg_coef = cls_free_guidance['training_dropout'], cls_free_guidance['inference_coef'] - fuser = get_condition_fuser(cfg) - condition_provider = get_conditioner_provider(kwargs["dim"], cfg).to(cfg.device) - if len(fuser.fuse2cond['cross']) > 0: # enforce cross-att programmatically - kwargs['cross_attention'] = True - if codebooks_pattern_cfg.modeling is None: - assert q_modeling is not None, \ - "LM model should either have a codebook pattern defined or transformer_lm.q_modeling" - codebooks_pattern_cfg = omegaconf.OmegaConf.create( - {'modeling': q_modeling, 'delay': 
{'delays': list(range(n_q))}} - ) - pattern_provider = get_codebooks_pattern_provider(n_q, codebooks_pattern_cfg) - return LMModel( - pattern_provider=pattern_provider, - condition_provider=condition_provider, - fuser=fuser, - cfg_dropout=cfg_prob, - cfg_coef=cfg_coef, - attribute_dropout=attribute_dropout, - dtype=getattr(torch, cfg.dtype), - device=cfg.device, - **kwargs - ).to(cfg.device) - else: - raise KeyError(f"Unexpected LM model {cfg.lm_model}") - - -def get_conditioner_provider(output_dim: int, cfg: omegaconf.DictConfig) -> ConditioningProvider: - """Instantiate a conditioning model.""" - device = cfg.device - duration = cfg.dataset.segment_duration - cfg = getattr(cfg, 'conditioners') - dict_cfg = {} if cfg is None else dict_from_config(cfg) - conditioners: tp.Dict[str, BaseConditioner] = {} - condition_provider_args = dict_cfg.pop('args', {}) - condition_provider_args.pop('merge_text_conditions_p', None) - condition_provider_args.pop('drop_desc_p', None) - - for cond, cond_cfg in dict_cfg.items(): - model_type = cond_cfg['model'] - model_args = cond_cfg[model_type] - if model_type == 't5': - conditioners[str(cond)] = T5Conditioner(output_dim=output_dim, device=device, **model_args) - elif model_type == 'lut': - conditioners[str(cond)] = LUTConditioner(output_dim=output_dim, **model_args) - elif model_type == 'chroma_stem': - conditioners[str(cond)] = ChromaStemConditioner( - output_dim=output_dim, - duration=duration, - device=device, - **model_args - ) - elif model_type == 'clap': - conditioners[str(cond)] = CLAPEmbeddingConditioner( - output_dim=output_dim, - device=device, - **model_args - ) - else: - raise ValueError(f"Unrecognized conditioning model: {model_type}") - conditioner = ConditioningProvider(conditioners, device=device, **condition_provider_args) - return conditioner - - -def get_condition_fuser(cfg: omegaconf.DictConfig) -> ConditionFuser: - """Instantiate a condition fuser object.""" - fuser_cfg = getattr(cfg, 'fuser') - fuser_methods = ['sum', 'cross', 'prepend', 'input_interpolate'] - fuse2cond = {k: fuser_cfg[k] for k in fuser_methods} - kwargs = {k: v for k, v in fuser_cfg.items() if k not in fuser_methods} - fuser = ConditionFuser(fuse2cond=fuse2cond, **kwargs) - return fuser - - -def get_codebooks_pattern_provider(n_q: int, cfg: omegaconf.DictConfig) -> CodebooksPatternProvider: - """Instantiate a codebooks pattern provider object.""" - pattern_providers = { - 'parallel': ParallelPatternProvider, - 'delay': DelayedPatternProvider, - 'unroll': UnrolledPatternProvider, - 'valle': VALLEPattern, - 'musiclm': MusicLMPattern, - } - name = cfg.modeling - kwargs = dict_from_config(cfg.get(name)) if hasattr(cfg, name) else {} - klass = pattern_providers[name] - return klass(n_q, **kwargs) - - -def get_debug_compression_model(device='cpu', sample_rate: int = 32000): - """Instantiate a debug compression model to be used for unit tests.""" - assert sample_rate in [16000, 32000], "unsupported sample rate for debug compression model" - model_ratios = { - 16000: [10, 8, 8], # 25 Hz at 16kHz - 32000: [10, 8, 16] # 25 Hz at 32kHz - } - ratios: tp.List[int] = model_ratios[sample_rate] - frame_rate = 25 - seanet_kwargs: dict = { - 'n_filters': 4, - 'n_residual_layers': 1, - 'dimension': 32, - 'ratios': ratios, - } - print(seanet_kwargs) - encoder = audiocraft.modules.SEANetEncoder(**seanet_kwargs) - decoder = audiocraft.modules.SEANetDecoder(**seanet_kwargs) - quantizer = qt.ResidualVectorQuantizer(dimension=32, bins=400, n_q=4) - init_x = torch.randn(8, 32, 128) - 
quantizer(init_x, 1) # initialize kmeans etc. - compression_model = EncodecModel( - encoder, decoder, quantizer, - frame_rate=frame_rate, sample_rate=sample_rate, channels=1).to(device) - return compression_model.eval() - - -def get_diffusion_model(cfg: omegaconf.DictConfig): - # TODO Find a way to infer the channels from dset - channels = cfg.channels - num_steps = cfg.schedule.num_steps - return DiffusionUnet( - chin=channels, num_steps=num_steps, **cfg.diffusion_unet) - - -def get_processor(cfg, sample_rate: int = 24000): - sample_processor = SampleProcessor() - if cfg.use: - kw = dict(cfg) - kw.pop('use') - kw.pop('name') - if cfg.name == "multi_band_processor": - sample_processor = MultiBandProcessor(sample_rate=sample_rate, **kw) - return sample_processor - - -def get_debug_lm_model(device='cpu'): - """Instantiate a debug LM to be used for unit tests.""" - pattern = DelayedPatternProvider(n_q=4) - dim = 16 - providers = { - 'description': LUTConditioner(n_bins=128, dim=dim, output_dim=dim, tokenizer="whitespace"), - } - condition_provider = ConditioningProvider(providers) - fuser = ConditionFuser( - {'cross': ['description'], 'prepend': [], - 'sum': [], 'input_interpolate': []}) - lm = LMModel( - pattern, condition_provider, fuser, - n_q=4, card=400, dim=dim, num_heads=4, custom=True, num_layers=2, - cross_attention=True, causal=True) - return lm.to(device).eval() - - -def get_wrapped_compression_model( - compression_model: CompressionModel, - cfg: omegaconf.DictConfig) -> CompressionModel: - # more to come. - return compression_model diff --git a/spaces/GrandaddyShmax/MusicGen_Plus/audiocraft/modules/seanet.py b/spaces/GrandaddyShmax/MusicGen_Plus/audiocraft/modules/seanet.py deleted file mode 100644 index 3e5998e9153afb6e68ea410d565e00ea835db248..0000000000000000000000000000000000000000 --- a/spaces/GrandaddyShmax/MusicGen_Plus/audiocraft/modules/seanet.py +++ /dev/null @@ -1,258 +0,0 @@ -# Copyright (c) Meta Platforms, Inc. and affiliates. -# All rights reserved. -# -# This source code is licensed under the license found in the -# LICENSE file in the root directory of this source tree. - -import typing as tp - -import numpy as np -import torch.nn as nn - -from .conv import StreamableConv1d, StreamableConvTranspose1d -from .lstm import StreamableLSTM - - -class SEANetResnetBlock(nn.Module): - """Residual block from SEANet model. - - Args: - dim (int): Dimension of the input/output. - kernel_sizes (list): List of kernel sizes for the convolutions. - dilations (list): List of dilations for the convolutions. - activation (str): Activation function. - activation_params (dict): Parameters to provide to the activation function. - norm (str): Normalization method. - norm_params (dict): Parameters to provide to the underlying normalization used along with the convolution. - causal (bool): Whether to use fully causal convolution. - pad_mode (str): Padding mode for the convolutions. - compress (int): Reduced dimensionality in residual branches (from Demucs v3). - true_skip (bool): Whether to use true skip connection or a simple - (streamable) convolution as the skip connection. 
- """ - def __init__(self, dim: int, kernel_sizes: tp.List[int] = [3, 1], dilations: tp.List[int] = [1, 1], - activation: str = 'ELU', activation_params: dict = {'alpha': 1.0}, - norm: str = 'none', norm_params: tp.Dict[str, tp.Any] = {}, causal: bool = False, - pad_mode: str = 'reflect', compress: int = 2, true_skip: bool = True): - super().__init__() - assert len(kernel_sizes) == len(dilations), 'Number of kernel sizes should match number of dilations' - act = getattr(nn, activation) - hidden = dim // compress - block = [] - for i, (kernel_size, dilation) in enumerate(zip(kernel_sizes, dilations)): - in_chs = dim if i == 0 else hidden - out_chs = dim if i == len(kernel_sizes) - 1 else hidden - block += [ - act(**activation_params), - StreamableConv1d(in_chs, out_chs, kernel_size=kernel_size, dilation=dilation, - norm=norm, norm_kwargs=norm_params, - causal=causal, pad_mode=pad_mode), - ] - self.block = nn.Sequential(*block) - self.shortcut: nn.Module - if true_skip: - self.shortcut = nn.Identity() - else: - self.shortcut = StreamableConv1d(dim, dim, kernel_size=1, norm=norm, norm_kwargs=norm_params, - causal=causal, pad_mode=pad_mode) - - def forward(self, x): - return self.shortcut(x) + self.block(x) - - -class SEANetEncoder(nn.Module): - """SEANet encoder. - - Args: - channels (int): Audio channels. - dimension (int): Intermediate representation dimension. - n_filters (int): Base width for the model. - n_residual_layers (int): nb of residual layers. - ratios (Sequence[int]): kernel size and stride ratios. The encoder uses downsampling ratios instead of - upsampling ratios, hence it will use the ratios in the reverse order to the ones specified here - that must match the decoder order. We use the decoder order as some models may only employ the decoder. - activation (str): Activation function. - activation_params (dict): Parameters to provide to the activation function. - norm (str): Normalization method. - norm_params (dict): Parameters to provide to the underlying normalization used along with the convolution. - kernel_size (int): Kernel size for the initial convolution. - last_kernel_size (int): Kernel size for the initial convolution. - residual_kernel_size (int): Kernel size for the residual layers. - dilation_base (int): How much to increase the dilation with each layer. - causal (bool): Whether to use fully causal convolution. - pad_mode (str): Padding mode for the convolutions. - true_skip (bool): Whether to use true skip connection or a simple - (streamable) convolution as the skip connection in the residual network blocks. - compress (int): Reduced dimensionality in residual branches (from Demucs v3). - lstm (int): Number of LSTM layers at the end of the encoder. - disable_norm_outer_blocks (int): Number of blocks for which we don't apply norm. - For the encoder, it corresponds to the N first blocks. 
- """ - def __init__(self, channels: int = 1, dimension: int = 128, n_filters: int = 32, n_residual_layers: int = 3, - ratios: tp.List[int] = [8, 5, 4, 2], activation: str = 'ELU', activation_params: dict = {'alpha': 1.0}, - norm: str = 'none', norm_params: tp.Dict[str, tp.Any] = {}, kernel_size: int = 7, - last_kernel_size: int = 7, residual_kernel_size: int = 3, dilation_base: int = 2, causal: bool = False, - pad_mode: str = 'reflect', true_skip: bool = True, compress: int = 2, lstm: int = 0, - disable_norm_outer_blocks: int = 0): - super().__init__() - self.channels = channels - self.dimension = dimension - self.n_filters = n_filters - self.ratios = list(reversed(ratios)) - del ratios - self.n_residual_layers = n_residual_layers - self.hop_length = np.prod(self.ratios) - self.n_blocks = len(self.ratios) + 2 # first and last conv + residual blocks - self.disable_norm_outer_blocks = disable_norm_outer_blocks - assert self.disable_norm_outer_blocks >= 0 and self.disable_norm_outer_blocks <= self.n_blocks, \ - "Number of blocks for which to disable norm is invalid." \ - "It should be lower or equal to the actual number of blocks in the network and greater or equal to 0." - - act = getattr(nn, activation) - mult = 1 - model: tp.List[nn.Module] = [ - StreamableConv1d(channels, mult * n_filters, kernel_size, - norm='none' if self.disable_norm_outer_blocks >= 1 else norm, - norm_kwargs=norm_params, causal=causal, pad_mode=pad_mode) - ] - # Downsample to raw audio scale - for i, ratio in enumerate(self.ratios): - block_norm = 'none' if self.disable_norm_outer_blocks >= i + 2 else norm - # Add residual layers - for j in range(n_residual_layers): - model += [ - SEANetResnetBlock(mult * n_filters, kernel_sizes=[residual_kernel_size, 1], - dilations=[dilation_base ** j, 1], - norm=block_norm, norm_params=norm_params, - activation=activation, activation_params=activation_params, - causal=causal, pad_mode=pad_mode, compress=compress, true_skip=true_skip)] - - # Add downsampling layers - model += [ - act(**activation_params), - StreamableConv1d(mult * n_filters, mult * n_filters * 2, - kernel_size=ratio * 2, stride=ratio, - norm=block_norm, norm_kwargs=norm_params, - causal=causal, pad_mode=pad_mode), - ] - mult *= 2 - - if lstm: - model += [StreamableLSTM(mult * n_filters, num_layers=lstm)] - - model += [ - act(**activation_params), - StreamableConv1d(mult * n_filters, dimension, last_kernel_size, - norm='none' if self.disable_norm_outer_blocks == self.n_blocks else norm, - norm_kwargs=norm_params, causal=causal, pad_mode=pad_mode) - ] - - self.model = nn.Sequential(*model) - - def forward(self, x): - return self.model(x) - - -class SEANetDecoder(nn.Module): - """SEANet decoder. - - Args: - channels (int): Audio channels. - dimension (int): Intermediate representation dimension. - n_filters (int): Base width for the model. - n_residual_layers (int): nb of residual layers. - ratios (Sequence[int]): kernel size and stride ratios. - activation (str): Activation function. - activation_params (dict): Parameters to provide to the activation function. - final_activation (str): Final activation function after all convolutions. - final_activation_params (dict): Parameters to provide to the activation function. - norm (str): Normalization method. - norm_params (dict): Parameters to provide to the underlying normalization used along with the convolution. - kernel_size (int): Kernel size for the initial convolution. - last_kernel_size (int): Kernel size for the initial convolution. 
-        residual_kernel_size (int): Kernel size for the residual layers.
-        dilation_base (int): How much to increase the dilation with each layer.
-        causal (bool): Whether to use fully causal convolution.
-        pad_mode (str): Padding mode for the convolutions.
-        true_skip (bool): Whether to use true skip connection or a simple
-            (streamable) convolution as the skip connection in the residual network blocks.
-        compress (int): Reduced dimensionality in residual branches (from Demucs v3).
-        lstm (int): Number of LSTM layers at the beginning of the decoder.
-        disable_norm_outer_blocks (int): Number of blocks for which we don't apply norm.
-            For the decoder, it corresponds to the N last blocks.
-        trim_right_ratio (float): Ratio for trimming at the right of the transposed convolution under the causal setup.
-            If equal to 1.0, it means that all the trimming is done at the right.
-    """
-    def __init__(self, channels: int = 1, dimension: int = 128, n_filters: int = 32, n_residual_layers: int = 3,
-                 ratios: tp.List[int] = [8, 5, 4, 2], activation: str = 'ELU', activation_params: dict = {'alpha': 1.0},
-                 final_activation: tp.Optional[str] = None, final_activation_params: tp.Optional[dict] = None,
-                 norm: str = 'none', norm_params: tp.Dict[str, tp.Any] = {}, kernel_size: int = 7,
-                 last_kernel_size: int = 7, residual_kernel_size: int = 3, dilation_base: int = 2, causal: bool = False,
-                 pad_mode: str = 'reflect', true_skip: bool = True, compress: int = 2, lstm: int = 0,
-                 disable_norm_outer_blocks: int = 0, trim_right_ratio: float = 1.0):
-        super().__init__()
-        self.dimension = dimension
-        self.channels = channels
-        self.n_filters = n_filters
-        self.ratios = ratios
-        del ratios
-        self.n_residual_layers = n_residual_layers
-        self.hop_length = np.prod(self.ratios)
-        self.n_blocks = len(self.ratios) + 2  # first and last conv + residual blocks
-        self.disable_norm_outer_blocks = disable_norm_outer_blocks
-        assert self.disable_norm_outer_blocks >= 0 and self.disable_norm_outer_blocks <= self.n_blocks, \
-            "Number of blocks for which to disable norm is invalid. " \
-            "It should be lower or equal to the actual number of blocks in the network and greater or equal to 0."
- - act = getattr(nn, activation) - mult = int(2 ** len(self.ratios)) - model: tp.List[nn.Module] = [ - StreamableConv1d(dimension, mult * n_filters, kernel_size, - norm='none' if self.disable_norm_outer_blocks == self.n_blocks else norm, - norm_kwargs=norm_params, causal=causal, pad_mode=pad_mode) - ] - - if lstm: - model += [StreamableLSTM(mult * n_filters, num_layers=lstm)] - - # Upsample to raw audio scale - for i, ratio in enumerate(self.ratios): - block_norm = 'none' if self.disable_norm_outer_blocks >= self.n_blocks - (i + 1) else norm - # Add upsampling layers - model += [ - act(**activation_params), - StreamableConvTranspose1d(mult * n_filters, mult * n_filters // 2, - kernel_size=ratio * 2, stride=ratio, - norm=block_norm, norm_kwargs=norm_params, - causal=causal, trim_right_ratio=trim_right_ratio), - ] - # Add residual layers - for j in range(n_residual_layers): - model += [ - SEANetResnetBlock(mult * n_filters // 2, kernel_sizes=[residual_kernel_size, 1], - dilations=[dilation_base ** j, 1], - activation=activation, activation_params=activation_params, - norm=block_norm, norm_params=norm_params, causal=causal, - pad_mode=pad_mode, compress=compress, true_skip=true_skip)] - - mult //= 2 - - # Add final layers - model += [ - act(**activation_params), - StreamableConv1d(n_filters, channels, last_kernel_size, - norm='none' if self.disable_norm_outer_blocks >= 1 else norm, - norm_kwargs=norm_params, causal=causal, pad_mode=pad_mode) - ] - # Add optional final activation to decoder (eg. tanh) - if final_activation is not None: - final_act = getattr(nn, final_activation) - final_activation_params = final_activation_params or {} - model += [ - final_act(**final_activation_params) - ] - self.model = nn.Sequential(*model) - - def forward(self, z): - y = self.model(z) - return y diff --git a/spaces/GroveStreet/GTA_SOVITS/vencoder/ContentVec256L9_Onnx.py b/spaces/GroveStreet/GTA_SOVITS/vencoder/ContentVec256L9_Onnx.py deleted file mode 100644 index fae2b928252801795b038f51451b234e007f6f03..0000000000000000000000000000000000000000 --- a/spaces/GroveStreet/GTA_SOVITS/vencoder/ContentVec256L9_Onnx.py +++ /dev/null @@ -1,28 +0,0 @@ -from vencoder.encoder import SpeechEncoder -import onnxruntime -import torch - -class ContentVec256L9_Onnx(SpeechEncoder): - def __init__(self,vec_path = "pretrain/vec-256-layer-9.onnx",device=None): - print("load model(s) from {}".format(vec_path)) - self.hidden_dim = 256 - if device is None: - self.dev = torch.device("cpu") - else: - self.dev = torch.device(device) - if device == 'cpu' or device == torch.device("cpu") or device is None: - providers = ['CPUExecutionProvider'] - elif device == 'cuda' or device == torch.device("cuda"): - providers = ['CUDAExecutionProvider', 'CPUExecutionProvider'] - self.model = onnxruntime.InferenceSession(vec_path, providers=providers) - - def encoder(self, wav): - feats = wav - if feats.dim() == 2: # double channels - feats = feats.mean(-1) - assert feats.dim() == 1, feats.dim() - feats = feats.view(1, -1) - feats = feats.unsqueeze(0).cpu().detach().numpy() - onnx_input = {self.model.get_inputs()[0].name: feats} - logits = self.model.run(None, onnx_input) - return torch.tensor(logits[0]).transpose(1, 2).to(self.dev) \ No newline at end of file diff --git a/spaces/HarryLee/eCommerceImageCaptioning/fairseq/examples/latent_depth/latent_depth_src/loss/__init__.py b/spaces/HarryLee/eCommerceImageCaptioning/fairseq/examples/latent_depth/latent_depth_src/loss/__init__.py deleted file mode 100644 index 
e69de29bb2d1d6434b8b29ae775ad8c2e48c5391..0000000000000000000000000000000000000000 diff --git a/spaces/HarryLee/eCommerceImageCaptioning/fairseq/examples/textless_nlp/gslm/metrics/asr_metrics/self_auto_bleu.py b/spaces/HarryLee/eCommerceImageCaptioning/fairseq/examples/textless_nlp/gslm/metrics/asr_metrics/self_auto_bleu.py deleted file mode 100644 index 062bb82f669f63a537b6ee8df4d42d292eb2575e..0000000000000000000000000000000000000000 --- a/spaces/HarryLee/eCommerceImageCaptioning/fairseq/examples/textless_nlp/gslm/metrics/asr_metrics/self_auto_bleu.py +++ /dev/null @@ -1,201 +0,0 @@ -# Copyright (c) Facebook, Inc. and its affiliates. -# -# This source code is licensed under the MIT license found in the -# LICENSE file in the root directory of this source tree. - -import numpy as np -import nltk -from misc.bleu_utils import sentence_bleu -import warnings - - -def get_target_sequences(manifest, ground_truth, to_take=1000): - import json - import pathlib - - with open(ground_truth, 'r') as fin: - original_continuations = json.loads(fin.read()) - - sequence2length = [(k, v[0]) for k, v in original_continuations.items()] - assert all(float(v) >= 6.0 for (_, v) in sequence2length) # 6 seconds - - sequence2length.sort(key=lambda x: x[1]) - to_take_sequences = set(v[0] for v in sequence2length[:to_take]) - to_take_ids = [] - - with open(manifest, 'r') as f: - f.readline() - - for i, line in enumerate(f.readlines()): - seq_id = line.split()[0] - seq_id = pathlib.Path(seq_id).name.split('__')[0] - - if seq_id in to_take_sequences: - to_take_ids.append(i) - - print(f'Took {len(to_take_ids)} ids') - return set(to_take_ids) - - -def get_args(): - import argparse - - parser = argparse.ArgumentParser() - parser.add_argument('--asr-transcript', type=str, - help='Path to the transcript file.') - - parser.add_argument('--manifest', required=True) - parser.add_argument('--prompts-description', required=True) - - parser.add_argument('--cut-id', action='store_true', - help='Whether cut the first token (typically a seq id)') - parser.add_argument('--cut-tail', action='store_true', - help='Whether cut the last token (typically a speaker id)') - parser.add_argument('--debug', action='store_true') - - args = parser.parse_args() - - return args - - -def get_self_bleu(utterances, averaging_mode, weights): - self_bleu = [] - - for i in range(len(utterances)): - hypo = utterances[i] - rest = utterances[:i] + utterances[i+1:] - - self_bleu.append(sentence_bleu(rest, hypo, weights, - no_length_penalty=True, averaging_mode=averaging_mode)) - - return self_bleu - - -def get_self_bleu2_arithmetic(utterances): - weights = (0.5, 0.5) # equal weight for unigrams and bigrams - return get_self_bleu(utterances, averaging_mode='arithmetic', weights=weights) - - -def get_self_bleu2_geometric(utterances): - weights = (0.5, 0.5) - return get_self_bleu(utterances, averaging_mode='geometric', weights=weights) - - -def get_auto_bleu2_arithmetic(utterances): - weights = (0.5, 0.5) - return [auto_bleu(u, mean_mode='arithmetic', weights=weights) for u in utterances] - - -def get_auto_bleu2_geometric(utterances): - weights = (0.5, 0.5) - return [auto_bleu(u, mean_mode='geometric', weights=weights) for u in utterances] - - -def get_auto_bleu3_geometric(utterances): - weights = (1./3, 1./3, 1./3) - return [auto_bleu(u, mean_mode='geometric', weights=weights) for u in utterances] - - -def get_auto_bleu3_arithmetic(utterances): - weights = (1./3, 1./3, 1./3) - return [auto_bleu(u, mean_mode='arithmetic', weights=weights) for u in utterances] 
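# A worked sketch of what auto_bleu (defined below) computes for the bigram
# wrappers above; the toy token sequence is illustrative, not from this file:
#
#   auto_bleu(['a', 'b', 'a', 'b'], weights=(0.5, 0.5), mean_mode='arithmetic')
#
#   unigrams: each of the 4 tokens reappears elsewhere in the sentence -> 4/4 = 1.0
#   bigrams:  ('a', 'b') at p=0 also occurs at p=2 -> match
#             ('b', 'a') at p=1 occurs nowhere else -> miss
#             ('a', 'b') at p=2 also occurs at p=0 -> match -> 2/3
#   arithmetic mean: 0.5 * 1.0 + 0.5 * (2/3) ~= 0.833
#   geometric mean:  1.0 ** 0.5 * (2/3) ** 0.5 ~= 0.816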
- - -def get_self_bleu3_arithmetic(utterances): - weights = (1./3, 1./3, 1./3) - return get_self_bleu(utterances, averaging_mode='arithmetic', weights=weights) - - -def get_self_bleu3_geometric(utterances): - weights = (1./3, 1./3, 1./3) - return get_self_bleu(utterances, averaging_mode='geometric', weights=weights) - - -def auto_bleu(sentence, weights, mean_mode='arithmetic'): - if len(sentence) <= 1: - return 0 - - N = len(weights) - - bleu_n = np.zeros([N]) - for n in range(N): - targ_ngrams = list(nltk.ngrams(sentence, n+1)) - for p in range(len(targ_ngrams)): - left = sentence[:p] - right = sentence[(p+n+1):] - rest_ngrams = list(nltk.ngrams(left, n+1)) + \ - list(nltk.ngrams(right, n+1)) - # compute the nb of matching ngrams - bleu_n[n] += targ_ngrams[p] in rest_ngrams - bleu_n[n] /= len(targ_ngrams) # average them to get a proportion - - weights = np.array(weights) - if mean_mode == 'arithmetic': - return (bleu_n * weights).sum() - elif mean_mode == 'geometric': - return (bleu_n ** weights).prod() - else: - raise ValueError(f'Unknown agggregation mode {mean_mode}') - - -def main(): - from multiprocessing import Pool - - args = get_args() - target_ids = get_target_sequences(args.manifest, args.prompts_description) - - with open(args.asr_transcript, 'r') as fin: - lines = fin.readlines() - - terms = [x.strip().split() for x in lines] - filtered = [] - for term in terms: - line_id = int(term[-1].split('-')[1][:-1]) - if line_id in target_ids: - filtered.append(term) - terms = filtered - - if args.cut_id: - terms = [x[1:] for x in terms] - if args.cut_tail: - terms = [x[:-1] for x in terms] - - if args.debug: - terms = terms[:10] - - tasks = [ - ('Self-BLEU2-arithmetic', get_self_bleu2_arithmetic), - ('Self-BLEU2-geometric', get_self_bleu2_geometric), - ('Auto-BLEU2-arithmetic', get_auto_bleu2_arithmetic), - ('Auto-BLEU2-geometric', get_auto_bleu2_geometric), - - ('Self-BLEU3-arithmetic', get_self_bleu3_arithmetic), - ('Self-BLEU3-geometric', get_self_bleu3_geometric), - ('Auto-BLEU3-arithmetic', get_auto_bleu3_arithmetic), - ('Auto-BLEU3-geometric', get_auto_bleu3_geometric), - ] - - n_processes = min(16, len(tasks)) - with Pool(n_processes) as pool: - metrics = pool.map(run_f, [(t[1], terms) for t in tasks]) - - for (metric_name, _), metric in zip(tasks, metrics): - metric, sem = np.mean(metric), np.std(metric) / np.sqrt(len(metric)) - - metric, sem = [ - round(100 * x, 2) for x in [metric, sem] - ] - - print(f'{metric_name} {metric} +- {sem}') - - -def run_f(task_params): - f, terms = task_params - return f(terms) - - -if __name__ == '__main__': - # NLTK produces warnings - warnings.filterwarnings("ignore") - - main() diff --git a/spaces/HarryLee/eCommerceImageCaptioning/fairseq/examples/textless_nlp/gslm/unit2speech/tacotron2/numbers.py b/spaces/HarryLee/eCommerceImageCaptioning/fairseq/examples/textless_nlp/gslm/unit2speech/tacotron2/numbers.py deleted file mode 100644 index 0d5f7fa818a45ecf132627d240afac653e148070..0000000000000000000000000000000000000000 --- a/spaces/HarryLee/eCommerceImageCaptioning/fairseq/examples/textless_nlp/gslm/unit2speech/tacotron2/numbers.py +++ /dev/null @@ -1,71 +0,0 @@ -""" from https://github.com/keithito/tacotron """ - -import inflect -import re - - -_inflect = inflect.engine() -_comma_number_re = re.compile(r'([0-9][0-9\,]+[0-9])') -_decimal_number_re = re.compile(r'([0-9]+\.[0-9]+)') -_pounds_re = re.compile(r'£([0-9\,]*[0-9]+)') -_dollars_re = re.compile(r'\$([0-9\.\,]*[0-9]+)') -_ordinal_re = re.compile(r'[0-9]+(st|nd|rd|th)') -_number_re = 
re.compile(r'[0-9]+') - - -def _remove_commas(m): - return m.group(1).replace(',', '') - - -def _expand_decimal_point(m): - return m.group(1).replace('.', ' point ') - - -def _expand_dollars(m): - match = m.group(1) - parts = match.split('.') - if len(parts) > 2: - return match + ' dollars' # Unexpected format - dollars = int(parts[0]) if parts[0] else 0 - cents = int(parts[1]) if len(parts) > 1 and parts[1] else 0 - if dollars and cents: - dollar_unit = 'dollar' if dollars == 1 else 'dollars' - cent_unit = 'cent' if cents == 1 else 'cents' - return '%s %s, %s %s' % (dollars, dollar_unit, cents, cent_unit) - elif dollars: - dollar_unit = 'dollar' if dollars == 1 else 'dollars' - return '%s %s' % (dollars, dollar_unit) - elif cents: - cent_unit = 'cent' if cents == 1 else 'cents' - return '%s %s' % (cents, cent_unit) - else: - return 'zero dollars' - - -def _expand_ordinal(m): - return _inflect.number_to_words(m.group(0)) - - -def _expand_number(m): - num = int(m.group(0)) - if num > 1000 and num < 3000: - if num == 2000: - return 'two thousand' - elif num > 2000 and num < 2010: - return 'two thousand ' + _inflect.number_to_words(num % 100) - elif num % 100 == 0: - return _inflect.number_to_words(num // 100) + ' hundred' - else: - return _inflect.number_to_words(num, andword='', zero='oh', group=2).replace(', ', ' ') - else: - return _inflect.number_to_words(num, andword='') - - -def normalize_numbers(text): - text = re.sub(_comma_number_re, _remove_commas, text) - text = re.sub(_pounds_re, r'\1 pounds', text) - text = re.sub(_dollars_re, _expand_dollars, text) - text = re.sub(_decimal_number_re, _expand_decimal_point, text) - text = re.sub(_ordinal_re, _expand_ordinal, text) - text = re.sub(_number_re, _expand_number, text) - return text diff --git a/spaces/Harsh502s/Anime-Recommender/Pages/Recommender App.py b/spaces/Harsh502s/Anime-Recommender/Pages/Recommender App.py deleted file mode 100644 index 0582899de73e31c7bdbac4b67c73f15bd2dfdb33..0000000000000000000000000000000000000000 --- a/spaces/Harsh502s/Anime-Recommender/Pages/Recommender App.py +++ /dev/null @@ -1,532 +0,0 @@ -import streamlit as st -import pandas as pd -import pickle -from ast import literal_eval -import webbrowser - - -# Importing the dataset -@st.cache_data -def load_data(): - try: - anime_data = pd.read_csv(r"rec_data.csv") - except: - st.error("Dataset Not Found") - return anime_data - - -anime_data = load_data() - - -def get_genres(): - genres = sorted( - list(set([j for i in anime_data["genres"] for j in literal_eval(i)])) - ) - genres.insert(0, "All Genres") - genres.remove("NA") - return genres - - -# Uncomment this if you want to load the model -@st.cache_resource -def load_model(): - try: - similarity = pickle.load(open(r"similarity.pkl", "rb")) - except: - st.error("Model Not Found") - return similarity - - -similarity = load_model() - - -# Fetching the poster and url of the anime -def fetch_anime_url(anime_id): - url = anime_data[anime_data["anime_id"] == anime_id].anime_url.values[0] - return url - - -def fetch_poster(anime_id): - poster = anime_data[anime_data["anime_id"] == anime_id].poster.values[0] - return poster - - -# Recommender System -def recommend(anime, genre=None): - if genre == None: - index = ( - anime_data[anime_data["title"] == anime] - .sort_values("score", ascending=False) - .index[0] - ) - elif genre != None: - index = ( - anime_data[ - (anime_data["title"] == anime) - | (anime_data["genres"].str.contains(genre)) - ] - .sort_values("score", ascending=False) - .index[0] - ) - # index = 
anime_data[anime_data["title"] == anime].index[0] - distances = sorted( - list(enumerate(similarity[index])), reverse=True, key=lambda x: x[1] - ) - - recommended_anime_names = [] - recommended_anime_posters = [] - recommended_anime_urls = [] - - for i in distances[1:9]: - # fetch the anime poster - anime_id = anime_data.iloc[i[0]].anime_id - recommended_anime_posters.append(fetch_poster(anime_id)) - recommended_anime_names.append(anime_data.iloc[i[0]].title) - recommended_anime_urls.append(fetch_anime_url(anime_id)) - - return recommended_anime_names, recommended_anime_posters, recommended_anime_urls - - -# Function to display the top 8 animes with the highest rating -def top_animes(): - style_for_page = """ - - """ - st.markdown(style_for_page, unsafe_allow_html=True) - - top8 = anime_data.sort_values("score", ascending=False).head(8) - - with st.container(): - col0, col1, col2, col3 = st.columns(4) - with col0: - st.button( - label=f"{top8.iloc[0].title}", - key=top8.iloc[0].title, - on_click=lambda: webbrowser.open_new_tab(top8.iloc[0].anime_url), - use_container_width=True, - ) - st.image(top8.iloc[0].poster, use_column_width=True) - with col1: - st.button( - label=f"{top8.iloc[1].title}", - key=top8.iloc[1].title, - on_click=lambda: webbrowser.open_new_tab(top8.iloc[1].anime_url), - use_container_width=True, - ) - st.image(top8.iloc[1].poster, use_column_width=True) - with col2: - st.button( - label=f"{top8.iloc[2].title}", - key=top8.iloc[2].title, - on_click=lambda: webbrowser.open_new_tab(top8.iloc[2].anime_url), - use_container_width=True, - ) - st.image(top8.iloc[2].poster, use_column_width=True) - with col3: - st.button( - label=f"{top8.iloc[3].title}", - key=top8.iloc[3].title, - on_click=lambda: webbrowser.open_new_tab(top8.iloc[3].anime_url), - use_container_width=True, - ) - st.image(top8.iloc[3].poster, use_column_width=True) - - st.divider() - - with st.container(): - col4, col5, col6, col7 = st.columns(4) - with col4: - st.button( - label=f"{top8.iloc[4].title}", - key=top8.iloc[4].title, - on_click=lambda: webbrowser.open_new_tab(top8.iloc[4].anime_url), - use_container_width=True, - ) - st.image(top8.iloc[4].poster, use_column_width=True) - with col5: - st.button( - label=f"{top8.iloc[5].title}", - key=top8.iloc[5].title, - on_click=lambda: webbrowser.open_new_tab(top8.iloc[5].anime_url), - use_container_width=True, - ) - st.image(top8.iloc[5].poster, use_column_width=True) - with col6: - st.button( - label=f"{top8.iloc[6].title}", - key=top8.iloc[6].title, - on_click=lambda: webbrowser.open_new_tab(top8.iloc[6].anime_url), - use_container_width=True, - ) - st.image(top8.iloc[6].poster, use_column_width=True) - with col7: - st.button( - label=f"{top8.iloc[7].title}", - key=top8.iloc[7].title, - on_click=lambda: webbrowser.open_new_tab(top8.iloc[7].anime_url), - use_container_width=True, - ) - st.image(top8.iloc[7].poster, use_column_width=True) - - -# Function to display the top 8 animes for user given genre -def top_animes_genres(genre_select): - style_for_page = """ - - """ - st.markdown(style_for_page, unsafe_allow_html=True) - - top_8_genre = anime_data[ - anime_data["genres"].str.contains(genre_select) - ].sort_values("score", ascending=False)[:8] - col0, col1, col2, col3 = st.columns(4) - with col0: - st.button( - label=f"{top_8_genre.iloc[0].title}", - key=top_8_genre.iloc[0].title, - on_click=lambda: webbrowser.open_new_tab(top_8_genre.iloc[0].anime_url), - use_container_width=True, - ) - st.image(top_8_genre.iloc[0].poster, use_column_width=True) - with col1: 
- st.button( - label=f"{top_8_genre.iloc[1].title}", - key=top_8_genre.iloc[1].title, - on_click=lambda: webbrowser.open_new_tab(top_8_genre.iloc[1].anime_url), - use_container_width=True, - ) - st.image(top_8_genre.iloc[1].poster, use_column_width=True) - with col2: - st.button( - label=f"{top_8_genre.iloc[2].title}", - key=top_8_genre.iloc[2].title, - on_click=lambda: webbrowser.open_new_tab(top_8_genre.iloc[2].anime_url), - use_container_width=True, - ) - st.image(top_8_genre.iloc[2].poster, use_column_width=True) - with col3: - st.button( - label=f"{top_8_genre.iloc[3].title}", - key=top_8_genre.iloc[3].title, - on_click=lambda: webbrowser.open_new_tab(top_8_genre.iloc[3].anime_url), - use_container_width=True, - ) - st.image(top_8_genre.iloc[3].poster, use_column_width=True) - - st.divider() - - col4, col5, col6, col7 = st.columns(4) - with col4: - st.button( - label=f"{top_8_genre.iloc[4].title}", - key=top_8_genre.iloc[4].title, - on_click=lambda: webbrowser.open_new_tab(top_8_genre.iloc[4].anime_url), - use_container_width=True, - ) - st.image(top_8_genre.iloc[4].poster, use_column_width=True) - with col5: - st.button( - label=f"{top_8_genre.iloc[5].title}", - key=top_8_genre.iloc[5].title, - on_click=lambda: webbrowser.open_new_tab(top_8_genre.iloc[5].anime_url), - use_container_width=True, - ) - st.image(top_8_genre.iloc[5].poster, use_column_width=True) - with col6: - st.button( - label=f"{top_8_genre.iloc[6].title}", - key=top_8_genre.iloc[6].title, - on_click=lambda: webbrowser.open_new_tab(top_8_genre.iloc[6].anime_url), - use_container_width=True, - ) - st.image(top_8_genre.iloc[6].poster, use_column_width=True) - with col7: - st.button( - label=f"{top_8_genre.iloc[7].title}", - key=top_8_genre.iloc[7].title, - on_click=lambda: webbrowser.open_new_tab(top_8_genre.iloc[7].anime_url), - use_container_width=True, - ) - st.image(top_8_genre.iloc[7].poster, use_column_width=True) - - -# Function to display the top 8 animes with user given anime name for all genres -def top_animes_custom(anime_select): - style_for_page = """ - - """ - st.markdown(style_for_page, unsafe_allow_html=True) - - ( - recommended_anime_names, - recommended_anime_posters, - recommended_anime_urls, - ) = recommend(anime_select) - with st.container(): - col0, col1, col2, col3 = st.columns(4) - with col0: - st.button( - label=f"{recommended_anime_names[0]}", - key=recommended_anime_names[0], - on_click=lambda: webbrowser.open_new_tab(recommended_anime_urls[0]), - use_container_width=True, - ) - st.image(recommended_anime_posters[0], use_column_width=True) - with col1: - st.button( - label=f"{recommended_anime_names[1]}", - key=recommended_anime_names[1], - on_click=lambda: webbrowser.open_new_tab(recommended_anime_urls[1]), - use_container_width=True, - ) - st.image(recommended_anime_posters[1], use_column_width=True) - with col2: - st.button( - label=f"{recommended_anime_names[2]}", - key=recommended_anime_names[2], - on_click=lambda: webbrowser.open_new_tab(recommended_anime_urls[2]), - use_container_width=True, - ) - st.image(recommended_anime_posters[2], use_column_width=True) - with col3: - st.button( - label=f"{recommended_anime_names[3]}", - key=recommended_anime_names[3], - on_click=lambda: webbrowser.open_new_tab(recommended_anime_urls[3]), - use_container_width=True, - ) - st.image(recommended_anime_posters[3], use_column_width=True) - - st.divider() - - with st.container(): - col4, col5, col6, col7 = st.columns(4) - with col4: - st.button( - label=f"{recommended_anime_names[4]}", - 
key=recommended_anime_names[4], - on_click=lambda: webbrowser.open_new_tab(recommended_anime_urls[4]), - use_container_width=True, - ) - st.image(recommended_anime_posters[4], use_column_width=True) - with col5: - st.button( - label=f"{recommended_anime_names[5]}", - key=recommended_anime_names[5], - on_click=lambda: webbrowser.open_new_tab(recommended_anime_urls[5]), - use_container_width=True, - ) - st.image(recommended_anime_posters[5], use_column_width=True) - with col6: - st.button( - label=f"{recommended_anime_names[6]}", - key=recommended_anime_names[6], - on_click=lambda: webbrowser.open_new_tab(recommended_anime_urls[6]), - use_container_width=True, - ) - st.image(recommended_anime_posters[6], use_column_width=True) - with col7: - st.button( - label=f"{recommended_anime_names[7]}", - key=recommended_anime_names[7], - on_click=lambda: webbrowser.open_new_tab(recommended_anime_urls[7]), - use_container_width=True, - ) - st.image(recommended_anime_posters[7], use_column_width=True) - - -# Function to display the top 8 animes with user given anime name and genre -def top_animes_custom_genres(anime_select, genre_select): - style_for_page = """ - - """ - st.markdown(style_for_page, unsafe_allow_html=True) - - ( - recommended_anime_names, - recommended_anime_posters, - recommended_anime_urls, - ) = recommend(anime_select, genre_select) - with st.container(): - col0, col1, col2, col3 = st.columns(4) - with col0: - st.button( - label=f"{recommended_anime_names[0]}", - key=recommended_anime_names[0], - on_click=lambda: webbrowser.open_new_tab(recommended_anime_urls[0]), - use_container_width=True, - ) - st.image(recommended_anime_posters[0], use_column_width=True) - with col1: - st.button( - label=f"{recommended_anime_names[1]}", - key=recommended_anime_names[1], - on_click=lambda: webbrowser.open_new_tab(recommended_anime_urls[1]), - use_container_width=True, - ) - st.image(recommended_anime_posters[1], use_column_width=True) - with col2: - st.button( - label=f"{recommended_anime_names[2]}", - key=recommended_anime_names[2], - on_click=lambda: webbrowser.open_new_tab(recommended_anime_urls[2]), - use_container_width=True, - ) - st.image(recommended_anime_posters[2], use_column_width=True) - with col3: - st.button( - label=f"{recommended_anime_names[3]}", - key=recommended_anime_names[3], - on_click=lambda: webbrowser.open_new_tab(recommended_anime_urls[3]), - use_container_width=True, - ) - st.image(recommended_anime_posters[3], use_column_width=True) - - st.divider() - - with st.container(): - col4, col5, col6, col7 = st.columns(4) - with col4: - st.button( - label=f"{recommended_anime_names[4]}", - key=recommended_anime_names[4], - on_click=lambda: webbrowser.open_new_tab(recommended_anime_urls[4]), - use_container_width=True, - ) - st.image(recommended_anime_posters[4], use_column_width=True) - with col5: - st.button( - label=f"{recommended_anime_names[5]}", - key=recommended_anime_names[5], - on_click=lambda: webbrowser.open_new_tab(recommended_anime_urls[5]), - use_container_width=True, - ) - st.image(recommended_anime_posters[5], use_column_width=True) - with col6: - st.button( - label=f"{recommended_anime_names[6]}", - key=recommended_anime_names[6], - on_click=lambda: webbrowser.open_new_tab(recommended_anime_urls[6]), - use_container_width=True, - ) - st.image(recommended_anime_posters[6], use_column_width=True) - with col7: - st.button( - label=f"{recommended_anime_names[7]}", - key=recommended_anime_names[7], - on_click=lambda: webbrowser.open_new_tab(recommended_anime_urls[7]), 
- use_container_width=True, - ) - st.image(recommended_anime_posters[7], use_column_width=True) - - -# Recommender Page -def recommender_page(): - style_for_page = """ - - """ - st.markdown(style_for_page, unsafe_allow_html=True) - - st.title("Anime Recommendation System :ninja:") - - anime_list = anime_data["title"].tolist() - anime_list.sort() - anime_list.insert(0, "Top 8 Animes") - anime_select = st.selectbox("Select an Anime", anime_list, key="anime_select") - genre_select = st.selectbox("Select a Genre", get_genres(), key="genre_select") - - if st.button("Recommendation"): - st.divider() - if anime_select == "Top 8 Animes" and genre_select == "All Genres": - top_animes() - st.divider() - elif anime_select == "Top 8 Animes" and genre_select != "All Genres": - top_animes_genres(genre_select) - st.divider() - elif anime_select != "Top 8 Animes" and genre_select == "All Genres": - top_animes_custom(anime_select) - st.divider() - elif anime_select != "Top 8 Animes" and genre_select != "All Genres": - top_animes_custom_genres(anime_select, genre_select) - st.divider() - - -if __name__ == "__main__": - recommender_page() diff --git a/spaces/Harveenchadha/Vakyansh-Malayalam-TTS/ttsv/src/glow_tts/modules.py b/spaces/Harveenchadha/Vakyansh-Malayalam-TTS/ttsv/src/glow_tts/modules.py deleted file mode 100644 index a192251aaccb036780d77d6c8b538b652a5e24e2..0000000000000000000000000000000000000000 --- a/spaces/Harveenchadha/Vakyansh-Malayalam-TTS/ttsv/src/glow_tts/modules.py +++ /dev/null @@ -1,276 +0,0 @@ -import copy -import math -import numpy as np -import scipy -import torch -from torch import nn -from torch.nn import functional as F - -import commons - - -class LayerNorm(nn.Module): - def __init__(self, channels, eps=1e-4): - super().__init__() - self.channels = channels - self.eps = eps - - self.gamma = nn.Parameter(torch.ones(channels)) - self.beta = nn.Parameter(torch.zeros(channels)) - - def forward(self, x): - n_dims = len(x.shape) - mean = torch.mean(x, 1, keepdim=True) - variance = torch.mean((x - mean) ** 2, 1, keepdim=True) - - x = (x - mean) * torch.rsqrt(variance + self.eps) - - shape = [1, -1] + [1] * (n_dims - 2) - x = x * self.gamma.view(*shape) + self.beta.view(*shape) - return x - - -class ConvReluNorm(nn.Module): - def __init__( - self, - in_channels, - hidden_channels, - out_channels, - kernel_size, - n_layers, - p_dropout, - ): - super().__init__() - self.in_channels = in_channels - self.hidden_channels = hidden_channels - self.out_channels = out_channels - self.kernel_size = kernel_size - self.n_layers = n_layers - self.p_dropout = p_dropout - assert n_layers > 1, "Number of layers should be larger than 0." 
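# --- Illustrative check, not part of the original file: the LayerNorm defined
# above normalises over the *channel* axis of (batch, channels, time) activations,
# which equals nn.LayerNorm applied after moving channels to the last dimension.
import torch
import torch.nn as nn

x = torch.randn(2, 8, 16)  # (batch, channels, time)

mean = x.mean(dim=1, keepdim=True)
var = ((x - mean) ** 2).mean(dim=1, keepdim=True)
manual = (x - mean) * torch.rsqrt(var + 1e-4)  # same math as LayerNorm.forward above

builtin = nn.LayerNorm(8, eps=1e-4, elementwise_affine=False)(x.transpose(1, 2)).transpose(1, 2)
print(torch.allclose(manual, builtin, atol=1e-5))  # expected: True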
- - self.conv_layers = nn.ModuleList() - self.norm_layers = nn.ModuleList() - self.conv_layers.append( - nn.Conv1d( - in_channels, hidden_channels, kernel_size, padding=kernel_size // 2 - ) - ) - self.norm_layers.append(LayerNorm(hidden_channels)) - self.relu_drop = nn.Sequential(nn.ReLU(), nn.Dropout(p_dropout)) - for _ in range(n_layers - 1): - self.conv_layers.append( - nn.Conv1d( - hidden_channels, - hidden_channels, - kernel_size, - padding=kernel_size // 2, - ) - ) - self.norm_layers.append(LayerNorm(hidden_channels)) - self.proj = nn.Conv1d(hidden_channels, out_channels, 1) - self.proj.weight.data.zero_() - self.proj.bias.data.zero_() - - def forward(self, x, x_mask): - x_org = x - for i in range(self.n_layers): - x = self.conv_layers[i](x * x_mask) - x = self.norm_layers[i](x) - x = self.relu_drop(x) - x = x_org + self.proj(x) - return x * x_mask - - -class WN(torch.nn.Module): - def __init__( - self, - in_channels, - hidden_channels, - kernel_size, - dilation_rate, - n_layers, - gin_channels=0, - p_dropout=0, - ): - super(WN, self).__init__() - assert kernel_size % 2 == 1 - assert hidden_channels % 2 == 0 - self.in_channels = in_channels - self.hidden_channels = hidden_channels - self.kernel_size = (kernel_size,) - self.dilation_rate = dilation_rate - self.n_layers = n_layers - self.gin_channels = gin_channels - self.p_dropout = p_dropout - - self.in_layers = torch.nn.ModuleList() - self.res_skip_layers = torch.nn.ModuleList() - self.drop = nn.Dropout(p_dropout) - - if gin_channels != 0: - cond_layer = torch.nn.Conv1d( - gin_channels, 2 * hidden_channels * n_layers, 1 - ) - self.cond_layer = torch.nn.utils.weight_norm(cond_layer, name="weight") - - for i in range(n_layers): - dilation = dilation_rate ** i - padding = int((kernel_size * dilation - dilation) / 2) - in_layer = torch.nn.Conv1d( - hidden_channels, - 2 * hidden_channels, - kernel_size, - dilation=dilation, - padding=padding, - ) - in_layer = torch.nn.utils.weight_norm(in_layer, name="weight") - self.in_layers.append(in_layer) - - # last one is not necessary - if i < n_layers - 1: - res_skip_channels = 2 * hidden_channels - else: - res_skip_channels = hidden_channels - - res_skip_layer = torch.nn.Conv1d(hidden_channels, res_skip_channels, 1) - res_skip_layer = torch.nn.utils.weight_norm(res_skip_layer, name="weight") - self.res_skip_layers.append(res_skip_layer) - - def forward(self, x, x_mask=None, g=None, **kwargs): - output = torch.zeros_like(x) - n_channels_tensor = torch.IntTensor([self.hidden_channels]) - - if g is not None: - g = self.cond_layer(g) - - for i in range(self.n_layers): - x_in = self.in_layers[i](x) - x_in = self.drop(x_in) - if g is not None: - cond_offset = i * 2 * self.hidden_channels - g_l = g[:, cond_offset : cond_offset + 2 * self.hidden_channels, :] - else: - g_l = torch.zeros_like(x_in) - - acts = commons.fused_add_tanh_sigmoid_multiply(x_in, g_l, n_channels_tensor) - - res_skip_acts = self.res_skip_layers[i](acts) - if i < self.n_layers - 1: - x = (x + res_skip_acts[:, : self.hidden_channels, :]) * x_mask - output = output + res_skip_acts[:, self.hidden_channels :, :] - else: - output = output + res_skip_acts - return output * x_mask - - def remove_weight_norm(self): - if self.gin_channels != 0: - torch.nn.utils.remove_weight_norm(self.cond_layer) - for l in self.in_layers: - torch.nn.utils.remove_weight_norm(l) - for l in self.res_skip_layers: - torch.nn.utils.remove_weight_norm(l) - - -class ActNorm(nn.Module): - def __init__(self, channels, ddi=False, **kwargs): - super().__init__() - 
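# --- Illustrative sketch, not part of the original file: the gated activation
# that commons.fused_add_tanh_sigmoid_multiply computes inside WN.forward above
# (its definition appears in commons.py later in this diff). The first half of
# the channels drives a tanh "filter", the second a sigmoid "gate", as in
# WaveNet: z = tanh(.) * sigmoid(.). 192 channels here is just a demo size.
import torch

def gated_activation(x_in, g_l, n_channels):
    s = x_in + g_l  # add the (speaker) conditioning signal
    return torch.tanh(s[:, :n_channels]) * torch.sigmoid(s[:, n_channels:])

x_in = torch.randn(1, 2 * 192, 100)   # conv output carries 2 * hidden_channels
g_l = torch.zeros_like(x_in)          # no conditioning in this demo
acts = gated_activation(x_in, g_l, 192)
print(acts.shape)                     # torch.Size([1, 192, 100])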
self.channels = channels - self.initialized = not ddi - - self.logs = nn.Parameter(torch.zeros(1, channels, 1)) - self.bias = nn.Parameter(torch.zeros(1, channels, 1)) - - def forward(self, x, x_mask=None, reverse=False, **kwargs): - if x_mask is None: - x_mask = torch.ones(x.size(0), 1, x.size(2)).to( - device=x.device, dtype=x.dtype - ) - x_len = torch.sum(x_mask, [1, 2]) - if not self.initialized: - self.initialize(x, x_mask) - self.initialized = True - - if reverse: - z = (x - self.bias) * torch.exp(-self.logs) * x_mask - logdet = None - else: - z = (self.bias + torch.exp(self.logs) * x) * x_mask - logdet = torch.sum(self.logs) * x_len # [b] - - return z, logdet - - def store_inverse(self): - pass - - def set_ddi(self, ddi): - self.initialized = not ddi - - def initialize(self, x, x_mask): - with torch.no_grad(): - denom = torch.sum(x_mask, [0, 2]) - m = torch.sum(x * x_mask, [0, 2]) / denom - m_sq = torch.sum(x * x * x_mask, [0, 2]) / denom - v = m_sq - (m ** 2) - logs = 0.5 * torch.log(torch.clamp_min(v, 1e-6)) - - bias_init = ( - (-m * torch.exp(-logs)).view(*self.bias.shape).to(dtype=self.bias.dtype) - ) - logs_init = (-logs).view(*self.logs.shape).to(dtype=self.logs.dtype) - - self.bias.data.copy_(bias_init) - self.logs.data.copy_(logs_init) - - -class InvConvNear(nn.Module): - def __init__(self, channels, n_split=4, no_jacobian=False, **kwargs): - super().__init__() - assert n_split % 2 == 0 - self.channels = channels - self.n_split = n_split - self.no_jacobian = no_jacobian - - w_init = torch.qr(torch.FloatTensor(self.n_split, self.n_split).normal_())[0] - if torch.det(w_init) < 0: - w_init[:, 0] = -1 * w_init[:, 0] - self.weight = nn.Parameter(w_init) - - def forward(self, x, x_mask=None, reverse=False, **kwargs): - b, c, t = x.size() - assert c % self.n_split == 0 - if x_mask is None: - x_mask = 1 - x_len = torch.ones((b,), dtype=x.dtype, device=x.device) * t - else: - x_len = torch.sum(x_mask, [1, 2]) - - x = x.view(b, 2, c // self.n_split, self.n_split // 2, t) - x = ( - x.permute(0, 1, 3, 2, 4) - .contiguous() - .view(b, self.n_split, c // self.n_split, t) - ) - - if reverse: - if hasattr(self, "weight_inv"): - weight = self.weight_inv - else: - weight = torch.inverse(self.weight.float()).to(dtype=self.weight.dtype) - logdet = None - else: - weight = self.weight - if self.no_jacobian: - logdet = 0 - else: - logdet = torch.logdet(self.weight) * (c / self.n_split) * x_len # [b] - - weight = weight.view(self.n_split, self.n_split, 1, 1) - z = F.conv2d(x, weight) - - z = z.view(b, 2, self.n_split // 2, c // self.n_split, t) - z = z.permute(0, 1, 3, 2, 4).contiguous().view(b, c, t) * x_mask - return z, logdet - - def store_inverse(self): - self.weight_inv = torch.inverse(self.weight.float()).to(dtype=self.weight.dtype) diff --git a/spaces/Hila/RobustViT/app.py b/spaces/Hila/RobustViT/app.py deleted file mode 100644 index ddb0ea6e42736d06def4d2e21353dbb6c857321b..0000000000000000000000000000000000000000 --- a/spaces/Hila/RobustViT/app.py +++ /dev/null @@ -1,175 +0,0 @@ -import torch -import timm -import gradio as gr -from huggingface_hub import hf_hub_download -import os -from ViT.ViT_new import vit_base_patch16_224 as vit -import torchvision.transforms as transforms -import requests -from PIL import Image -import numpy as np -import cv2 -import pathlib - - -# create heatmap from mask on image -def show_cam_on_image(img, mask): - heatmap = cv2.applyColorMap(np.uint8(255 * mask), cv2.COLORMAP_JET) - heatmap = np.float32(heatmap) / 255 - cam = heatmap + np.float32(img) - cam = cam 
/ np.max(cam) - return cam - -start_layer = 0 - -# rule 5 from paper -def avg_heads(cam, grad): - cam = cam.reshape(-1, cam.shape[-2], cam.shape[-1]) - grad = grad.reshape(-1, grad.shape[-2], grad.shape[-1]) - cam = grad * cam - cam = cam.clamp(min=0).mean(dim=0) - return cam - -# rule 6 from paper -def apply_self_attention_rules(R_ss, cam_ss): - R_ss_addition = torch.matmul(cam_ss, R_ss) - return R_ss_addition - -def generate_relevance(model, input, index=None): - output = model(input, register_hook=True) - if index == None: - index = np.argmax(output.cpu().data.numpy(), axis=-1) - - one_hot = np.zeros((1, output.size()[-1]), dtype=np.float32) - one_hot[0, index] = 1 - one_hot_vector = one_hot - one_hot = torch.from_numpy(one_hot).requires_grad_(True) - one_hot = torch.sum(one_hot * output) - model.zero_grad() - one_hot.backward(retain_graph=True) - - num_tokens = model.blocks[0].attn.get_attention_map().shape[-1] - R = torch.eye(num_tokens, num_tokens) - for i,blk in enumerate(model.blocks): - if i < start_layer: - continue - grad = blk.attn.get_attn_gradients() - cam = blk.attn.get_attention_map() - cam = avg_heads(cam, grad) - R += apply_self_attention_rules(R, cam) - return R[0, 1:] - -def generate_visualization(model, original_image, class_index=None): - with torch.enable_grad(): - transformer_attribution = generate_relevance(model, original_image.unsqueeze(0), index=class_index).detach() - transformer_attribution = transformer_attribution.reshape(1, 1, 14, 14) - transformer_attribution = torch.nn.functional.interpolate(transformer_attribution, scale_factor=16, mode='bilinear') - transformer_attribution = transformer_attribution.reshape(224, 224).data.cpu().numpy() - transformer_attribution = (transformer_attribution - transformer_attribution.min()) / (transformer_attribution.max() - transformer_attribution.min()) - - image_transformer_attribution = original_image.permute(1, 2, 0).data.cpu().numpy() - image_transformer_attribution = (image_transformer_attribution - image_transformer_attribution.min()) / (image_transformer_attribution.max() - image_transformer_attribution.min()) - vis = show_cam_on_image(image_transformer_attribution, transformer_attribution) - vis = np.uint8(255 * vis) - vis = cv2.cvtColor(np.array(vis), cv2.COLOR_RGB2BGR) - return vis - -model_finetuned = None -model = None - -normalize = transforms.Normalize(mean=[0.5, 0.5, 0.5], std=[0.5, 0.5, 0.5]) -transform_224 = transforms.Compose([ - transforms.ToTensor(), - normalize, -]) - -# Download human-readable labels for ImageNet. 
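# --- Illustrative sketch, not part of the original file: the core recurrence of
# generate_relevance above, run on random data. Per layer, attention maps are
# fused with their gradients and head-averaged (rule 5), then accumulated into a
# running relevance matrix via R <- R + A_bar @ R (rule 6), starting from identity.
import torch

num_tokens, num_heads, num_layers = 197, 12, 12  # ViT-B/16 @ 224px: 196 patches + CLS
R = torch.eye(num_tokens)
for _ in range(num_layers):
    cam = torch.rand(num_heads, num_tokens, num_tokens)    # attention maps
    grad = torch.randn(num_heads, num_tokens, num_tokens)  # their gradients
    cam_bar = (grad * cam).clamp(min=0).mean(dim=0)        # rule 5
    R = R + cam_bar @ R                                    # rule 6
patch_relevance = R[0, 1:]  # CLS-token row, minus CLS itself -> one score per patch
print(patch_relevance.shape)  # torch.Size([196]) -> reshaped to 14x14 for display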
-response = requests.get("https://git.io/JJkYN") -labels = response.text.split("\n") - -def image_classifier(inp): - image = transform_224(inp) - print(image.shape) - #return model_finetuned(image.unsqueeze(0)) - with torch.no_grad(): - prediction = torch.nn.functional.softmax(model_finetuned(image.unsqueeze(0))[0], dim=0) - confidences = {labels[i]: float(prediction[i]) for i in range(1000)} - heatmap = generate_visualization(model_finetuned, image) - - prediction_orig = torch.nn.functional.softmax(model(image.unsqueeze(0))[0], dim=0) - confidences_orig = {labels[i]: float(prediction_orig[i]) for i in range(1000)} - heatmap_orig = generate_visualization(model, image) - return confidences, heatmap, confidences_orig, heatmap_orig - -def _load_model(model_name: str): - global model_finetuned, model - path = hf_hub_download('Hila/RobustViT', - f'{model_name}') - - model = vit(pretrained=True) - model.eval() - model_finetuned = vit() - checkpoint = torch.load(path, map_location='cpu') - model_finetuned.load_state_dict(checkpoint['state_dict']) - model_finetuned.eval() - -_load_model('ar_base.tar') - -def _set_example_image(example: list) -> dict: - return gr.Image.update(value=example[0]) - -def _clear_image(): - return None - -demo = gr.Blocks(css='style.css') - -with demo: - - - with gr.Row(): - with gr.Column(): - gr.Markdown('## [Optimizing Relevance Maps of Vision Transformers Improves Robustness](https://github.com/hila-chefer/RobustViT) - Official Demo') - # gr.Markdown('This is an official demo for [Optimizing Relevance Maps of Vision Transformers Improves Robustness](https://github.com/hila-chefer/RobustViT).') - gr.Markdown('Select or upload an image and then click **Submit** to see the output.') - with gr.Row(): - input_image = gr.Image(shape=(224,224)) - with gr.Row(): - btn = gr.Button("Submit", variant="primary") - clear_btn = gr.Button('Clear') - with gr.Column(): - gr.Markdown('### Examples') - gr.Markdown('#### Corrected Prediction') - with gr.Row(): - paths = sorted(pathlib.Path('samples/corrected').rglob('*.png')) - corrected_pred_examples = gr.Dataset(components=[input_image], headers=['header'], - samples=[[path.as_posix()] for path in paths]) - - gr.Markdown('#### Improved Explainability') - with gr.Row(): - paths = sorted(pathlib.Path('samples/better_expl').rglob('*.png')) - better_expl = gr.Dataset(components=[input_image], headers=['header'], - samples=[[path.as_posix()] for path in paths]) - - - #gr.Markdown('### Results:') - - with gr.Row(): - with gr.Column(): - gr.Markdown('### Ours (finetuned model)') - out1 = gr.outputs.Label(label="Our Classification", num_top_classes=3) - out2 = gr.Image(label="Our Relevance",shape=(224,224), elem_id="expl1") - - with gr.Column(): - gr.Markdown('### Original model') - out3 = gr.outputs.Label(label="Original Classification", num_top_classes=3) - out4 = gr.Image(label="Original Relevance",shape=(224,224),elem_id="expl2") - - - corrected_pred_examples.click(fn=_set_example_image, inputs=corrected_pred_examples, outputs=input_image) - better_expl.click(fn=_set_example_image, inputs=better_expl, outputs=input_image) - btn.click(fn=image_classifier, inputs=input_image, outputs=[out1, out2, out3, out4]) - clear_btn.click(fn=_clear_image, inputs=[], outputs=[input_image]) - - -demo.launch() - \ No newline at end of file diff --git a/spaces/ICML2022/OFA/fairseq/fairseq/models/distributed_fairseq_model.py b/spaces/ICML2022/OFA/fairseq/fairseq/models/distributed_fairseq_model.py deleted file mode 100644 index 
5eda2276404ca686be124901674ddfe36bd6dfd1..0000000000000000000000000000000000000000 --- a/spaces/ICML2022/OFA/fairseq/fairseq/models/distributed_fairseq_model.py +++ /dev/null @@ -1,146 +0,0 @@ -# Copyright (c) Facebook, Inc. and its affiliates. -# -# This source code is licensed under the MIT license found in the -# LICENSE file in the root directory of this source tree. - -import logging -import os -import signal -import threading - -import torch -import torch.nn as nn -from torch.nn.parallel import DistributedDataParallel - -from fairseq.distributed import ( - DistributedTimeoutWrapper, - LegacyDistributedDataParallel, - ModuleProxyWrapper, - TPUDistributedDataParallel, -) - - -logger = logging.getLogger(__name__) - - -_GOSSIP_DISABLED = False -try: - import gossip -except ImportError: - _GOSSIP_DISABLED = True - - -def DistributedFairseqModel(args, model, process_group, device): - """ - Wrap a *model* to support distributed data parallel training. - - This is similar to the built-in DistributedDataParallel, but allows - additional configuration of the DistributedDataParallel class to - use, and also provides easier access to the wrapped model by - forwarding requests for missing attributes to the wrapped model. - - Args: - args (argparse.Namespace): fairseq args - model (BaseFairseqModel): model to wrap - process_group: the c10d process group to be used for distributed data - parallel all-reduction. - device: device to move model to - """ - assert isinstance(model, nn.Module) - if args.tpu: - wrapped_model = TPUDistributedDataParallel( - module=model.to(device), - process_group=process_group, - ) - # forward missing getattr and state_dict/load_state_dict to orig model - wrapped_model = ModuleProxyWrapper(wrapped_model) - elif args.ddp_backend in {"c10d", "pytorch_ddp"}: - wrapped_model = DistributedDataParallel( - module=model.to(device), - device_ids=[args.device_id], - output_device=args.device_id, - broadcast_buffers=args.broadcast_buffers, - bucket_cap_mb=args.bucket_cap_mb, - process_group=process_group, - find_unused_parameters=args.find_unused_parameters, - gradient_as_bucket_view=args.gradient_as_bucket_view, - ) - if args.ddp_comm_hook == "fp16": - logger.info("enable fp16 communication hook in DDP") - try: - from torch.distributed.algorithms.ddp_comm_hooks import ( - register_ddp_comm_hook, - DDPCommHookType, - ) - except: - logger.error( - "Could not import from torch.distributed.algorithms.ddp_comm_hooks; you may need to update your pytorch version" - ) - raise - - register_ddp_comm_hook(DDPCommHookType.FP16_COMPRESS, wrapped_model) - # forward missing getattr and state_dict/load_state_dict to orig model - wrapped_model = ModuleProxyWrapper(wrapped_model) - elif args.ddp_backend in {"no_c10d", "legacy_ddp"}: - wrapped_model = LegacyDistributedDataParallel( - module=model.to(device), - buffer_size=2 ** 28, - process_group=process_group, - ) - # forward missing getattr and state_dict/load_state_dict to orig model - wrapped_model = ModuleProxyWrapper(wrapped_model) - elif args.ddp_backend == "slow_mo": - if _GOSSIP_DISABLED: - raise ImportError( - "Cannot find gossip library. 
Please install from: " - "github.com/facebookresearch/stochastic_gradient_push" - ) - - # The values of slowmo_momentum below were obtained by tuning on the - # En-De 16 dataset by training the transformer_wmt_en_de_large model - if args.slowmo_momentum is None: - if args.distributed_world_size <= 16: - args.slowmo_momentum = 0.0 - elif args.distributed_world_size <= 32: - args.slowmo_momentum = 0.2 - elif args.distributed_world_size <= 64: - args.slowmo_momentum = 0.5 - else: - args.slowmo_momentum = 0.6 - - wrapped_model = gossip.GossipDataParallel( - module=model.to(device), - device_ids=[args.device_id], - output_device=args.device_id, - broadcast_buffers=args.broadcast_buffers, - nprocs_per_node=args.nprocs_per_node, - slowmo_momentum=args.slowmo_momentum, - localsgd=(args.slowmo_algorithm == "LocalSGD"), - localsgd_frequency=args.localsgd_frequency, - ) - # forward missing getattr and state_dict/load_state_dict to orig model - wrapped_model = ModuleProxyWrapper(wrapped_model) - elif args.ddp_backend == "fully_sharded": - try: - from fairscale.nn.data_parallel import FullyShardedDataParallel as FSDP - except ImportError: - raise ImportError( - "Cannot find FullyShardedDataParallel. " - "Please install fairscale with: pip install fairscale" - ) - assert isinstance(model, FSDP), "expected model to already be wrapped in FSDP" - wrapped_model = model - if args.memory_efficient_fp16: - wrapped_model = wrapped_model.half() - if not args.cpu_offload: - wrapped_model = wrapped_model.to(device=device) - else: - raise ValueError("Unknown --ddp-backend: " + args.ddp_backend) - - # kill hung distributed jobs after a timeout - if getattr(args, "heartbeat_timeout", -1) > 0: - wrapped_model = DistributedTimeoutWrapper( - wrapped_model, timeout=getattr(args, "heartbeat_timeout", -1) - ) - - return wrapped_model diff --git a/spaces/Ibtehaj10/cheating-detection-FYP/social_distancing.py b/spaces/Ibtehaj10/cheating-detection-FYP/social_distancing.py deleted file mode 100644 index 5e3d78e8d57d90154165119168a2a91f8ab450e1..0000000000000000000000000000000000000000 --- a/spaces/Ibtehaj10/cheating-detection-FYP/social_distancing.py +++ /dev/null @@ -1,152 +0,0 @@ -import cv2 -import datetime -import imutils -import numpy as np -from centroidtracker import CentroidTracker -from itertools import combinations -import math - -protopath = "MobileNetSSD_deploy.prototxt" -modelpath = "MobileNetSSD_deploy.caffemodel" -detector = cv2.dnn.readNetFromCaffe(prototxt=protopath, caffeModel=modelpath) -# detector.setPreferableBackend(cv2.dnn.DNN_BACKEND_INFERENCE_ENGINE) -# detector.setPreferableTarget(cv2.dnn.DNN_TARGET_CPU) - - -CLASSES = ["background", "aeroplane", "bicycle", "bird", "boat", - "bottle", "bus", "car", "cat", "chair", "cow", "diningtable", - "dog", "horse", "motorbike", "person", "pottedplant", "sheep", - "sofa", "train", "tvmonitor"] - -tracker = CentroidTracker(maxDisappeared=40, maxDistance=50) - - -def non_max_suppression_fast(boxes, overlapThresh): - try: - if len(boxes) == 0: - return [] - - if boxes.dtype.kind == "i": - boxes = boxes.astype("float") - - pick = [] - - x1 = boxes[:, 0] - y1 = boxes[:, 1] - x2 = boxes[:, 2] - y2 = boxes[:, 3] - - area = (x2 - x1 + 1) * (y2 - y1 + 1) - idxs = np.argsort(y2) - - while len(idxs) > 0: - last = len(idxs) - 1 - i = idxs[last] - pick.append(i) - - xx1 = np.maximum(x1[i], x1[idxs[:last]]) - yy1 = np.maximum(y1[i], y1[idxs[:last]]) - xx2 = np.minimum(x2[i], x2[idxs[:last]]) - yy2 = np.minimum(y2[i], y2[idxs[:last]]) - - w = np.maximum(0, xx2 - xx1 + 1) - h = 
np.maximum(0, yy2 - yy1 + 1) - - overlap = (w * h) / area[idxs[:last]] - - idxs = np.delete(idxs, np.concatenate(([last], - np.where(overlap > overlapThresh)[0]))) - - return boxes[pick].astype("int") - except Exception as e: - print("Exception occurred in non_max_suppression : {}".format(e)) - - -def main(): - cap = cv2.VideoCapture('testvideo2.mp4') - - fps_start_time = datetime.datetime.now() - fps = 0 - total_frames = 0 - - while True: - ret, frame = cap.read() - frame = imutils.resize(frame, width=600) - total_frames = total_frames + 1 - - (H, W) = frame.shape[:2] - - blob = cv2.dnn.blobFromImage(frame, 0.007843, (W, H), 127.5) - - detector.setInput(blob) - person_detections = detector.forward() - rects = [] - for i in np.arange(0, person_detections.shape[2]): - confidence = person_detections[0, 0, i, 2] - if confidence > 0.5: - idx = int(person_detections[0, 0, i, 1]) - - if CLASSES[idx] != "person": - continue - - person_box = person_detections[0, 0, i, 3:7] * np.array([W, H, W, H]) - (startX, startY, endX, endY) = person_box.astype("int") - rects.append(person_box) - - boundingboxes = np.array(rects) - boundingboxes = boundingboxes.astype(int) - rects = non_max_suppression_fast(boundingboxes, 0.3) - centroid_dict = dict() - objects = tracker.update(rects) - for (objectId, bbox) in objects.items(): - x1, y1, x2, y2 = bbox - x1 = int(x1) - y1 = int(y1) - x2 = int(x2) - y2 = int(y2) - cX = int((x1 + x2) / 2.0) - cY = int((y1 + y2) / 2.0) - - - centroid_dict[objectId] = (cX, cY, x1, y1, x2, y2) - - # text = "ID: {}".format(objectId) - # cv2.putText(frame, text, (x1, y1-5), cv2.FONT_HERSHEY_COMPLEX_SMALL, 1, (0, 0, 255), 1) - - red_zone_list = [] - for (id1, p1), (id2, p2) in combinations(centroid_dict.items(), 2): - dx, dy = p1[0] - p2[0], p1[1] - p2[1] - distance = math.sqrt(dx * dx + dy * dy) - if distance < 75.0: - if id1 not in red_zone_list: - red_zone_list.append(id1) - if id2 not in red_zone_list: - red_zone_list.append(id2) - - for id, box in centroid_dict.items(): - if id in red_zone_list: - cv2.rectangle(frame, (box[2], box[3]), (box[4], box[5]), (0, 0, 255), 2) - else: - cv2.rectangle(frame, (box[2], box[3]), (box[4], box[5]), (0, 255, 0), 2) - - - fps_end_time = datetime.datetime.now() - time_diff = fps_end_time - fps_start_time - if time_diff.seconds == 0: - fps = 0.0 - else: - fps = (total_frames / time_diff.seconds) - - fps_text = "FPS: {:.2f}".format(fps) - - cv2.putText(frame, fps_text, (5, 30), cv2.FONT_HERSHEY_COMPLEX_SMALL, 1, (0, 0, 255), 1) - - cv2.imshow("Application", frame) - key = cv2.waitKey(1) - if key == ord('q'): - break - - cv2.destroyAllWindows() - - -main() diff --git a/spaces/Iceclear/StableSR/StableSR/basicsr/ops/dcn/deform_conv.py b/spaces/Iceclear/StableSR/StableSR/basicsr/ops/dcn/deform_conv.py deleted file mode 100644 index 6268ca825d59ef4a30d4d2156c4438cbbe9b3c1e..0000000000000000000000000000000000000000 --- a/spaces/Iceclear/StableSR/StableSR/basicsr/ops/dcn/deform_conv.py +++ /dev/null @@ -1,379 +0,0 @@ -import math -import os -import torch -from torch import nn as nn -from torch.autograd import Function -from torch.autograd.function import once_differentiable -from torch.nn import functional as F -from torch.nn.modules.utils import _pair, _single - -BASICSR_JIT = os.getenv('BASICSR_JIT') -if BASICSR_JIT == 'True': - from torch.utils.cpp_extension import load - module_path = os.path.dirname(__file__) - deform_conv_ext = load( - 'deform_conv', - sources=[ - os.path.join(module_path, 'src', 'deform_conv_ext.cpp'), - os.path.join(module_path, 
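# --- Illustrative sketch, not part of the original file: the proximity test from
# main() above in isolation. math.hypot(dx, dy) equals sqrt(dx*dx + dy*dy); the
# 75-pixel threshold is the one hard-coded in the app, and the centroid
# coordinates below are made-up demo values.
import math
from itertools import combinations

centroids = {1: (100, 120), 2: (140, 150), 3: (400, 60)}  # objectId -> (cX, cY)
too_close = set()
for (id1, (x1, y1)), (id2, (x2, y2)) in combinations(centroids.items(), 2):
    if math.hypot(x1 - x2, y1 - y2) < 75.0:
        too_close.update((id1, id2))
print(sorted(too_close))  # [1, 2] -- these two boxes would be drawn in red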
'src', 'deform_conv_cuda.cpp'), - os.path.join(module_path, 'src', 'deform_conv_cuda_kernel.cu'), - ], - ) -else: - try: - from . import deform_conv_ext - except ImportError: - pass - # avoid annoying print output - # print(f'Cannot import deform_conv_ext. Error: {error}. You may need to: \n ' - # '1. compile with BASICSR_EXT=True. or\n ' - # '2. set BASICSR_JIT=True during running') - - -class DeformConvFunction(Function): - - @staticmethod - def forward(ctx, - input, - offset, - weight, - stride=1, - padding=0, - dilation=1, - groups=1, - deformable_groups=1, - im2col_step=64): - if input is not None and input.dim() != 4: - raise ValueError(f'Expected 4D tensor as input, got {input.dim()}D tensor instead.') - ctx.stride = _pair(stride) - ctx.padding = _pair(padding) - ctx.dilation = _pair(dilation) - ctx.groups = groups - ctx.deformable_groups = deformable_groups - ctx.im2col_step = im2col_step - - ctx.save_for_backward(input, offset, weight) - - output = input.new_empty(DeformConvFunction._output_size(input, weight, ctx.padding, ctx.dilation, ctx.stride)) - - ctx.bufs_ = [input.new_empty(0), input.new_empty(0)] # columns, ones - - if not input.is_cuda: - raise NotImplementedError - else: - cur_im2col_step = min(ctx.im2col_step, input.shape[0]) - assert (input.shape[0] % cur_im2col_step) == 0, 'im2col step must divide batchsize' - deform_conv_ext.deform_conv_forward(input, weight, - offset, output, ctx.bufs_[0], ctx.bufs_[1], weight.size(3), - weight.size(2), ctx.stride[1], ctx.stride[0], ctx.padding[1], - ctx.padding[0], ctx.dilation[1], ctx.dilation[0], ctx.groups, - ctx.deformable_groups, cur_im2col_step) - return output - - @staticmethod - @once_differentiable - def backward(ctx, grad_output): - input, offset, weight = ctx.saved_tensors - - grad_input = grad_offset = grad_weight = None - - if not grad_output.is_cuda: - raise NotImplementedError - else: - cur_im2col_step = min(ctx.im2col_step, input.shape[0]) - assert (input.shape[0] % cur_im2col_step) == 0, 'im2col step must divide batchsize' - - if ctx.needs_input_grad[0] or ctx.needs_input_grad[1]: - grad_input = torch.zeros_like(input) - grad_offset = torch.zeros_like(offset) - deform_conv_ext.deform_conv_backward_input(input, offset, grad_output, grad_input, - grad_offset, weight, ctx.bufs_[0], weight.size(3), - weight.size(2), ctx.stride[1], ctx.stride[0], ctx.padding[1], - ctx.padding[0], ctx.dilation[1], ctx.dilation[0], ctx.groups, - ctx.deformable_groups, cur_im2col_step) - - if ctx.needs_input_grad[2]: - grad_weight = torch.zeros_like(weight) - deform_conv_ext.deform_conv_backward_parameters(input, offset, grad_output, grad_weight, - ctx.bufs_[0], ctx.bufs_[1], weight.size(3), - weight.size(2), ctx.stride[1], ctx.stride[0], - ctx.padding[1], ctx.padding[0], ctx.dilation[1], - ctx.dilation[0], ctx.groups, ctx.deformable_groups, 1, - cur_im2col_step) - - return (grad_input, grad_offset, grad_weight, None, None, None, None, None) - - @staticmethod - def _output_size(input, weight, padding, dilation, stride): - channels = weight.size(0) - output_size = (input.size(0), channels) - for d in range(input.dim() - 2): - in_size = input.size(d + 2) - pad = padding[d] - kernel = dilation[d] * (weight.size(d + 2) - 1) + 1 - stride_ = stride[d] - output_size += ((in_size + (2 * pad) - kernel) // stride_ + 1, ) - if not all(map(lambda s: s > 0, output_size)): - raise ValueError(f'convolution input is too small (output would be {"x".join(map(str, output_size))})') - return output_size - - -class ModulatedDeformConvFunction(Function): - - 
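# --- Illustrative alternative, not part of the original file: modern torchvision
# ships an equivalent operator, so the custom CUDA extension above is only needed
# where torchvision is unavailable. Offsets are laid out as
# (B, 2*kh*kw, H_out, W_out): one (dy, dx) pair per kernel tap per output pixel;
# a `mask` argument covers the modulated variant in newer torchvision releases.
import torch
from torchvision.ops import deform_conv2d

x = torch.randn(1, 16, 32, 32)
weight = torch.randn(8, 16, 3, 3)           # (out_ch, in_ch, kh, kw)
offset = torch.zeros(1, 2 * 3 * 3, 32, 32)  # all-zero offsets -> plain convolution
out = deform_conv2d(x, offset, weight, padding=1)
print(out.shape)  # torch.Size([1, 8, 32, 32])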
@staticmethod - def forward(ctx, - input, - offset, - mask, - weight, - bias=None, - stride=1, - padding=0, - dilation=1, - groups=1, - deformable_groups=1): - ctx.stride = stride - ctx.padding = padding - ctx.dilation = dilation - ctx.groups = groups - ctx.deformable_groups = deformable_groups - ctx.with_bias = bias is not None - if not ctx.with_bias: - bias = input.new_empty(1) # fake tensor - if not input.is_cuda: - raise NotImplementedError - if weight.requires_grad or mask.requires_grad or offset.requires_grad or input.requires_grad: - ctx.save_for_backward(input, offset, mask, weight, bias) - output = input.new_empty(ModulatedDeformConvFunction._infer_shape(ctx, input, weight)) - ctx._bufs = [input.new_empty(0), input.new_empty(0)] - deform_conv_ext.modulated_deform_conv_forward(input, weight, bias, ctx._bufs[0], offset, mask, output, - ctx._bufs[1], weight.shape[2], weight.shape[3], ctx.stride, - ctx.stride, ctx.padding, ctx.padding, ctx.dilation, ctx.dilation, - ctx.groups, ctx.deformable_groups, ctx.with_bias) - return output - - @staticmethod - @once_differentiable - def backward(ctx, grad_output): - if not grad_output.is_cuda: - raise NotImplementedError - input, offset, mask, weight, bias = ctx.saved_tensors - grad_input = torch.zeros_like(input) - grad_offset = torch.zeros_like(offset) - grad_mask = torch.zeros_like(mask) - grad_weight = torch.zeros_like(weight) - grad_bias = torch.zeros_like(bias) - deform_conv_ext.modulated_deform_conv_backward(input, weight, bias, ctx._bufs[0], offset, mask, ctx._bufs[1], - grad_input, grad_weight, grad_bias, grad_offset, grad_mask, - grad_output, weight.shape[2], weight.shape[3], ctx.stride, - ctx.stride, ctx.padding, ctx.padding, ctx.dilation, ctx.dilation, - ctx.groups, ctx.deformable_groups, ctx.with_bias) - if not ctx.with_bias: - grad_bias = None - - return (grad_input, grad_offset, grad_mask, grad_weight, grad_bias, None, None, None, None, None) - - @staticmethod - def _infer_shape(ctx, input, weight): - n = input.size(0) - channels_out = weight.size(0) - height, width = input.shape[2:4] - kernel_h, kernel_w = weight.shape[2:4] - height_out = (height + 2 * ctx.padding - (ctx.dilation * (kernel_h - 1) + 1)) // ctx.stride + 1 - width_out = (width + 2 * ctx.padding - (ctx.dilation * (kernel_w - 1) + 1)) // ctx.stride + 1 - return n, channels_out, height_out, width_out - - -deform_conv = DeformConvFunction.apply -modulated_deform_conv = ModulatedDeformConvFunction.apply - - -class DeformConv(nn.Module): - - def __init__(self, - in_channels, - out_channels, - kernel_size, - stride=1, - padding=0, - dilation=1, - groups=1, - deformable_groups=1, - bias=False): - super(DeformConv, self).__init__() - - assert not bias - assert in_channels % groups == 0, f'in_channels {in_channels} is not divisible by groups {groups}' - assert out_channels % groups == 0, f'out_channels {out_channels} is not divisible by groups {groups}' - - self.in_channels = in_channels - self.out_channels = out_channels - self.kernel_size = _pair(kernel_size) - self.stride = _pair(stride) - self.padding = _pair(padding) - self.dilation = _pair(dilation) - self.groups = groups - self.deformable_groups = deformable_groups - # enable compatibility with nn.Conv2d - self.transposed = False - self.output_padding = _single(0) - - self.weight = nn.Parameter(torch.Tensor(out_channels, in_channels // self.groups, *self.kernel_size)) - - self.reset_parameters() - - def reset_parameters(self): - n = self.in_channels - for k in self.kernel_size: - n *= k - stdv = 1. 
/ math.sqrt(n) - self.weight.data.uniform_(-stdv, stdv) - - def forward(self, x, offset): - # To fix an assert error in deform_conv_cuda.cpp:128 - # input image is smaller than kernel - input_pad = (x.size(2) < self.kernel_size[0] or x.size(3) < self.kernel_size[1]) - if input_pad: - pad_h = max(self.kernel_size[0] - x.size(2), 0) - pad_w = max(self.kernel_size[1] - x.size(3), 0) - x = F.pad(x, (0, pad_w, 0, pad_h), 'constant', 0).contiguous() - offset = F.pad(offset, (0, pad_w, 0, pad_h), 'constant', 0).contiguous() - out = deform_conv(x, offset, self.weight, self.stride, self.padding, self.dilation, self.groups, - self.deformable_groups) - if input_pad: - out = out[:, :, :out.size(2) - pad_h, :out.size(3) - pad_w].contiguous() - return out - - -class DeformConvPack(DeformConv): - """A Deformable Conv Encapsulation that acts as normal Conv layers. - - Args: - in_channels (int): Same as nn.Conv2d. - out_channels (int): Same as nn.Conv2d. - kernel_size (int or tuple[int]): Same as nn.Conv2d. - stride (int or tuple[int]): Same as nn.Conv2d. - padding (int or tuple[int]): Same as nn.Conv2d. - dilation (int or tuple[int]): Same as nn.Conv2d. - groups (int): Same as nn.Conv2d. - bias (bool or str): If specified as `auto`, it will be decided by the - norm_cfg. Bias will be set as True if norm_cfg is None, otherwise - False. - """ - - _version = 2 - - def __init__(self, *args, **kwargs): - super(DeformConvPack, self).__init__(*args, **kwargs) - - self.conv_offset = nn.Conv2d( - self.in_channels, - self.deformable_groups * 2 * self.kernel_size[0] * self.kernel_size[1], - kernel_size=self.kernel_size, - stride=_pair(self.stride), - padding=_pair(self.padding), - dilation=_pair(self.dilation), - bias=True) - self.init_offset() - - def init_offset(self): - self.conv_offset.weight.data.zero_() - self.conv_offset.bias.data.zero_() - - def forward(self, x): - offset = self.conv_offset(x) - return deform_conv(x, offset, self.weight, self.stride, self.padding, self.dilation, self.groups, - self.deformable_groups) - - -class ModulatedDeformConv(nn.Module): - - def __init__(self, - in_channels, - out_channels, - kernel_size, - stride=1, - padding=0, - dilation=1, - groups=1, - deformable_groups=1, - bias=True): - super(ModulatedDeformConv, self).__init__() - self.in_channels = in_channels - self.out_channels = out_channels - self.kernel_size = _pair(kernel_size) - self.stride = stride - self.padding = padding - self.dilation = dilation - self.groups = groups - self.deformable_groups = deformable_groups - self.with_bias = bias - # enable compatibility with nn.Conv2d - self.transposed = False - self.output_padding = _single(0) - - self.weight = nn.Parameter(torch.Tensor(out_channels, in_channels // groups, *self.kernel_size)) - if bias: - self.bias = nn.Parameter(torch.Tensor(out_channels)) - else: - self.register_parameter('bias', None) - self.init_weights() - - def init_weights(self): - n = self.in_channels - for k in self.kernel_size: - n *= k - stdv = 1. / math.sqrt(n) - self.weight.data.uniform_(-stdv, stdv) - if self.bias is not None: - self.bias.data.zero_() - - def forward(self, x, offset, mask): - return modulated_deform_conv(x, offset, mask, self.weight, self.bias, self.stride, self.padding, self.dilation, - self.groups, self.deformable_groups) - - -class ModulatedDeformConvPack(ModulatedDeformConv): - """A ModulatedDeformable Conv Encapsulation that acts as normal Conv layers. - - Args: - in_channels (int): Same as nn.Conv2d. - out_channels (int): Same as nn.Conv2d. 
- kernel_size (int or tuple[int]): Same as nn.Conv2d. - stride (int or tuple[int]): Same as nn.Conv2d. - padding (int or tuple[int]): Same as nn.Conv2d. - dilation (int or tuple[int]): Same as nn.Conv2d. - groups (int): Same as nn.Conv2d. - bias (bool or str): If specified as `auto`, it will be decided by the - norm_cfg. Bias will be set as True if norm_cfg is None, otherwise - False. - """ - - _version = 2 - - def __init__(self, *args, **kwargs): - super(ModulatedDeformConvPack, self).__init__(*args, **kwargs) - - self.conv_offset = nn.Conv2d( - self.in_channels, - self.deformable_groups * 3 * self.kernel_size[0] * self.kernel_size[1], - kernel_size=self.kernel_size, - stride=_pair(self.stride), - padding=_pair(self.padding), - dilation=_pair(self.dilation), - bias=True) - self.init_weights() - - def init_weights(self): - super(ModulatedDeformConvPack, self).init_weights() - if hasattr(self, 'conv_offset'): - self.conv_offset.weight.data.zero_() - self.conv_offset.bias.data.zero_() - - def forward(self, x): - out = self.conv_offset(x) - o1, o2, mask = torch.chunk(out, 3, dim=1) - offset = torch.cat((o1, o2), dim=1) - mask = torch.sigmoid(mask) - return modulated_deform_conv(x, offset, mask, self.weight, self.bias, self.stride, self.padding, self.dilation, - self.groups, self.deformable_groups) diff --git a/spaces/Ikaros521/VITS-fast-fine-tuning_nymph/commons.py b/spaces/Ikaros521/VITS-fast-fine-tuning_nymph/commons.py deleted file mode 100644 index db17cf0914ba6e445fe613e3ec3411b3a74b28aa..0000000000000000000000000000000000000000 --- a/spaces/Ikaros521/VITS-fast-fine-tuning_nymph/commons.py +++ /dev/null @@ -1,164 +0,0 @@ -import math -import numpy as np -import torch -from torch import nn -from torch.nn import functional as F - - -def init_weights(m, mean=0.0, std=0.01): - classname = m.__class__.__name__ - if classname.find("Conv") != -1: - m.weight.data.normal_(mean, std) - - -def get_padding(kernel_size, dilation=1): - return int((kernel_size*dilation - dilation)/2) - - -def convert_pad_shape(pad_shape): - l = pad_shape[::-1] - pad_shape = [item for sublist in l for item in sublist] - return pad_shape - - -def intersperse(lst, item): - result = [item] * (len(lst) * 2 + 1) - result[1::2] = lst - return result - - -def kl_divergence(m_p, logs_p, m_q, logs_q): - """KL(P||Q)""" - kl = (logs_q - logs_p) - 0.5 - kl += 0.5 * (torch.exp(2. * logs_p) + ((m_p - m_q)**2)) * torch.exp(-2. 
* logs_q) - return kl - - -def rand_gumbel(shape): - """Sample from the Gumbel distribution, protect from overflows.""" - uniform_samples = torch.rand(shape) * 0.99998 + 0.00001 - return -torch.log(-torch.log(uniform_samples)) - - -def rand_gumbel_like(x): - g = rand_gumbel(x.size()).to(dtype=x.dtype, device=x.device) - return g - - -def slice_segments(x, ids_str, segment_size=4): - ret = torch.zeros_like(x[:, :, :segment_size]) - for i in range(x.size(0)): - idx_str = ids_str[i] - idx_end = idx_str + segment_size - try: - ret[i] = x[i, :, idx_str:idx_end] - except RuntimeError: - print("?") - return ret - - -def rand_slice_segments(x, x_lengths=None, segment_size=4): - b, d, t = x.size() - if x_lengths is None: - x_lengths = t - ids_str_max = x_lengths - segment_size + 1 - ids_str = (torch.rand([b]).to(device=x.device) * ids_str_max).to(dtype=torch.long) - ret = slice_segments(x, ids_str, segment_size) - return ret, ids_str - - -def get_timing_signal_1d( - length, channels, min_timescale=1.0, max_timescale=1.0e4): - position = torch.arange(length, dtype=torch.float) - num_timescales = channels // 2 - log_timescale_increment = ( - math.log(float(max_timescale) / float(min_timescale)) / - (num_timescales - 1)) - inv_timescales = min_timescale * torch.exp( - torch.arange(num_timescales, dtype=torch.float) * -log_timescale_increment) - scaled_time = position.unsqueeze(0) * inv_timescales.unsqueeze(1) - signal = torch.cat([torch.sin(scaled_time), torch.cos(scaled_time)], 0) - signal = F.pad(signal, [0, 0, 0, channels % 2]) - signal = signal.view(1, channels, length) - return signal - - -def add_timing_signal_1d(x, min_timescale=1.0, max_timescale=1.0e4): - b, channels, length = x.size() - signal = get_timing_signal_1d(length, channels, min_timescale, max_timescale) - return x + signal.to(dtype=x.dtype, device=x.device) - - -def cat_timing_signal_1d(x, min_timescale=1.0, max_timescale=1.0e4, axis=1): - b, channels, length = x.size() - signal = get_timing_signal_1d(length, channels, min_timescale, max_timescale) - return torch.cat([x, signal.to(dtype=x.dtype, device=x.device)], axis) - - -def subsequent_mask(length): - mask = torch.tril(torch.ones(length, length)).unsqueeze(0).unsqueeze(0) - return mask - - -@torch.jit.script -def fused_add_tanh_sigmoid_multiply(input_a, input_b, n_channels): - n_channels_int = n_channels[0] - in_act = input_a + input_b - t_act = torch.tanh(in_act[:, :n_channels_int, :]) - s_act = torch.sigmoid(in_act[:, n_channels_int:, :]) - acts = t_act * s_act - return acts - - -def convert_pad_shape(pad_shape): - l = pad_shape[::-1] - pad_shape = [item for sublist in l for item in sublist] - return pad_shape - - -def shift_1d(x): - x = F.pad(x, convert_pad_shape([[0, 0], [0, 0], [1, 0]]))[:, :, :-1] - return x - - -def sequence_mask(length, max_length=None): - if max_length is None: - max_length = length.max() - x = torch.arange(max_length, dtype=length.dtype, device=length.device) - return x.unsqueeze(0) < length.unsqueeze(1) - - -def generate_path(duration, mask): - """ - duration: [b, 1, t_x] - mask: [b, 1, t_y, t_x] - """ - device = duration.device - - b, _, t_y, t_x = mask.shape - cum_duration = torch.cumsum(duration, -1) - - cum_duration_flat = cum_duration.view(b * t_x) - path = sequence_mask(cum_duration_flat, t_y).to(mask.dtype) - path = path.view(b, t_x, t_y) - path = path - F.pad(path, convert_pad_shape([[0, 0], [1, 0], [0, 0]]))[:, :-1] - path = path.unsqueeze(1).transpose(2,3) * mask - return path - - -def clip_grad_value_(parameters, clip_value, norm_type=2): 
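# --- Illustrative sketch, not part of the original file: what sequence_mask and
# generate_path above produce on tiny inputs. sequence_mask turns lengths into a
# boolean padding mask; generate_path expands per-phoneme durations into a hard
# monotonic text-to-frame alignment via the cumulative-sum-and-difference trick.
import torch
import torch.nn.functional as F

lengths = torch.tensor([2, 4])
mask = torch.arange(4)[None, :] < lengths[:, None]
print(mask)
# tensor([[ True,  True, False, False],
#         [ True,  True,  True,  True]])

duration = torch.tensor([[2, 3]])     # phoneme 0 -> 2 frames, phoneme 1 -> 3 frames
cum = torch.cumsum(duration, dim=-1)  # [2, 5]
path = (torch.arange(5)[None, None, :] < cum[:, :, None]).int()
path = path - F.pad(path, (0, 0, 1, 0))[:, :-1]  # difference along the text axis
print(path)
# tensor([[[1, 1, 0, 0, 0],
#          [0, 0, 1, 1, 1]]])  -> row i marks the frames assigned to phoneme i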
- if isinstance(parameters, torch.Tensor): - parameters = [parameters] - parameters = list(filter(lambda p: p.grad is not None, parameters)) - norm_type = float(norm_type) - if clip_value is not None: - clip_value = float(clip_value) - - total_norm = 0 - for p in parameters: - param_norm = p.grad.data.norm(norm_type) - total_norm += param_norm.item() ** norm_type - if clip_value is not None: - p.grad.data.clamp_(min=-clip_value, max=clip_value) - total_norm = total_norm ** (1. / norm_type) - return total_norm diff --git a/spaces/InpaintAI/Inpaint-Anything/third_party/segment-anything/linter.sh b/spaces/InpaintAI/Inpaint-Anything/third_party/segment-anything/linter.sh deleted file mode 100644 index df2e17436d30e89ff1728109301599f425f1ad6b..0000000000000000000000000000000000000000 --- a/spaces/InpaintAI/Inpaint-Anything/third_party/segment-anything/linter.sh +++ /dev/null @@ -1,32 +0,0 @@ -#!/bin/bash -e -# Copyright (c) Facebook, Inc. and its affiliates. - -{ - black --version | grep -E "23\." > /dev/null -} || { - echo "Linter requires 'black==23.*' !" - exit 1 -} - -ISORT_VERSION=$(isort --version-number) -if [[ "$ISORT_VERSION" != 5.12* ]]; then - echo "Linter requires isort==5.12.0 !" - exit 1 -fi - -echo "Running isort ..." -isort . --atomic - -echo "Running black ..." -black -l 100 . - -echo "Running flake8 ..." -if [ -x "$(command -v flake8)" ]; then - flake8 . -else - python3 -m flake8 . -fi - -echo "Running mypy..." - -mypy --exclude 'setup.py|notebooks' . diff --git a/spaces/Iqbalzz/hololive-rvc-models/infer_pack/models.py b/spaces/Iqbalzz/hololive-rvc-models/infer_pack/models.py deleted file mode 100644 index 5e4b2e72383efaee1fae4f5c42e3db2c627e4190..0000000000000000000000000000000000000000 --- a/spaces/Iqbalzz/hololive-rvc-models/infer_pack/models.py +++ /dev/null @@ -1,1124 +0,0 @@ -import math, pdb, os -from time import time as ttime -import torch -from torch import nn -from torch.nn import functional as F -from infer_pack import modules -from infer_pack import attentions -from infer_pack import commons -from infer_pack.commons import init_weights, get_padding -from torch.nn import Conv1d, ConvTranspose1d, AvgPool1d, Conv2d -from torch.nn.utils import weight_norm, remove_weight_norm, spectral_norm -from infer_pack.commons import init_weights -import numpy as np -from infer_pack import commons - - -class TextEncoder256(nn.Module): - def __init__( - self, - out_channels, - hidden_channels, - filter_channels, - n_heads, - n_layers, - kernel_size, - p_dropout, - f0=True, - ): - super().__init__() - self.out_channels = out_channels - self.hidden_channels = hidden_channels - self.filter_channels = filter_channels - self.n_heads = n_heads - self.n_layers = n_layers - self.kernel_size = kernel_size - self.p_dropout = p_dropout - self.emb_phone = nn.Linear(256, hidden_channels) - self.lrelu = nn.LeakyReLU(0.1, inplace=True) - if f0 == True: - self.emb_pitch = nn.Embedding(256, hidden_channels) # pitch 256 - self.encoder = attentions.Encoder( - hidden_channels, filter_channels, n_heads, n_layers, kernel_size, p_dropout - ) - self.proj = nn.Conv1d(hidden_channels, out_channels * 2, 1) - - def forward(self, phone, pitch, lengths): - if pitch == None: - x = self.emb_phone(phone) - else: - x = self.emb_phone(phone) + self.emb_pitch(pitch) - x = x * math.sqrt(self.hidden_channels) # [b, t, h] - x = self.lrelu(x) - x = torch.transpose(x, 1, -1) # [b, h, t] - x_mask = torch.unsqueeze(commons.sequence_mask(lengths, x.size(2)), 1).to( - x.dtype - ) - x = self.encoder(x * x_mask, x_mask) - stats 
= self.proj(x) * x_mask - - m, logs = torch.split(stats, self.out_channels, dim=1) - return m, logs, x_mask - - -class TextEncoder768(nn.Module): - def __init__( - self, - out_channels, - hidden_channels, - filter_channels, - n_heads, - n_layers, - kernel_size, - p_dropout, - f0=True, - ): - super().__init__() - self.out_channels = out_channels - self.hidden_channels = hidden_channels - self.filter_channels = filter_channels - self.n_heads = n_heads - self.n_layers = n_layers - self.kernel_size = kernel_size - self.p_dropout = p_dropout - self.emb_phone = nn.Linear(768, hidden_channels) - self.lrelu = nn.LeakyReLU(0.1, inplace=True) - if f0 == True: - self.emb_pitch = nn.Embedding(256, hidden_channels) # pitch 256 - self.encoder = attentions.Encoder( - hidden_channels, filter_channels, n_heads, n_layers, kernel_size, p_dropout - ) - self.proj = nn.Conv1d(hidden_channels, out_channels * 2, 1) - - def forward(self, phone, pitch, lengths): - if pitch == None: - x = self.emb_phone(phone) - else: - x = self.emb_phone(phone) + self.emb_pitch(pitch) - x = x * math.sqrt(self.hidden_channels) # [b, t, h] - x = self.lrelu(x) - x = torch.transpose(x, 1, -1) # [b, h, t] - x_mask = torch.unsqueeze(commons.sequence_mask(lengths, x.size(2)), 1).to( - x.dtype - ) - x = self.encoder(x * x_mask, x_mask) - stats = self.proj(x) * x_mask - - m, logs = torch.split(stats, self.out_channels, dim=1) - return m, logs, x_mask - - -class ResidualCouplingBlock(nn.Module): - def __init__( - self, - channels, - hidden_channels, - kernel_size, - dilation_rate, - n_layers, - n_flows=4, - gin_channels=0, - ): - super().__init__() - self.channels = channels - self.hidden_channels = hidden_channels - self.kernel_size = kernel_size - self.dilation_rate = dilation_rate - self.n_layers = n_layers - self.n_flows = n_flows - self.gin_channels = gin_channels - - self.flows = nn.ModuleList() - for i in range(n_flows): - self.flows.append( - modules.ResidualCouplingLayer( - channels, - hidden_channels, - kernel_size, - dilation_rate, - n_layers, - gin_channels=gin_channels, - mean_only=True, - ) - ) - self.flows.append(modules.Flip()) - - def forward(self, x, x_mask, g=None, reverse=False): - if not reverse: - for flow in self.flows: - x, _ = flow(x, x_mask, g=g, reverse=reverse) - else: - for flow in reversed(self.flows): - x = flow(x, x_mask, g=g, reverse=reverse) - return x - - def remove_weight_norm(self): - for i in range(self.n_flows): - self.flows[i * 2].remove_weight_norm() - - -class PosteriorEncoder(nn.Module): - def __init__( - self, - in_channels, - out_channels, - hidden_channels, - kernel_size, - dilation_rate, - n_layers, - gin_channels=0, - ): - super().__init__() - self.in_channels = in_channels - self.out_channels = out_channels - self.hidden_channels = hidden_channels - self.kernel_size = kernel_size - self.dilation_rate = dilation_rate - self.n_layers = n_layers - self.gin_channels = gin_channels - - self.pre = nn.Conv1d(in_channels, hidden_channels, 1) - self.enc = modules.WN( - hidden_channels, - kernel_size, - dilation_rate, - n_layers, - gin_channels=gin_channels, - ) - self.proj = nn.Conv1d(hidden_channels, out_channels * 2, 1) - - def forward(self, x, x_lengths, g=None): - x_mask = torch.unsqueeze(commons.sequence_mask(x_lengths, x.size(2)), 1).to( - x.dtype - ) - x = self.pre(x) * x_mask - x = self.enc(x, x_mask, g=g) - stats = self.proj(x) * x_mask - m, logs = torch.split(stats, self.out_channels, dim=1) - z = (m + torch.randn_like(m) * torch.exp(logs)) * x_mask - return z, m, logs, x_mask - - def 
remove_weight_norm(self): - self.enc.remove_weight_norm() - - -class Generator(torch.nn.Module): - def __init__( - self, - initial_channel, - resblock, - resblock_kernel_sizes, - resblock_dilation_sizes, - upsample_rates, - upsample_initial_channel, - upsample_kernel_sizes, - gin_channels=0, - ): - super(Generator, self).__init__() - self.num_kernels = len(resblock_kernel_sizes) - self.num_upsamples = len(upsample_rates) - self.conv_pre = Conv1d( - initial_channel, upsample_initial_channel, 7, 1, padding=3 - ) - resblock = modules.ResBlock1 if resblock == "1" else modules.ResBlock2 - - self.ups = nn.ModuleList() - for i, (u, k) in enumerate(zip(upsample_rates, upsample_kernel_sizes)): - self.ups.append( - weight_norm( - ConvTranspose1d( - upsample_initial_channel // (2**i), - upsample_initial_channel // (2 ** (i + 1)), - k, - u, - padding=(k - u) // 2, - ) - ) - ) - - self.resblocks = nn.ModuleList() - for i in range(len(self.ups)): - ch = upsample_initial_channel // (2 ** (i + 1)) - for j, (k, d) in enumerate( - zip(resblock_kernel_sizes, resblock_dilation_sizes) - ): - self.resblocks.append(resblock(ch, k, d)) - - self.conv_post = Conv1d(ch, 1, 7, 1, padding=3, bias=False) - self.ups.apply(init_weights) - - if gin_channels != 0: - self.cond = nn.Conv1d(gin_channels, upsample_initial_channel, 1) - - def forward(self, x, g=None): - x = self.conv_pre(x) - if g is not None: - x = x + self.cond(g) - - for i in range(self.num_upsamples): - x = F.leaky_relu(x, modules.LRELU_SLOPE) - x = self.ups[i](x) - xs = None - for j in range(self.num_kernels): - if xs is None: - xs = self.resblocks[i * self.num_kernels + j](x) - else: - xs += self.resblocks[i * self.num_kernels + j](x) - x = xs / self.num_kernels - x = F.leaky_relu(x) - x = self.conv_post(x) - x = torch.tanh(x) - - return x - - def remove_weight_norm(self): - for l in self.ups: - remove_weight_norm(l) - for l in self.resblocks: - l.remove_weight_norm() - - -class SineGen(torch.nn.Module): - """Definition of sine generator - SineGen(samp_rate, harmonic_num = 0, - sine_amp = 0.1, noise_std = 0.003, - voiced_threshold = 0, - flag_for_pulse=False) - samp_rate: sampling rate in Hz - harmonic_num: number of harmonic overtones (default 0) - sine_amp: amplitude of sine-wavefrom (default 0.1) - noise_std: std of Gaussian noise (default 0.003) - voiced_thoreshold: F0 threshold for U/V classification (default 0) - flag_for_pulse: this SinGen is used inside PulseGen (default False) - Note: when flag_for_pulse is True, the first time step of a voiced - segment is always sin(np.pi) or cos(0) - """ - - def __init__( - self, - samp_rate, - harmonic_num=0, - sine_amp=0.1, - noise_std=0.003, - voiced_threshold=0, - flag_for_pulse=False, - ): - super(SineGen, self).__init__() - self.sine_amp = sine_amp - self.noise_std = noise_std - self.harmonic_num = harmonic_num - self.dim = self.harmonic_num + 1 - self.sampling_rate = samp_rate - self.voiced_threshold = voiced_threshold - - def _f02uv(self, f0): - # generate uv signal - uv = torch.ones_like(f0) - uv = uv * (f0 > self.voiced_threshold) - return uv - - def forward(self, f0, upp): - """sine_tensor, uv = forward(f0) - input F0: tensor(batchsize=1, length, dim=1) - f0 for unvoiced steps should be 0 - output sine_tensor: tensor(batchsize=1, length, dim) - output uv: tensor(batchsize=1, length, 1) - """ - with torch.no_grad(): - f0 = f0[:, None].transpose(1, 2) - f0_buf = torch.zeros(f0.shape[0], f0.shape[1], self.dim, device=f0.device) - # fundamental component - f0_buf[:, :, 0] = f0[:, :, 0] - for idx in 
np.arange(self.harmonic_num): - f0_buf[:, :, idx + 1] = f0_buf[:, :, 0] * ( - idx + 2 - ) # idx + 2: the (idx+1)-th overtone, (idx+2)-th harmonic - rad_values = (f0_buf / self.sampling_rate) % 1 ###%1意味着n_har的乘积无法后处理优化 - rand_ini = torch.rand( - f0_buf.shape[0], f0_buf.shape[2], device=f0_buf.device - ) - rand_ini[:, 0] = 0 - rad_values[:, 0, :] = rad_values[:, 0, :] + rand_ini - tmp_over_one = torch.cumsum(rad_values, 1) # % 1 #####%1意味着后面的cumsum无法再优化 - tmp_over_one *= upp - tmp_over_one = F.interpolate( - tmp_over_one.transpose(2, 1), - scale_factor=upp, - mode="linear", - align_corners=True, - ).transpose(2, 1) - rad_values = F.interpolate( - rad_values.transpose(2, 1), scale_factor=upp, mode="nearest" - ).transpose( - 2, 1 - ) ####### - tmp_over_one %= 1 - tmp_over_one_idx = (tmp_over_one[:, 1:, :] - tmp_over_one[:, :-1, :]) < 0 - cumsum_shift = torch.zeros_like(rad_values) - cumsum_shift[:, 1:, :] = tmp_over_one_idx * -1.0 - sine_waves = torch.sin( - torch.cumsum(rad_values + cumsum_shift, dim=1) * 2 * np.pi - ) - sine_waves = sine_waves * self.sine_amp - uv = self._f02uv(f0) - uv = F.interpolate( - uv.transpose(2, 1), scale_factor=upp, mode="nearest" - ).transpose(2, 1) - noise_amp = uv * self.noise_std + (1 - uv) * self.sine_amp / 3 - noise = noise_amp * torch.randn_like(sine_waves) - sine_waves = sine_waves * uv + noise - return sine_waves, uv, noise - - -class SourceModuleHnNSF(torch.nn.Module): - """SourceModule for hn-nsf - SourceModule(sampling_rate, harmonic_num=0, sine_amp=0.1, - add_noise_std=0.003, voiced_threshod=0) - sampling_rate: sampling_rate in Hz - harmonic_num: number of harmonic above F0 (default: 0) - sine_amp: amplitude of sine source signal (default: 0.1) - add_noise_std: std of additive Gaussian noise (default: 0.003) - note that amplitude of noise in unvoiced is decided - by sine_amp - voiced_threshold: threhold to set U/V given F0 (default: 0) - Sine_source, noise_source = SourceModuleHnNSF(F0_sampled) - F0_sampled (batchsize, length, 1) - Sine_source (batchsize, length, 1) - noise_source (batchsize, length 1) - uv (batchsize, length, 1) - """ - - def __init__( - self, - sampling_rate, - harmonic_num=0, - sine_amp=0.1, - add_noise_std=0.003, - voiced_threshod=0, - is_half=True, - ): - super(SourceModuleHnNSF, self).__init__() - - self.sine_amp = sine_amp - self.noise_std = add_noise_std - self.is_half = is_half - # to produce sine waveforms - self.l_sin_gen = SineGen( - sampling_rate, harmonic_num, sine_amp, add_noise_std, voiced_threshod - ) - - # to merge source harmonics into a single excitation - self.l_linear = torch.nn.Linear(harmonic_num + 1, 1) - self.l_tanh = torch.nn.Tanh() - - def forward(self, x, upp=None): - sine_wavs, uv, _ = self.l_sin_gen(x, upp) - if self.is_half: - sine_wavs = sine_wavs.half() - sine_merge = self.l_tanh(self.l_linear(sine_wavs)) - return sine_merge, None, None # noise, uv - - -class GeneratorNSF(torch.nn.Module): - def __init__( - self, - initial_channel, - resblock, - resblock_kernel_sizes, - resblock_dilation_sizes, - upsample_rates, - upsample_initial_channel, - upsample_kernel_sizes, - gin_channels, - sr, - is_half=False, - ): - super(GeneratorNSF, self).__init__() - self.num_kernels = len(resblock_kernel_sizes) - self.num_upsamples = len(upsample_rates) - - self.f0_upsamp = torch.nn.Upsample(scale_factor=np.prod(upsample_rates)) - self.m_source = SourceModuleHnNSF( - sampling_rate=sr, harmonic_num=0, is_half=is_half - ) - self.noise_convs = nn.ModuleList() - self.conv_pre = Conv1d( - initial_channel, 
upsample_initial_channel, 7, 1, padding=3 - ) - resblock = modules.ResBlock1 if resblock == "1" else modules.ResBlock2 - - self.ups = nn.ModuleList() - for i, (u, k) in enumerate(zip(upsample_rates, upsample_kernel_sizes)): - c_cur = upsample_initial_channel // (2 ** (i + 1)) - self.ups.append( - weight_norm( - ConvTranspose1d( - upsample_initial_channel // (2**i), - upsample_initial_channel // (2 ** (i + 1)), - k, - u, - padding=(k - u) // 2, - ) - ) - ) - if i + 1 < len(upsample_rates): - stride_f0 = np.prod(upsample_rates[i + 1 :]) - self.noise_convs.append( - Conv1d( - 1, - c_cur, - kernel_size=stride_f0 * 2, - stride=stride_f0, - padding=stride_f0 // 2, - ) - ) - else: - self.noise_convs.append(Conv1d(1, c_cur, kernel_size=1)) - - self.resblocks = nn.ModuleList() - for i in range(len(self.ups)): - ch = upsample_initial_channel // (2 ** (i + 1)) - for j, (k, d) in enumerate( - zip(resblock_kernel_sizes, resblock_dilation_sizes) - ): - self.resblocks.append(resblock(ch, k, d)) - - self.conv_post = Conv1d(ch, 1, 7, 1, padding=3, bias=False) - self.ups.apply(init_weights) - - if gin_channels != 0: - self.cond = nn.Conv1d(gin_channels, upsample_initial_channel, 1) - - self.upp = np.prod(upsample_rates) - - def forward(self, x, f0, g=None): - har_source, noi_source, uv = self.m_source(f0, self.upp) - har_source = har_source.transpose(1, 2) - x = self.conv_pre(x) - if g is not None: - x = x + self.cond(g) - - for i in range(self.num_upsamples): - x = F.leaky_relu(x, modules.LRELU_SLOPE) - x = self.ups[i](x) - x_source = self.noise_convs[i](har_source) - x = x + x_source - xs = None - for j in range(self.num_kernels): - if xs is None: - xs = self.resblocks[i * self.num_kernels + j](x) - else: - xs += self.resblocks[i * self.num_kernels + j](x) - x = xs / self.num_kernels - x = F.leaky_relu(x) - x = self.conv_post(x) - x = torch.tanh(x) - return x - - def remove_weight_norm(self): - for l in self.ups: - remove_weight_norm(l) - for l in self.resblocks: - l.remove_weight_norm() - - -sr2sr = { - "32k": 32000, - "40k": 40000, - "48k": 48000, -} - - -class SynthesizerTrnMs256NSFsid(nn.Module): - def __init__( - self, - spec_channels, - segment_size, - inter_channels, - hidden_channels, - filter_channels, - n_heads, - n_layers, - kernel_size, - p_dropout, - resblock, - resblock_kernel_sizes, - resblock_dilation_sizes, - upsample_rates, - upsample_initial_channel, - upsample_kernel_sizes, - spk_embed_dim, - gin_channels, - sr, - **kwargs - ): - super().__init__() - if type(sr) == type("strr"): - sr = sr2sr[sr] - self.spec_channels = spec_channels - self.inter_channels = inter_channels - self.hidden_channels = hidden_channels - self.filter_channels = filter_channels - self.n_heads = n_heads - self.n_layers = n_layers - self.kernel_size = kernel_size - self.p_dropout = p_dropout - self.resblock = resblock - self.resblock_kernel_sizes = resblock_kernel_sizes - self.resblock_dilation_sizes = resblock_dilation_sizes - self.upsample_rates = upsample_rates - self.upsample_initial_channel = upsample_initial_channel - self.upsample_kernel_sizes = upsample_kernel_sizes - self.segment_size = segment_size - self.gin_channels = gin_channels - # self.hop_length = hop_length# - self.spk_embed_dim = spk_embed_dim - self.enc_p = TextEncoder256( - inter_channels, - hidden_channels, - filter_channels, - n_heads, - n_layers, - kernel_size, - p_dropout, - ) - self.dec = GeneratorNSF( - inter_channels, - resblock, - resblock_kernel_sizes, - resblock_dilation_sizes, - upsample_rates, - upsample_initial_channel, - 
upsample_kernel_sizes, - gin_channels=gin_channels, - sr=sr, - is_half=kwargs["is_half"], - ) - self.enc_q = PosteriorEncoder( - spec_channels, - inter_channels, - hidden_channels, - 5, - 1, - 16, - gin_channels=gin_channels, - ) - self.flow = ResidualCouplingBlock( - inter_channels, hidden_channels, 5, 1, 3, gin_channels=gin_channels - ) - self.emb_g = nn.Embedding(self.spk_embed_dim, gin_channels) - print("gin_channels:", gin_channels, "self.spk_embed_dim:", self.spk_embed_dim) - - def remove_weight_norm(self): - self.dec.remove_weight_norm() - self.flow.remove_weight_norm() - self.enc_q.remove_weight_norm() - - def forward( - self, phone, phone_lengths, pitch, pitchf, y, y_lengths, ds - ): # 这里ds是id,[bs,1] - # print(1,pitch.shape)#[bs,t] - g = self.emb_g(ds).unsqueeze(-1) # [b, 256, 1]##1是t,广播的 - m_p, logs_p, x_mask = self.enc_p(phone, pitch, phone_lengths) - z, m_q, logs_q, y_mask = self.enc_q(y, y_lengths, g=g) - z_p = self.flow(z, y_mask, g=g) - z_slice, ids_slice = commons.rand_slice_segments( - z, y_lengths, self.segment_size - ) - # print(-1,pitchf.shape,ids_slice,self.segment_size,self.hop_length,self.segment_size//self.hop_length) - pitchf = commons.slice_segments2(pitchf, ids_slice, self.segment_size) - # print(-2,pitchf.shape,z_slice.shape) - o = self.dec(z_slice, pitchf, g=g) - return o, ids_slice, x_mask, y_mask, (z, z_p, m_p, logs_p, m_q, logs_q) - - def infer(self, phone, phone_lengths, pitch, nsff0, sid, max_len=None): - g = self.emb_g(sid).unsqueeze(-1) - m_p, logs_p, x_mask = self.enc_p(phone, pitch, phone_lengths) - z_p = (m_p + torch.exp(logs_p) * torch.randn_like(m_p) * 0.66666) * x_mask - z = self.flow(z_p, x_mask, g=g, reverse=True) - o = self.dec((z * x_mask)[:, :, :max_len], nsff0, g=g) - return o, x_mask, (z, z_p, m_p, logs_p) - - -class SynthesizerTrnMs768NSFsid(nn.Module): - def __init__( - self, - spec_channels, - segment_size, - inter_channels, - hidden_channels, - filter_channels, - n_heads, - n_layers, - kernel_size, - p_dropout, - resblock, - resblock_kernel_sizes, - resblock_dilation_sizes, - upsample_rates, - upsample_initial_channel, - upsample_kernel_sizes, - spk_embed_dim, - gin_channels, - sr, - **kwargs - ): - super().__init__() - if type(sr) == type("strr"): - sr = sr2sr[sr] - self.spec_channels = spec_channels - self.inter_channels = inter_channels - self.hidden_channels = hidden_channels - self.filter_channels = filter_channels - self.n_heads = n_heads - self.n_layers = n_layers - self.kernel_size = kernel_size - self.p_dropout = p_dropout - self.resblock = resblock - self.resblock_kernel_sizes = resblock_kernel_sizes - self.resblock_dilation_sizes = resblock_dilation_sizes - self.upsample_rates = upsample_rates - self.upsample_initial_channel = upsample_initial_channel - self.upsample_kernel_sizes = upsample_kernel_sizes - self.segment_size = segment_size - self.gin_channels = gin_channels - # self.hop_length = hop_length# - self.spk_embed_dim = spk_embed_dim - self.enc_p = TextEncoder768( - inter_channels, - hidden_channels, - filter_channels, - n_heads, - n_layers, - kernel_size, - p_dropout, - ) - self.dec = GeneratorNSF( - inter_channels, - resblock, - resblock_kernel_sizes, - resblock_dilation_sizes, - upsample_rates, - upsample_initial_channel, - upsample_kernel_sizes, - gin_channels=gin_channels, - sr=sr, - is_half=kwargs["is_half"], - ) - self.enc_q = PosteriorEncoder( - spec_channels, - inter_channels, - hidden_channels, - 5, - 1, - 16, - gin_channels=gin_channels, - ) - self.flow = ResidualCouplingBlock( - inter_channels, 
hidden_channels, 5, 1, 3, gin_channels=gin_channels - ) - self.emb_g = nn.Embedding(self.spk_embed_dim, gin_channels) - print("gin_channels:", gin_channels, "self.spk_embed_dim:", self.spk_embed_dim) - - def remove_weight_norm(self): - self.dec.remove_weight_norm() - self.flow.remove_weight_norm() - self.enc_q.remove_weight_norm() - - def forward( - self, phone, phone_lengths, pitch, pitchf, y, y_lengths, ds - ): # 这里ds是id,[bs,1] - # print(1,pitch.shape)#[bs,t] - g = self.emb_g(ds).unsqueeze(-1) # [b, 256, 1]##1是t,广播的 - m_p, logs_p, x_mask = self.enc_p(phone, pitch, phone_lengths) - z, m_q, logs_q, y_mask = self.enc_q(y, y_lengths, g=g) - z_p = self.flow(z, y_mask, g=g) - z_slice, ids_slice = commons.rand_slice_segments( - z, y_lengths, self.segment_size - ) - # print(-1,pitchf.shape,ids_slice,self.segment_size,self.hop_length,self.segment_size//self.hop_length) - pitchf = commons.slice_segments2(pitchf, ids_slice, self.segment_size) - # print(-2,pitchf.shape,z_slice.shape) - o = self.dec(z_slice, pitchf, g=g) - return o, ids_slice, x_mask, y_mask, (z, z_p, m_p, logs_p, m_q, logs_q) - - def infer(self, phone, phone_lengths, pitch, nsff0, sid, max_len=None): - g = self.emb_g(sid).unsqueeze(-1) - m_p, logs_p, x_mask = self.enc_p(phone, pitch, phone_lengths) - z_p = (m_p + torch.exp(logs_p) * torch.randn_like(m_p) * 0.66666) * x_mask - z = self.flow(z_p, x_mask, g=g, reverse=True) - o = self.dec((z * x_mask)[:, :, :max_len], nsff0, g=g) - return o, x_mask, (z, z_p, m_p, logs_p) - - -class SynthesizerTrnMs256NSFsid_nono(nn.Module): - def __init__( - self, - spec_channels, - segment_size, - inter_channels, - hidden_channels, - filter_channels, - n_heads, - n_layers, - kernel_size, - p_dropout, - resblock, - resblock_kernel_sizes, - resblock_dilation_sizes, - upsample_rates, - upsample_initial_channel, - upsample_kernel_sizes, - spk_embed_dim, - gin_channels, - sr=None, - **kwargs - ): - super().__init__() - self.spec_channels = spec_channels - self.inter_channels = inter_channels - self.hidden_channels = hidden_channels - self.filter_channels = filter_channels - self.n_heads = n_heads - self.n_layers = n_layers - self.kernel_size = kernel_size - self.p_dropout = p_dropout - self.resblock = resblock - self.resblock_kernel_sizes = resblock_kernel_sizes - self.resblock_dilation_sizes = resblock_dilation_sizes - self.upsample_rates = upsample_rates - self.upsample_initial_channel = upsample_initial_channel - self.upsample_kernel_sizes = upsample_kernel_sizes - self.segment_size = segment_size - self.gin_channels = gin_channels - # self.hop_length = hop_length# - self.spk_embed_dim = spk_embed_dim - self.enc_p = TextEncoder256( - inter_channels, - hidden_channels, - filter_channels, - n_heads, - n_layers, - kernel_size, - p_dropout, - f0=False, - ) - self.dec = Generator( - inter_channels, - resblock, - resblock_kernel_sizes, - resblock_dilation_sizes, - upsample_rates, - upsample_initial_channel, - upsample_kernel_sizes, - gin_channels=gin_channels, - ) - self.enc_q = PosteriorEncoder( - spec_channels, - inter_channels, - hidden_channels, - 5, - 1, - 16, - gin_channels=gin_channels, - ) - self.flow = ResidualCouplingBlock( - inter_channels, hidden_channels, 5, 1, 3, gin_channels=gin_channels - ) - self.emb_g = nn.Embedding(self.spk_embed_dim, gin_channels) - print("gin_channels:", gin_channels, "self.spk_embed_dim:", self.spk_embed_dim) - - def remove_weight_norm(self): - self.dec.remove_weight_norm() - self.flow.remove_weight_norm() - self.enc_q.remove_weight_norm() - - def forward(self, phone, 
phone_lengths, y, y_lengths, ds): # 这里ds是id,[bs,1] - g = self.emb_g(ds).unsqueeze(-1) # [b, 256, 1]##1是t,广播的 - m_p, logs_p, x_mask = self.enc_p(phone, None, phone_lengths) - z, m_q, logs_q, y_mask = self.enc_q(y, y_lengths, g=g) - z_p = self.flow(z, y_mask, g=g) - z_slice, ids_slice = commons.rand_slice_segments( - z, y_lengths, self.segment_size - ) - o = self.dec(z_slice, g=g) - return o, ids_slice, x_mask, y_mask, (z, z_p, m_p, logs_p, m_q, logs_q) - - def infer(self, phone, phone_lengths, sid, max_len=None): - g = self.emb_g(sid).unsqueeze(-1) - m_p, logs_p, x_mask = self.enc_p(phone, None, phone_lengths) - z_p = (m_p + torch.exp(logs_p) * torch.randn_like(m_p) * 0.66666) * x_mask - z = self.flow(z_p, x_mask, g=g, reverse=True) - o = self.dec((z * x_mask)[:, :, :max_len], g=g) - return o, x_mask, (z, z_p, m_p, logs_p) - - -class SynthesizerTrnMs768NSFsid_nono(nn.Module): - def __init__( - self, - spec_channels, - segment_size, - inter_channels, - hidden_channels, - filter_channels, - n_heads, - n_layers, - kernel_size, - p_dropout, - resblock, - resblock_kernel_sizes, - resblock_dilation_sizes, - upsample_rates, - upsample_initial_channel, - upsample_kernel_sizes, - spk_embed_dim, - gin_channels, - sr=None, - **kwargs - ): - super().__init__() - self.spec_channels = spec_channels - self.inter_channels = inter_channels - self.hidden_channels = hidden_channels - self.filter_channels = filter_channels - self.n_heads = n_heads - self.n_layers = n_layers - self.kernel_size = kernel_size - self.p_dropout = p_dropout - self.resblock = resblock - self.resblock_kernel_sizes = resblock_kernel_sizes - self.resblock_dilation_sizes = resblock_dilation_sizes - self.upsample_rates = upsample_rates - self.upsample_initial_channel = upsample_initial_channel - self.upsample_kernel_sizes = upsample_kernel_sizes - self.segment_size = segment_size - self.gin_channels = gin_channels - # self.hop_length = hop_length# - self.spk_embed_dim = spk_embed_dim - self.enc_p = TextEncoder768( - inter_channels, - hidden_channels, - filter_channels, - n_heads, - n_layers, - kernel_size, - p_dropout, - f0=False, - ) - self.dec = Generator( - inter_channels, - resblock, - resblock_kernel_sizes, - resblock_dilation_sizes, - upsample_rates, - upsample_initial_channel, - upsample_kernel_sizes, - gin_channels=gin_channels, - ) - self.enc_q = PosteriorEncoder( - spec_channels, - inter_channels, - hidden_channels, - 5, - 1, - 16, - gin_channels=gin_channels, - ) - self.flow = ResidualCouplingBlock( - inter_channels, hidden_channels, 5, 1, 3, gin_channels=gin_channels - ) - self.emb_g = nn.Embedding(self.spk_embed_dim, gin_channels) - print("gin_channels:", gin_channels, "self.spk_embed_dim:", self.spk_embed_dim) - - def remove_weight_norm(self): - self.dec.remove_weight_norm() - self.flow.remove_weight_norm() - self.enc_q.remove_weight_norm() - - def forward(self, phone, phone_lengths, y, y_lengths, ds): # 这里ds是id,[bs,1] - g = self.emb_g(ds).unsqueeze(-1) # [b, 256, 1]##1是t,广播的 - m_p, logs_p, x_mask = self.enc_p(phone, None, phone_lengths) - z, m_q, logs_q, y_mask = self.enc_q(y, y_lengths, g=g) - z_p = self.flow(z, y_mask, g=g) - z_slice, ids_slice = commons.rand_slice_segments( - z, y_lengths, self.segment_size - ) - o = self.dec(z_slice, g=g) - return o, ids_slice, x_mask, y_mask, (z, z_p, m_p, logs_p, m_q, logs_q) - - def infer(self, phone, phone_lengths, sid, max_len=None): - g = self.emb_g(sid).unsqueeze(-1) - m_p, logs_p, x_mask = self.enc_p(phone, None, phone_lengths) - z_p = (m_p + torch.exp(logs_p) * 
torch.randn_like(m_p) * 0.66666) * x_mask - z = self.flow(z_p, x_mask, g=g, reverse=True) - o = self.dec((z * x_mask)[:, :, :max_len], g=g) - return o, x_mask, (z, z_p, m_p, logs_p) - - -class MultiPeriodDiscriminator(torch.nn.Module): - def __init__(self, use_spectral_norm=False): - super(MultiPeriodDiscriminator, self).__init__() - periods = [2, 3, 5, 7, 11, 17] - # periods = [3, 5, 7, 11, 17, 23, 37] - - discs = [DiscriminatorS(use_spectral_norm=use_spectral_norm)] - discs = discs + [ - DiscriminatorP(i, use_spectral_norm=use_spectral_norm) for i in periods - ] - self.discriminators = nn.ModuleList(discs) - - def forward(self, y, y_hat): - y_d_rs = [] # - y_d_gs = [] - fmap_rs = [] - fmap_gs = [] - for i, d in enumerate(self.discriminators): - y_d_r, fmap_r = d(y) - y_d_g, fmap_g = d(y_hat) - # for j in range(len(fmap_r)): - # print(i,j,y.shape,y_hat.shape,fmap_r[j].shape,fmap_g[j].shape) - y_d_rs.append(y_d_r) - y_d_gs.append(y_d_g) - fmap_rs.append(fmap_r) - fmap_gs.append(fmap_g) - - return y_d_rs, y_d_gs, fmap_rs, fmap_gs - - -class MultiPeriodDiscriminatorV2(torch.nn.Module): - def __init__(self, use_spectral_norm=False): - super(MultiPeriodDiscriminatorV2, self).__init__() - # periods = [2, 3, 5, 7, 11, 17] - periods = [2, 3, 5, 7, 11, 17, 23, 37] - - discs = [DiscriminatorS(use_spectral_norm=use_spectral_norm)] - discs = discs + [ - DiscriminatorP(i, use_spectral_norm=use_spectral_norm) for i in periods - ] - self.discriminators = nn.ModuleList(discs) - - def forward(self, y, y_hat): - y_d_rs = [] # - y_d_gs = [] - fmap_rs = [] - fmap_gs = [] - for i, d in enumerate(self.discriminators): - y_d_r, fmap_r = d(y) - y_d_g, fmap_g = d(y_hat) - # for j in range(len(fmap_r)): - # print(i,j,y.shape,y_hat.shape,fmap_r[j].shape,fmap_g[j].shape) - y_d_rs.append(y_d_r) - y_d_gs.append(y_d_g) - fmap_rs.append(fmap_r) - fmap_gs.append(fmap_g) - - return y_d_rs, y_d_gs, fmap_rs, fmap_gs - - -class DiscriminatorS(torch.nn.Module): - def __init__(self, use_spectral_norm=False): - super(DiscriminatorS, self).__init__() - norm_f = weight_norm if use_spectral_norm == False else spectral_norm - self.convs = nn.ModuleList( - [ - norm_f(Conv1d(1, 16, 15, 1, padding=7)), - norm_f(Conv1d(16, 64, 41, 4, groups=4, padding=20)), - norm_f(Conv1d(64, 256, 41, 4, groups=16, padding=20)), - norm_f(Conv1d(256, 1024, 41, 4, groups=64, padding=20)), - norm_f(Conv1d(1024, 1024, 41, 4, groups=256, padding=20)), - norm_f(Conv1d(1024, 1024, 5, 1, padding=2)), - ] - ) - self.conv_post = norm_f(Conv1d(1024, 1, 3, 1, padding=1)) - - def forward(self, x): - fmap = [] - - for l in self.convs: - x = l(x) - x = F.leaky_relu(x, modules.LRELU_SLOPE) - fmap.append(x) - x = self.conv_post(x) - fmap.append(x) - x = torch.flatten(x, 1, -1) - - return x, fmap - - -class DiscriminatorP(torch.nn.Module): - def __init__(self, period, kernel_size=5, stride=3, use_spectral_norm=False): - super(DiscriminatorP, self).__init__() - self.period = period - self.use_spectral_norm = use_spectral_norm - norm_f = weight_norm if use_spectral_norm == False else spectral_norm - self.convs = nn.ModuleList( - [ - norm_f( - Conv2d( - 1, - 32, - (kernel_size, 1), - (stride, 1), - padding=(get_padding(kernel_size, 1), 0), - ) - ), - norm_f( - Conv2d( - 32, - 128, - (kernel_size, 1), - (stride, 1), - padding=(get_padding(kernel_size, 1), 0), - ) - ), - norm_f( - Conv2d( - 128, - 512, - (kernel_size, 1), - (stride, 1), - padding=(get_padding(kernel_size, 1), 0), - ) - ), - norm_f( - Conv2d( - 512, - 1024, - (kernel_size, 1), - (stride, 1), - 
padding=(get_padding(kernel_size, 1), 0),
-                    )
-                ),
-                norm_f(
-                    Conv2d(
-                        1024,
-                        1024,
-                        (kernel_size, 1),
-                        1,
-                        padding=(get_padding(kernel_size, 1), 0),
-                    )
-                ),
-            ]
-        )
-        self.conv_post = norm_f(Conv2d(1024, 1, (3, 1), 1, padding=(1, 0)))
-
-    def forward(self, x):
-        fmap = []
-
-        # 1d to 2d
-        b, c, t = x.shape
-        if t % self.period != 0:  # pad first
-            n_pad = self.period - (t % self.period)
-            x = F.pad(x, (0, n_pad), "reflect")
-            t = t + n_pad
-        x = x.view(b, c, t // self.period, self.period)
-
-        for l in self.convs:
-            x = l(x)
-            x = F.leaky_relu(x, modules.LRELU_SLOPE)
-            fmap.append(x)
-        x = self.conv_post(x)
-        fmap.append(x)
-        x = torch.flatten(x, 1, -1)
-
-        return x, fmap
diff --git a/spaces/JUNGU/VToonify/vtoonify/model/encoder/__init__.py b/spaces/JUNGU/VToonify/vtoonify/model/encoder/__init__.py
deleted file mode 100644
index e69de29bb2d1d6434b8b29ae775ad8c2e48c5391..0000000000000000000000000000000000000000
diff --git a/spaces/Jamos1/AI_gamer89-insta/app.py b/spaces/Jamos1/AI_gamer89-insta/app.py
deleted file mode 100644
index c3b950d79209e5e4b903442a861cc89227c1448e..0000000000000000000000000000000000000000
--- a/spaces/Jamos1/AI_gamer89-insta/app.py
+++ /dev/null
@@ -1,93 +0,0 @@
-import gradio as gr
-import whisper
-from pytube import YouTube
-
-
-class GradioInference():
-    def __init__(self):
-        self.sizes = list(whisper._MODELS.keys())
-        self.langs = ["none"] + sorted(list(whisper.tokenizer.LANGUAGES.values()))
-        self.current_size = "base"
-        self.loaded_model = whisper.load_model(self.current_size)
-        self.yt = None
-
-    def __call__(self, link, lang, size, subs):
-        if self.yt is None:
-            self.yt = YouTube(link)
-        path = self.yt.streams.filter(only_audio=True)[0].download(filename="tmp.mp4")
-
-        if lang == "none":
-            lang = None
-
-        if size != self.current_size:
-            self.loaded_model = whisper.load_model(size)
-            self.current_size = size
-        results = self.loaded_model.transcribe(path, language=lang)
-
-        if subs == "None":
-            return results["text"]
-        elif subs == ".srt":
-            return self.srt(results["segments"])
-        elif subs == ".csv":
-            return self.csv(results["segments"])
-
-    def srt(self, segments):
-        output = ""
-        for i, segment in enumerate(segments):
-            output += f"{i+1}\n"
-            output += f"{self.format_time(segment['start'])} --> {self.format_time(segment['end'])}\n"
-            output += f"{segment['text']}\n\n"
-        return output
-
-    def csv(self, segments):
-        output = ""
-        for segment in segments:
-            output += f"{segment['start']},{segment['end']},{segment['text']}\n"
-        return output
-
-    def format_time(self, time):
-        hours = time//3600
-        minutes = (time - hours*3600)//60
-        seconds = time - hours*3600 - minutes*60
-        milliseconds = (time - int(time))*1000
-        return f"{int(hours):02d}:{int(minutes):02d}:{int(seconds):02d},{int(milliseconds):03d}"
-
-    def populate_metadata(self, link):
-        self.yt = YouTube(link)
-        return self.yt.thumbnail_url, self.yt.title
-
-gio = GradioInference()
-title="Youtube Whisperer"
-description="Speech to text transcription of Youtube videos using OpenAI's Whisper"
-
-block = gr.Blocks()
-with block:
-    gr.HTML(
-        """
-        <div style="text-align: center; max-width: 500px; margin: 0 auto;">
-          <div>
-            <h1>Youtube Whisperer</h1>
-          </div>
-          <p>
-            Speech to text transcription of Youtube videos using OpenAI's Whisper
-          </p>
-        </div>
      - """ - ) - with gr.Group(): - with gr.Box(): - with gr.Row().style(equal_height=True): - sz = gr.Dropdown(label="Model Size", choices=gio.sizes, value='base') - lang = gr.Dropdown(label="Language (Optional)", choices=gio.langs, value="none") - with gr.Row().style(equal_height=True): - wt = gr.Radio(["None", ".srt", ".csv"], label="With Timestamps?") - link = gr.Textbox(label="YouTube Link") - title = gr.Label(label="Video Title") - with gr.Row().style(equal_height=True): - img = gr.Image(label="Thumbnail") - text = gr.Textbox(label="Transcription", placeholder="Transcription Output", lines=10) - with gr.Row().style(equal_height=True): - btn = gr.Button("Transcribe") - btn.click(gio, inputs=[link, lang, sz, wt], outputs=[text]) - link.change(gio.populate_metadata, inputs=[link], outputs=[img, title]) -block.launch() \ No newline at end of file diff --git a/spaces/JeffJing/ZookChatBot/steamship/cli/manifest_init_wizard.py b/spaces/JeffJing/ZookChatBot/steamship/cli/manifest_init_wizard.py deleted file mode 100644 index a71dc52c4350509ca722a5c1d5f95ad140cecc20..0000000000000000000000000000000000000000 --- a/spaces/JeffJing/ZookChatBot/steamship/cli/manifest_init_wizard.py +++ /dev/null @@ -1,96 +0,0 @@ -import re - -import click -from click import BadParameter - -from steamship import Steamship -from steamship.data.manifest import Manifest, PluginConfig, SteamshipRegistry -from steamship.data.user import User - - -def validate_handle(handle: str) -> str: - if re.fullmatch(r"[a-z\-]+", handle) is not None: - return handle - else: - raise BadParameter("Handle must only include lowercase letters and -") - - -def validate_version_handle(handle: str) -> str: - if re.fullmatch(r"[a-z0-9\-.]+", handle) is not None: - return handle - else: - raise BadParameter("Handle must only include lowercase letters, numbers, . and -") - - -def manifest_init_wizard(client: Steamship): - click.secho( - "It looks like you don't yet have a steamship.json to deploy. Let's create one.", - fg="cyan", - ) - - deployable_type = click.prompt( - "Is this a package or a plugin?", - default="package", - type=click.Choice(["package", "plugin"]), - show_choices=False, - ) - - handle = click.prompt( - f"What handle would you like to use for your {deployable_type}? Valid characters are a-z and -", - value_proc=validate_handle, - ) - - # TODO: claim the handle right here! - - version_handle = "0.0.1" - - plugin_detail = None - if deployable_type == "plugin": - plugin_type = click.prompt( - "What type of plugin is this?", - default="tagger", - type=click.Choice( - ["tagger", "blockifier", "exporter", "fileImporter", "corpusImporter", "generator"] - ), - show_choices=True, - ) - if plugin_type == "tagger": - trainable = click.confirm("Is the plugin trainable?", default=False) - else: - trainable = False - plugin_detail = PluginConfig(isTrainable=trainable, type=plugin_type) - - public = click.confirm(f"Do you want this {deployable_type} to be public?", default=True) - - user = User.current(client) - - author = click.prompt("How should we list your author name?", default=user.handle) - - tagline = None - author_github = None - if public: - tagline = click.prompt(f"Want to give the {deployable_type} a tagline?", default="") - author_github = click.prompt( - "If you'd like this associated with your github account, please your github username", - default="", - ) - - tag_string = click.prompt( - f"Want to give the {deployable_type} some tags? 
(comma separated)", default="Prompt API" - ) - tags = [tag.strip() for tag in tag_string.split(",")] - - return Manifest( - type=deployable_type, - handle=handle, - version=version_handle, - description="", - author=author, - public=public, - plugin=plugin_detail, - build_config={"ignore": ["tests", "examples"]}, - configTemplate={}, - steamshipRegistry=SteamshipRegistry( - tagline=tagline, authorGithub=author_github, authorName=author, tags=tags - ), - ) diff --git a/spaces/JohnSmith9982/ChuanhuChatGPT_Beta/modules/models/azure.py b/spaces/JohnSmith9982/ChuanhuChatGPT_Beta/modules/models/azure.py deleted file mode 100644 index 42cddfbda8cc74e40e114ee4bed46a2f9ff74ce9..0000000000000000000000000000000000000000 --- a/spaces/JohnSmith9982/ChuanhuChatGPT_Beta/modules/models/azure.py +++ /dev/null @@ -1,17 +0,0 @@ -from langchain.chat_models import AzureChatOpenAI -import os - -from .base_model import Base_Chat_Langchain_Client - -# load_config_to_environ(["azure_openai_api_key", "azure_api_base_url", "azure_openai_api_version", "azure_deployment_name"]) - -class Azure_OpenAI_Client(Base_Chat_Langchain_Client): - def setup_model(self): - # inplement this to setup the model then return it - return AzureChatOpenAI( - openai_api_base=os.environ["AZURE_OPENAI_API_BASE_URL"], - openai_api_version=os.environ["AZURE_OPENAI_API_VERSION"], - deployment_name=os.environ["AZURE_DEPLOYMENT_NAME"], - openai_api_key=os.environ["AZURE_OPENAI_API_KEY"], - openai_api_type="azure", - ) \ No newline at end of file diff --git a/spaces/JohnSmith9982/VITS-Umamusume-voice-synthesizer/ONNXVITS_transforms.py b/spaces/JohnSmith9982/VITS-Umamusume-voice-synthesizer/ONNXVITS_transforms.py deleted file mode 100644 index 69b6d1c4b5724a3ef61f8bc3d64fc45c5e51e270..0000000000000000000000000000000000000000 --- a/spaces/JohnSmith9982/VITS-Umamusume-voice-synthesizer/ONNXVITS_transforms.py +++ /dev/null @@ -1,196 +0,0 @@ -import torch -from torch.nn import functional as F - -import numpy as np - - -DEFAULT_MIN_BIN_WIDTH = 1e-3 -DEFAULT_MIN_BIN_HEIGHT = 1e-3 -DEFAULT_MIN_DERIVATIVE = 1e-3 - - -def piecewise_rational_quadratic_transform(inputs, - unnormalized_widths, - unnormalized_heights, - unnormalized_derivatives, - inverse=False, - tails=None, - tail_bound=1., - min_bin_width=DEFAULT_MIN_BIN_WIDTH, - min_bin_height=DEFAULT_MIN_BIN_HEIGHT, - min_derivative=DEFAULT_MIN_DERIVATIVE): - - if tails is None: - spline_fn = rational_quadratic_spline - spline_kwargs = {} - else: - spline_fn = unconstrained_rational_quadratic_spline - spline_kwargs = { - 'tails': tails, - 'tail_bound': tail_bound - } - - outputs, logabsdet = spline_fn( - inputs=inputs, - unnormalized_widths=unnormalized_widths, - unnormalized_heights=unnormalized_heights, - unnormalized_derivatives=unnormalized_derivatives, - inverse=inverse, - min_bin_width=min_bin_width, - min_bin_height=min_bin_height, - min_derivative=min_derivative, - **spline_kwargs - ) - return outputs, logabsdet - - -def searchsorted(bin_locations, inputs, eps=1e-6): - bin_locations[..., -1] += eps - return torch.sum( - inputs[..., None] >= bin_locations, - dim=-1 - ) - 1 - - -def unconstrained_rational_quadratic_spline(inputs, - unnormalized_widths, - unnormalized_heights, - unnormalized_derivatives, - inverse=False, - tails='linear', - tail_bound=1., - min_bin_width=DEFAULT_MIN_BIN_WIDTH, - min_bin_height=DEFAULT_MIN_BIN_HEIGHT, - min_derivative=DEFAULT_MIN_DERIVATIVE): - inside_interval_mask = (inputs >= -tail_bound) & (inputs <= tail_bound) - outside_interval_mask = ~inside_interval_mask - - 
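-    # Allocate zero-filled outputs; inputs outside [-tail_bound, tail_bound] are handled by the identity 'linear' tails below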
outputs = torch.zeros_like(inputs) - logabsdet = torch.zeros_like(inputs) - - if tails == 'linear': - #unnormalized_derivatives = F.pad(unnormalized_derivatives, pad=(1, 1)) - unnormalized_derivatives_ = torch.zeros((1, 1, unnormalized_derivatives.size(2), unnormalized_derivatives.size(3)+2)) - unnormalized_derivatives_[...,1:-1] = unnormalized_derivatives - unnormalized_derivatives = unnormalized_derivatives_ - constant = np.log(np.exp(1 - min_derivative) - 1) - unnormalized_derivatives[..., 0] = constant - unnormalized_derivatives[..., -1] = constant - - outputs[outside_interval_mask] = inputs[outside_interval_mask] - logabsdet[outside_interval_mask] = 0 - else: - raise RuntimeError('{} tails are not implemented.'.format(tails)) - - outputs[inside_interval_mask], logabsdet[inside_interval_mask] = rational_quadratic_spline( - inputs=inputs[inside_interval_mask], - unnormalized_widths=unnormalized_widths[inside_interval_mask, :], - unnormalized_heights=unnormalized_heights[inside_interval_mask, :], - unnormalized_derivatives=unnormalized_derivatives[inside_interval_mask, :], - inverse=inverse, - left=-tail_bound, right=tail_bound, bottom=-tail_bound, top=tail_bound, - min_bin_width=min_bin_width, - min_bin_height=min_bin_height, - min_derivative=min_derivative - ) - - return outputs, logabsdet - -def rational_quadratic_spline(inputs, - unnormalized_widths, - unnormalized_heights, - unnormalized_derivatives, - inverse=False, - left=0., right=1., bottom=0., top=1., - min_bin_width=DEFAULT_MIN_BIN_WIDTH, - min_bin_height=DEFAULT_MIN_BIN_HEIGHT, - min_derivative=DEFAULT_MIN_DERIVATIVE): - if torch.min(inputs) < left or torch.max(inputs) > right: - raise ValueError('Input to a transform is not within its domain') - - num_bins = unnormalized_widths.shape[-1] - - if min_bin_width * num_bins > 1.0: - raise ValueError('Minimal bin width too large for the number of bins') - if min_bin_height * num_bins > 1.0: - raise ValueError('Minimal bin height too large for the number of bins') - - widths = F.softmax(unnormalized_widths, dim=-1) - widths = min_bin_width + (1 - min_bin_width * num_bins) * widths - cumwidths = torch.cumsum(widths, dim=-1) - cumwidths = F.pad(cumwidths, pad=(1, 0), mode='constant', value=0.0) - cumwidths = (right - left) * cumwidths + left - cumwidths[..., 0] = left - cumwidths[..., -1] = right - widths = cumwidths[..., 1:] - cumwidths[..., :-1] - - derivatives = min_derivative + F.softplus(unnormalized_derivatives) - - heights = F.softmax(unnormalized_heights, dim=-1) - heights = min_bin_height + (1 - min_bin_height * num_bins) * heights - cumheights = torch.cumsum(heights, dim=-1) - cumheights = F.pad(cumheights, pad=(1, 0), mode='constant', value=0.0) - cumheights = (top - bottom) * cumheights + bottom - cumheights[..., 0] = bottom - cumheights[..., -1] = top - heights = cumheights[..., 1:] - cumheights[..., :-1] - - if inverse: - bin_idx = searchsorted(cumheights, inputs)[..., None] - else: - bin_idx = searchsorted(cumwidths, inputs)[..., None] - - input_cumwidths = cumwidths.gather(-1, bin_idx)[..., 0] - input_bin_widths = widths.gather(-1, bin_idx)[..., 0] - - input_cumheights = cumheights.gather(-1, bin_idx)[..., 0] - delta = heights / widths - input_delta = delta.gather(-1, bin_idx)[..., 0] - - input_derivatives = derivatives.gather(-1, bin_idx)[..., 0] - input_derivatives_plus_one = derivatives[..., 1:].gather(-1, bin_idx)[..., 0] - - input_heights = heights.gather(-1, bin_idx)[..., 0] - - if inverse: - a = (((inputs - input_cumheights) * (input_derivatives - + 
input_derivatives_plus_one - - 2 * input_delta) - + input_heights * (input_delta - input_derivatives))) - b = (input_heights * input_derivatives - - (inputs - input_cumheights) * (input_derivatives - + input_derivatives_plus_one - - 2 * input_delta)) - c = - input_delta * (inputs - input_cumheights) - - discriminant = b.pow(2) - 4 * a * c - assert (discriminant >= 0).all() - - root = (2 * c) / (-b - torch.sqrt(discriminant)) - outputs = root * input_bin_widths + input_cumwidths - - theta_one_minus_theta = root * (1 - root) - denominator = input_delta + ((input_derivatives + input_derivatives_plus_one - 2 * input_delta) - * theta_one_minus_theta) - derivative_numerator = input_delta.pow(2) * (input_derivatives_plus_one * root.pow(2) - + 2 * input_delta * theta_one_minus_theta - + input_derivatives * (1 - root).pow(2)) - logabsdet = torch.log(derivative_numerator) - 2 * torch.log(denominator) - - return outputs, -logabsdet - else: - theta = (inputs - input_cumwidths) / input_bin_widths - theta_one_minus_theta = theta * (1 - theta) - - numerator = input_heights * (input_delta * theta.pow(2) - + input_derivatives * theta_one_minus_theta) - denominator = input_delta + ((input_derivatives + input_derivatives_plus_one - 2 * input_delta) - * theta_one_minus_theta) - outputs = input_cumheights + numerator / denominator - - derivative_numerator = input_delta.pow(2) * (input_derivatives_plus_one * theta.pow(2) - + 2 * input_delta * theta_one_minus_theta - + input_derivatives * (1 - theta).pow(2)) - logabsdet = torch.log(derivative_numerator) - 2 * torch.log(denominator) - - return outputs, logabsdet diff --git a/spaces/Justin-Choo/Multi_diffuser-quick-diffusion-CN-ZH/README.md b/spaces/Justin-Choo/Multi_diffuser-quick-diffusion-CN-ZH/README.md deleted file mode 100644 index cadaf9d42fc9b5b1260e9b99e815064a95e99854..0000000000000000000000000000000000000000 --- a/spaces/Justin-Choo/Multi_diffuser-quick-diffusion-CN-ZH/README.md +++ /dev/null @@ -1,12 +0,0 @@ ---- -title: Quick Diffusion Multi-diffusers -emoji: 🎩 -colorFrom: red -colorTo: blue -sdk: gradio -sdk_version: 3.39.0 -app_file: I-am-Justin.py -pinned: true ---- - -Check out the configuration reference at https://huggingface.co/docs/hub/spaces-config-reference \ No newline at end of file diff --git a/spaces/KPCGD/bingo/next.config.js b/spaces/KPCGD/bingo/next.config.js deleted file mode 100644 index 0e6ccd7fbc91d0459eaaff3e968ce0556789c605..0000000000000000000000000000000000000000 --- a/spaces/KPCGD/bingo/next.config.js +++ /dev/null @@ -1,38 +0,0 @@ -/** @type {import('next').NextConfig} */ -const nextConfig = { - // output: 'export', - // assetPrefix: '.', - webpack: (config, { isServer }) => { - if (!isServer) { - config.resolve = { - ...config.resolve, - fallback: { - 'bufferutil': false, - 'utf-8-validate': false, - http: false, - https: false, - stream: false, - // fixes proxy-agent dependencies - net: false, - dns: false, - tls: false, - assert: false, - // fixes next-i18next dependencies - path: false, - fs: false, - // fixes mapbox dependencies - events: false, - // fixes sentry dependencies - process: false - } - }; - } - config.module.exprContextCritical = false; - - return config; - }, -} - -module.exports = (...args) => { - return nextConfig -} diff --git a/spaces/Kayson/InstructDiffusion/dataset/editing/edit_zip_dataset.py b/spaces/Kayson/InstructDiffusion/dataset/editing/edit_zip_dataset.py deleted file mode 100644 index 0d87467c24dee8175bf40b134e786884175b1e7d..0000000000000000000000000000000000000000 --- 
a/spaces/Kayson/InstructDiffusion/dataset/editing/edit_zip_dataset.py +++ /dev/null @@ -1,494 +0,0 @@ -# -------------------------------------------------------- -# InstructDiffusion -# Based on instruct-pix2pix (https://github.com/timothybrooks/instruct-pix2pix) -# Modified by Tiankai Hang (tkhang@seu.edu.cn) -# -------------------------------------------------------- - -from __future__ import annotations - -import os -import json -import math -from pathlib import Path -from typing import Any - -import numpy as np -import torch -import torchvision -from einops import rearrange -import PIL -from PIL import Image -from torch.utils.data import Dataset -from tqdm.auto import tqdm - -import random - -from dataset.utils.zip_manager import MultipleZipManager - - -if hasattr(Image, "Resampling"): - # deprecated in pillow >= 10.0.0 - RESAMPLING_METHOD = Image.Resampling.LANCZOS -else: - RESAMPLING_METHOD = Image.LANCZOS - - -class FilteredIP2PDataset(Dataset): - def __init__( - self, - path: str, - split: str = "train", - splits: tuple[float, float, float] = (0.9, 0.05, 0.05), - min_resize_res: int = 256, - max_resize_res: int = 256, - crop_res: int = 256, - flip_prob: float = 0.0, - zip_start_index: int = 0, - zip_end_index: int = 30, - instruct: bool = False, - max_num_images = None, - sample_weight: float = 1.0, - reverse_version: bool = False, - **kwargs - ): - assert split in ("train", "val", "test") - assert sum(splits) == 1 - self.path = path - self.min_resize_res = min_resize_res - self.max_resize_res = max_resize_res - self.crop_res = crop_res - self.flip_prob = flip_prob - self.instruct = instruct - - zip_list = [] - for i in range(zip_start_index, zip_end_index): - name = "shard-"+str(i).zfill(2)+'.zip' - zip_list.append(os.path.join(self.path, name)) - - self.image_dataset = MultipleZipManager(zip_list, 'image', sync=True) # sync=True is faster - - with open(Path(self.path, "seeds.json")) as f: - self.seeds = json.load(f) - - split_0, split_1 = { - "train": (0.0, splits[0]), - "val": (splits[0], splits[0] + splits[1]), - "test": (splits[0] + splits[1], 1.0), - }[split] - - idx_0 = math.floor(split_0 * len(self.seeds)) - idx_1 = math.floor(split_1 * len(self.seeds)) - self.seeds = self.seeds[idx_0:idx_1] - - if max_num_images is not None and max_num_images > 0: - self.seeds = self.seeds[:min(max_num_images, len(self.seeds))] - - # flatten seeds - self.seeds = [(name, seed) for name, seeds in self.seeds for seed in seeds] - self.sample_weight = sample_weight - - while True: - try: - with open('filtered_ids_ip2p.json') as json_file: - filtered_ids = json.load(json_file) - break - except: - # download json file from url - if reverse_version: - os.system('wget https://github.com/TiankaiHang/storage/releases/download/readout/filtered_ids_ip2p.json') - else: - os.system("wget https://github.com/TiankaiHang/storage/releases/download/readout/filtered-ip2p-thres5.5-0.5.json -O filtered_ids_ip2p.json") - - print("seeds:", len(self.seeds)) - # self.seeds = [seed for seed in self.seeds if seed[1] in filtered_ids] - # faster - # self.seeds = list(filter(lambda seed: seed[1] in filtered_ids, self.seeds)) - # to numpy and faster in parallel - # import pdb; pdb.set_trace() - _seeds = [f"{a}/{b}" for a, b in self.seeds] - self.seeds = np.array(self.seeds) - _seeds = np.array(_seeds) - self.seeds = self.seeds[np.isin(_seeds, filtered_ids)] - self.seeds = self.seeds.tolist() - - self.return_add_kwargs = kwargs.get("return_add_kwargs", False) - - def __len__(self) -> int: - return int(len(self.seeds) * 
self.sample_weight) - - def __getitem__(self, i: int) -> dict[str, Any]: - # name, seeds = self.seeds[i] - if self.sample_weight >= 1: - i = i % len(self.seeds) - else: - remainder = math.ceil(i / self.sample_weight - int(i / self.sample_weight)) - i = int(i / self.sample_weight) + random.randint(0, int(1 / self.sample_weight) - 1 + remainder) - - name, seed = self.seeds[i] - propt_name = name + "/prompt.json" - if not self.image_dataset.managers[self.image_dataset.mapping[propt_name]]._init: - self.image_dataset.managers[self.image_dataset.mapping[propt_name]].initialize(close=False) - # propt_name = name + "/prompt.json" - byteflow = self.image_dataset.managers[self.image_dataset.mapping[propt_name]].zip_fd.read(propt_name) - texts = json.loads(byteflow.decode('utf-8')) - prompt = texts["edit"] - if self.instruct: - prompt = "Image Editing: " + prompt - - text_input = texts["input"] - text_output = texts["output"] - - # image_0 = Image.open(propt_dir.joinpath(f"{seed}_0.jpg")) - # image_1 = Image.open(propt_dir.joinpath(f"{seed}_1.jpg")) - image_0 = self.image_dataset.get(name+f"/{seed}_0.jpg") - image_1 = self.image_dataset.get(name+f"/{seed}_1.jpg") - - reize_res = torch.randint(self.min_resize_res, self.max_resize_res + 1, ()).item() - image_0 = image_0.resize((reize_res, reize_res), RESAMPLING_METHOD) - image_1 = image_1.resize((reize_res, reize_res), RESAMPLING_METHOD) - - image_0 = rearrange(2 * torch.tensor(np.array(image_0)).float() / 255 - 1, "h w c -> c h w") - image_1 = rearrange(2 * torch.tensor(np.array(image_1)).float() / 255 - 1, "h w c -> c h w") - - crop = torchvision.transforms.RandomCrop(self.crop_res) - flip = torchvision.transforms.RandomHorizontalFlip(float(self.flip_prob)) - image_0, image_1 = flip(crop(torch.cat((image_0, image_1)))).chunk(2) - - if self.return_add_kwargs: - add_kwargs = dict( - name=name, - seed=seed, - text_input=text_input, - text_output=text_output, - ) - else: - add_kwargs = {} - - return dict(edited=image_1, edit=dict(c_concat=image_0, c_crossattn=prompt), **add_kwargs) - - -class GIERDataset(Dataset): - def __init__( - self, - path: str, - split: str = "train", - splits: tuple[float, float, float] = (0.9, 0.05, 0.05), - min_resize_res: int = 256, - max_resize_res: int = 256, - crop_res: int = 256, - flip_prob: float = 0.0, - zip_start_index: int = 0, - zip_end_index: int = 30, - sample_weight: float = 1.0, - instruct: bool = False, - ): - assert split in ("train", "val", "test") - assert sum(splits) == 1 - self.path = path - self.min_resize_res = min_resize_res - self.max_resize_res = max_resize_res - self.crop_res = crop_res - self.flip_prob = flip_prob - self.instruct = instruct - - # self.meta = torch.load(Path(self.path, "GIER.json"), map_location="cpu") - # load json file - with open(Path(self.path, "GIER_new.json")) as json_file: - self.meta = json.load(json_file) - - print(f"||||||||||||||||||||||||||||| \n Loaded {len(self.meta)} images from json file") - - input_does_not_exist = [] - output_does_not_exist = [] - # filter out out images that do not exist - if not os.path.exists(os.path.join(self.path, "filtered_meta_new.pt")): - filtered_meta = [] - for i in tqdm(range(len(self.meta))): - input_path = os.path.join(self.path, "warped", self.meta[i]["input"]) - output_path = os.path.join(self.path, "warped", self.meta[i]["output"]) - - if not os.path.exists(input_path): - input_path = os.path.join(self.path, "images", self.meta[i]["input"]) - if not os.path.exists(input_path): - input_does_not_exist.append(input_path) - - if not 
os.path.exists(output_path): - output_path = os.path.join(self.path, "images", self.meta[i]["output"]) - if not os.path.exists(output_path): - output_does_not_exist.append(output_path) - - if os.path.exists(input_path) and os.path.exists(output_path): - filtered_meta.append( - dict( - input=input_path, - output=output_path, - prompts=self.meta[i]["prompts"], - ) - ) - else: - print(f"\n {input_path} or {output_path} does not exist") - torch.save(filtered_meta, os.path.join(self.path, "filtered_meta_new.pt")) - else: - filtered_meta = torch.load(os.path.join(self.path, "filtered_meta_new.pt"), map_location="cpu") - - self.meta = filtered_meta - print(f"||||||||||||||||||||||||||||| \n Filtered {len(self.meta)} images") - for i in range(len(self.meta)): - self.meta[i]['input'] = self.meta[i]['input'].replace('/mnt/external/datasets/GIER_editing_data/', self.path) - self.meta[i]['output'] = self.meta[i]['output'].replace('/mnt/external/datasets/GIER_editing_data/', self.path) - - # write input_does_not_exist and output_does_not_exist to file - with open(Path(self.path, f"input_does_not_exist.txt"), "w") as f: - for item in input_does_not_exist: - f.write("%s\n" % item) - with open(Path(self.path, f"output_does_not_exist.txt"), "w") as f: - for item in output_does_not_exist: - f.write("%s\n" % item) - - split_0, split_1 = { - "train": (0.0, splits[0]), - "val": (splits[0], splits[0] + splits[1]), - "test": (splits[0] + splits[1], 1.0), - }[split] - - idx_0 = math.floor(split_0 * len(self.meta)) - idx_1 = math.floor(split_1 * len(self.meta)) - - self.meta = self.meta[idx_0:idx_1] - self.sample_weight = sample_weight - print('original GIER', len(self.meta)) - - def __len__(self) -> int: - return int(len(self.meta) * self.sample_weight) - - def __getitem__(self, i: int) -> dict[str, Any]: - if self.sample_weight >= 1: - i = i % len(self.meta) - else: - i = int(i / self.sample_weight) + random.randint(0, int(1 / self.sample_weight) - 1) - - # prompt = self.meta[i]["prompts"] - prompt = random.choice(self.meta[i]["prompts"]) - try: - image_0 = Image.open(self.meta[i]["input"]).convert("RGB") - image_1 = Image.open(self.meta[i]["output"]).convert("RGB") - except PIL.UnidentifiedImageError: - print(f"\n {self.meta[i]['input']} or {self.meta[i]['output']} is not a valid image") - i = random.randint(0, len(self.meta) - 1) - return self.__getitem__(i) - - reize_res = torch.randint(self.min_resize_res, self.max_resize_res + 1, ()).item() - image_0 = image_0.resize((reize_res, reize_res), RESAMPLING_METHOD) - image_1 = image_1.resize((reize_res, reize_res), RESAMPLING_METHOD) - - image_0 = rearrange(2 * torch.tensor(np.array(image_0)).float() / 255 - 1, "h w c -> c h w") - image_1 = rearrange(2 * torch.tensor(np.array(image_1)).float() / 255 - 1, "h w c -> c h w") - - crop = torchvision.transforms.RandomCrop(self.crop_res) - flip = torchvision.transforms.RandomHorizontalFlip(float(self.flip_prob)) - image_0, image_1 = flip(crop(torch.cat((image_0, image_1)))).chunk(2) - - if self.instruct: - prompt = "Image Editing: " + prompt - - return dict(edited=image_1, edit=dict(c_concat=image_0, c_crossattn=prompt)) - - -class GQAInpaintDataset(Dataset): - r""" - shoud download and unzip the data first - - ``` - mkdir -p ../datasets - cd ../datasets - - # if file exists, then skip - if [ ! -f "gqa-inpaint.zip" ]; then - sudo azcopy copy "https://bingdatawu2.blob.core.windows.net/genrecog/private/t-thang/gqa-inpaint.zip${TOKEN}" . - unzip gqa-inpaint.zip -d gqa-inpaint > /dev/null - fi - - if [ ! 
-f "images.zip" ]; then - sudo azcopy copy "https://bingdatawu2.blob.core.windows.net/genrecog/private/t-thang/images.zip${TOKEN}" . - unzip images.zip > /dev/null - fi - ``` - - """ - def __init__(self, **kwargs): - # load from json ../datasets/gqa-inpaint/meta_info.json - self.path = kwargs.get("path", "../datasets/gqa-inpaint") - self.instruct = kwargs.get("instruct", False) - with open(self.path + "/meta_info.json", "r") as f: - self.meta_info = json.load(f) - - self.min_resize_res = kwargs.get("min_resize_res", 256) - self.max_resize_res = kwargs.get("max_resize_res", 256) - self.crop_res = kwargs.get("crop_res", 256) - - self.flip_prob = kwargs.get("flip_prob", 0.5) - - def __len__(self): - return len(self.meta_info) - - def __getitem__(self, i): - item = self.meta_info[i] - src_img = Image.open(item["source_image_path"].replace("../datasets", self.path)).convert("RGB") - tgt_img = Image.open(item["target_image_path"].replace("../datasets/gqa-inpaint", self.path)).convert("RGB") - - image_0 = src_img - image_1 = tgt_img - - reize_res = torch.randint(self.min_resize_res, self.max_resize_res + 1, ()).item() - image_0 = image_0.resize((reize_res, reize_res), RESAMPLING_METHOD) - image_1 = image_1.resize((reize_res, reize_res), RESAMPLING_METHOD) - instruction = item["instruction"] - if self.instruct: - instruction = "Image Editing: " + instruction - # return image_0, image_1, instruction - - image_0 = rearrange(2 * torch.tensor(np.array(image_0)).float() / 255 - 1, "h w c -> c h w") - image_1 = rearrange(2 * torch.tensor(np.array(image_1)).float() / 255 - 1, "h w c -> c h w") - - crop = torchvision.transforms.RandomCrop(self.crop_res) - flip = torchvision.transforms.RandomHorizontalFlip(float(self.flip_prob)) - image_0, image_1 = flip(crop(torch.cat((image_0, image_1)))).chunk(2) - - return dict(edited=image_1, edit=dict(c_concat=image_0, c_crossattn=instruction)) - - -class MagicBrushDataset(Dataset): - def __init__( - self, - path: str, - split: str = "train", - splits: tuple[float, float, float] = (0.9, 0.05, 0.05), - min_resize_res: int = 256, - max_resize_res: int = 256, - crop_res: int = 256, - flip_prob: float = 0.0, - zip_start_index: int = 0, - zip_end_index: int = 30, - len_dataset: int = -1, - instruct: bool = False, - sample_weight: float = 1.0, - ): - assert split in ("train", "val", "test") - assert sum(splits) == 1 - self.path = path - self.min_resize_res = min_resize_res - self.max_resize_res = max_resize_res - self.crop_res = crop_res - self.flip_prob = flip_prob - self.instruct = instruct - self.sample_weight = sample_weight - - self.meta_path = os.path.join(self.path, "magic_train.json") - with open(self.meta_path, "r") as f: - self.meta = json.load(f) - - def __len__(self) -> int: - return int(len(self.meta) * self.sample_weight) - - def __getitem__(self, i: int) -> dict[str, Any]: - if self.sample_weight >= 1: - i = i % len(self.meta) - else: - i = int(i / self.sample_weight) + random.randint(0, int(1 / self.sample_weight) - 1) - - item = self.meta[i] - try: - image_0 = Image.open(os.path.join(self.path, item["input"])).convert("RGB") - image_1 = Image.open(os.path.join(self.path, item["edited"])).convert("RGB") - except (PIL.UnidentifiedImageError, FileNotFoundError): - print(f"\n {self.path}/{item['input']} or {self.path}/{item['edited']} is not a valid image") - i = random.randint(0, len(self.meta) - 1) - return self.__getitem__(i) - prompt = item["instruction"] - - reize_res = torch.randint(self.min_resize_res, self.max_resize_res + 1, ()).item() - image_0 = 
image_0.resize((reize_res, reize_res), RESAMPLING_METHOD) - image_1 = image_1.resize((reize_res, reize_res), RESAMPLING_METHOD) - - if self.instruct: - prompt = "Image Editing: " + prompt - # return image_0, image_1, prompt - - image_0 = rearrange(2 * torch.tensor(np.array(image_0)).float() / 255 - 1, "h w c -> c h w") - image_1 = rearrange(2 * torch.tensor(np.array(image_1)).float() / 255 - 1, "h w c -> c h w") - - crop = torchvision.transforms.RandomCrop(self.crop_res) - flip = torchvision.transforms.RandomHorizontalFlip(float(self.flip_prob)) - image_0, image_1 = flip(crop(torch.cat((image_0, image_1)))).chunk(2) - - return dict(edited=image_1, edit=dict(c_concat=image_0, c_crossattn=prompt)) - - -class IEIWDataset(Dataset): - def __init__( - self, - path: str, - split: str = "train", - splits: tuple[float, float, float] = (0.9, 0.05, 0.05), - min_resize_res: int = 256, - max_resize_res: int = 256, - crop_res: int = 256, - flip_prob: float = 0.0, - zip_start_index: int = 0, - zip_end_index: int = 30, - sample_weight: float = 1.0, - instruct: bool = False, - ): - assert split in ("train", "val", "test") - assert sum(splits) == 1 - self.path = path - self.min_resize_res = min_resize_res - self.max_resize_res = max_resize_res - self.crop_res = crop_res - self.flip_prob = flip_prob - self.instruct = instruct - - self.meta_path = os.path.join(self.path, "meta_infov1.json") - with open(self.meta_path, "r") as f: - self.meta = json.load(f) - self.sample_weight = sample_weight - print('original synthetic', len(self.meta)) - - def __len__(self) -> int: - return int(len(self.meta) * self.sample_weight) - - def __getitem__(self, i: int) -> dict[str, Any]: - if self.sample_weight >= 1: - i = i % len(self.meta) - else: - i = int(i / self.sample_weight) + random.randint(0, int(1 / self.sample_weight) - 1) - - item = self.meta[i] - item['input'] = item['input'].replace('/mnt/external/tmp/2023/06/11/', self.path) - item['edited'] = item['edited'].replace('/mnt/external/tmp/2023/06/11/', self.path) - try: - image_0 = Image.open(item["input"]).convert("RGB") - image_1 = Image.open(item["edited"]).convert("RGB") - except (PIL.UnidentifiedImageError, FileNotFoundError): - print(f"\n {item['input']} or {item['edited']} is not a valid image") - i = random.randint(0, len(self.meta) - 1) - return self.__getitem__(i) - prompt = item["instruction"] - - reize_res = torch.randint(self.min_resize_res, self.max_resize_res + 1, ()).item() - image_0 = image_0.resize((reize_res, reize_res), RESAMPLING_METHOD) - image_1 = image_1.resize((reize_res, reize_res), RESAMPLING_METHOD) - if self.instruct: - prompt = "Image Editing: " + prompt - # return image_0, image_1, prompt - - image_0 = rearrange(2 * torch.tensor(np.array(image_0)).float() / 255 - 1, "h w c -> c h w") - image_1 = rearrange(2 * torch.tensor(np.array(image_1)).float() / 255 - 1, "h w c -> c h w") - - crop = torchvision.transforms.RandomCrop(self.crop_res) - flip = torchvision.transforms.RandomHorizontalFlip(float(self.flip_prob)) - image_0, image_1 = flip(crop(torch.cat((image_0, image_1)))).chunk(2) - - return dict(edited=image_1, edit=dict(c_concat=image_0, c_crossattn=prompt)) - - diff --git a/spaces/Kayson/InstructDiffusion/scripts/download_pretrained_sd.sh b/spaces/Kayson/InstructDiffusion/scripts/download_pretrained_sd.sh deleted file mode 100644 index 189105fecca79403ebb6439368e65dc00b6321ab..0000000000000000000000000000000000000000 --- a/spaces/Kayson/InstructDiffusion/scripts/download_pretrained_sd.sh +++ /dev/null @@ -1,7 +0,0 @@ -#!/bin/bash - 
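-# Fetch the Stable Diffusion v1.5 checkpoint and the ft-MSE VAE from Hugging Face
-# into stable_diffusion/models/ldm/stable-diffusion-v1/ (the two curl calls below).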
-SCRIPT_DIR=$( cd -- "$( dirname -- "${BASH_SOURCE[0]}" )" &> /dev/null && pwd )
-
-mkdir -p $SCRIPT_DIR/../stable_diffusion/models/ldm/stable-diffusion-v1
-curl -L https://huggingface.co/runwayml/stable-diffusion-v1-5/resolve/main/v1-5-pruned-emaonly.ckpt -o $SCRIPT_DIR/../stable_diffusion/models/ldm/stable-diffusion-v1/v1-5-pruned-emaonly.ckpt
-curl -L https://huggingface.co/stabilityai/sd-vae-ft-mse-original/resolve/main/vae-ft-mse-840000-ema-pruned.ckpt -o $SCRIPT_DIR/../stable_diffusion/models/ldm/stable-diffusion-v1/vae-ft-mse-840000-ema-pruned.ckpt
diff --git a/spaces/Kevin676/ChatGPT-with-Voice-Cloning-in-Chinese/mkgui/base/components/types.py b/spaces/Kevin676/ChatGPT-with-Voice-Cloning-in-Chinese/mkgui/base/components/types.py
deleted file mode 100644
index 125809a81b306ddeab4cf6ab0ba6abdbe8d0c4ed..0000000000000000000000000000000000000000
--- a/spaces/Kevin676/ChatGPT-with-Voice-Cloning-in-Chinese/mkgui/base/components/types.py
+++ /dev/null
@@ -1,46 +0,0 @@
-import base64
-from typing import Any, Dict, overload
-
-
-class FileContent(str):
-    def as_bytes(self) -> bytes:
-        return base64.b64decode(self, validate=True)
-
-    def as_str(self) -> str:
-        return self.as_bytes().decode()
-
-    @classmethod
-    def __modify_schema__(cls, field_schema: Dict[str, Any]) -> None:
-        field_schema.update(format="byte")
-
-    @classmethod
-    def __get_validators__(cls) -> Any:  # type: ignore
-        yield cls.validate
-
-    @classmethod
-    def validate(cls, value: Any) -> "FileContent":
-        if isinstance(value, FileContent):
-            return value
-        elif isinstance(value, str):
-            return FileContent(value)
-        elif isinstance(value, (bytes, bytearray, memoryview)):
-            return FileContent(base64.b64encode(value).decode())
-        else:
-            raise Exception("Wrong type")
-
-# # Currently unusable: the browser provides no way to select a folder
-# class DirectoryContent(FileContent):
-#     @classmethod
-#     def __modify_schema__(cls, field_schema: Dict[str, Any]) -> None:
-#         field_schema.update(format="path")
-
-#     @classmethod
-#     def validate(cls, value: Any) -> "DirectoryContent":
-#         if isinstance(value, DirectoryContent):
-#             return value
-#         elif isinstance(value, str):
-#             return DirectoryContent(value)
-#         elif isinstance(value, (bytes, bytearray, memoryview)):
-#             return DirectoryContent(base64.b64encode(value).decode())
-#         else:
-#             raise Exception("Wrong type")
diff --git a/spaces/Kirihasan/rvc-jjjo/infer_pack/commons.py b/spaces/Kirihasan/rvc-jjjo/infer_pack/commons.py
deleted file mode 100644
index 54470986f37825b35d90d7efa7437d1c26b87215..0000000000000000000000000000000000000000
--- a/spaces/Kirihasan/rvc-jjjo/infer_pack/commons.py
+++ /dev/null
@@ -1,166 +0,0 @@
-import math
-import numpy as np
-import torch
-from torch import nn
-from torch.nn import functional as F
-
-
-def init_weights(m, mean=0.0, std=0.01):
-    classname = m.__class__.__name__
-    if classname.find("Conv") != -1:
-        m.weight.data.normal_(mean, std)
-
-
-def get_padding(kernel_size, dilation=1):
-    return int((kernel_size * dilation - dilation) / 2)
-
-
-def convert_pad_shape(pad_shape):
-    l = pad_shape[::-1]
-    pad_shape = [item for sublist in l for item in sublist]
-    return pad_shape
-
-
-def kl_divergence(m_p, logs_p, m_q, logs_q):
-    """KL(P||Q)"""
-    kl = (logs_q - logs_p) - 0.5
-    kl += (
-        0.5 * (torch.exp(2.0 * logs_p) + ((m_p - m_q) ** 2)) * torch.exp(-2.0 * logs_q)
-    )
-    return kl
-
-
-def rand_gumbel(shape):
-    """Sample from the Gumbel distribution, protect from overflows."""
-    uniform_samples = torch.rand(shape) * 0.99998 + 0.00001
-    return -torch.log(-torch.log(uniform_samples))
-
-
-def rand_gumbel_like(x):
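-    # Draw Gumbel(0, 1) noise with the same shape as x, then match its dtype and device.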
- g = rand_gumbel(x.size()).to(dtype=x.dtype, device=x.device) - return g - - -def slice_segments(x, ids_str, segment_size=4): - ret = torch.zeros_like(x[:, :, :segment_size]) - for i in range(x.size(0)): - idx_str = ids_str[i] - idx_end = idx_str + segment_size - ret[i] = x[i, :, idx_str:idx_end] - return ret - - -def slice_segments2(x, ids_str, segment_size=4): - ret = torch.zeros_like(x[:, :segment_size]) - for i in range(x.size(0)): - idx_str = ids_str[i] - idx_end = idx_str + segment_size - ret[i] = x[i, idx_str:idx_end] - return ret - - -def rand_slice_segments(x, x_lengths=None, segment_size=4): - b, d, t = x.size() - if x_lengths is None: - x_lengths = t - ids_str_max = x_lengths - segment_size + 1 - ids_str = (torch.rand([b]).to(device=x.device) * ids_str_max).to(dtype=torch.long) - ret = slice_segments(x, ids_str, segment_size) - return ret, ids_str - - -def get_timing_signal_1d(length, channels, min_timescale=1.0, max_timescale=1.0e4): - position = torch.arange(length, dtype=torch.float) - num_timescales = channels // 2 - log_timescale_increment = math.log(float(max_timescale) / float(min_timescale)) / ( - num_timescales - 1 - ) - inv_timescales = min_timescale * torch.exp( - torch.arange(num_timescales, dtype=torch.float) * -log_timescale_increment - ) - scaled_time = position.unsqueeze(0) * inv_timescales.unsqueeze(1) - signal = torch.cat([torch.sin(scaled_time), torch.cos(scaled_time)], 0) - signal = F.pad(signal, [0, 0, 0, channels % 2]) - signal = signal.view(1, channels, length) - return signal - - -def add_timing_signal_1d(x, min_timescale=1.0, max_timescale=1.0e4): - b, channels, length = x.size() - signal = get_timing_signal_1d(length, channels, min_timescale, max_timescale) - return x + signal.to(dtype=x.dtype, device=x.device) - - -def cat_timing_signal_1d(x, min_timescale=1.0, max_timescale=1.0e4, axis=1): - b, channels, length = x.size() - signal = get_timing_signal_1d(length, channels, min_timescale, max_timescale) - return torch.cat([x, signal.to(dtype=x.dtype, device=x.device)], axis) - - -def subsequent_mask(length): - mask = torch.tril(torch.ones(length, length)).unsqueeze(0).unsqueeze(0) - return mask - - -@torch.jit.script -def fused_add_tanh_sigmoid_multiply(input_a, input_b, n_channels): - n_channels_int = n_channels[0] - in_act = input_a + input_b - t_act = torch.tanh(in_act[:, :n_channels_int, :]) - s_act = torch.sigmoid(in_act[:, n_channels_int:, :]) - acts = t_act * s_act - return acts - - -def convert_pad_shape(pad_shape): - l = pad_shape[::-1] - pad_shape = [item for sublist in l for item in sublist] - return pad_shape - - -def shift_1d(x): - x = F.pad(x, convert_pad_shape([[0, 0], [0, 0], [1, 0]]))[:, :, :-1] - return x - - -def sequence_mask(length, max_length=None): - if max_length is None: - max_length = length.max() - x = torch.arange(max_length, dtype=length.dtype, device=length.device) - return x.unsqueeze(0) < length.unsqueeze(1) - - -def generate_path(duration, mask): - """ - duration: [b, 1, t_x] - mask: [b, 1, t_y, t_x] - """ - device = duration.device - - b, _, t_y, t_x = mask.shape - cum_duration = torch.cumsum(duration, -1) - - cum_duration_flat = cum_duration.view(b * t_x) - path = sequence_mask(cum_duration_flat, t_y).to(mask.dtype) - path = path.view(b, t_x, t_y) - path = path - F.pad(path, convert_pad_shape([[0, 0], [1, 0], [0, 0]]))[:, :-1] - path = path.unsqueeze(1).transpose(2, 3) * mask - return path - - -def clip_grad_value_(parameters, clip_value, norm_type=2): - if isinstance(parameters, torch.Tensor): - parameters = 
[parameters] - parameters = list(filter(lambda p: p.grad is not None, parameters)) - norm_type = float(norm_type) - if clip_value is not None: - clip_value = float(clip_value) - - total_norm = 0 - for p in parameters: - param_norm = p.grad.data.norm(norm_type) - total_norm += param_norm.item() ** norm_type - if clip_value is not None: - p.grad.data.clamp_(min=-clip_value, max=clip_value) - total_norm = total_norm ** (1.0 / norm_type) - return total_norm diff --git a/spaces/Lamai/LAMAIGPT/autogpt/app.py b/spaces/Lamai/LAMAIGPT/autogpt/app.py deleted file mode 100644 index 58d9f7164ddfbb5019b072d789dc2fa6205dc9d3..0000000000000000000000000000000000000000 --- a/spaces/Lamai/LAMAIGPT/autogpt/app.py +++ /dev/null @@ -1,330 +0,0 @@ -""" Command and Control """ -import json -from typing import Dict, List, NoReturn, Union - -from autogpt.agent.agent_manager import AgentManager -from autogpt.commands.analyze_code import analyze_code -from autogpt.commands.audio_text import read_audio_from_file -from autogpt.commands.execute_code import ( - execute_python_file, - execute_shell, - execute_shell_popen, -) -from autogpt.commands.file_operations import ( - append_to_file, - delete_file, - download_file, - read_file, - search_files, - write_to_file, -) -from autogpt.commands.git_operations import clone_repository -from autogpt.commands.google_search import google_official_search, google_search -from autogpt.commands.image_gen import generate_image -from autogpt.commands.improve_code import improve_code -from autogpt.commands.twitter import send_tweet -from autogpt.commands.web_requests import scrape_links, scrape_text -from autogpt.commands.web_selenium import browse_website -from autogpt.commands.write_tests import write_tests -from autogpt.config import Config -from autogpt.json_utils.json_fix_llm import fix_and_parse_json -from autogpt.memory import get_memory -from autogpt.processing.text import summarize_text -from autogpt.speech import say_text - -CFG = Config() -AGENT_MANAGER = AgentManager() - - -def is_valid_int(value: str) -> bool: - """Check if the value is a valid integer - - Args: - value (str): The value to check - - Returns: - bool: True if the value is a valid integer, False otherwise - """ - try: - int(value) - return True - except ValueError: - return False - - -def get_command(response_json: Dict): - """Parse the response and return the command name and arguments - - Args: - response_json (json): The response from the AI - - Returns: - tuple: The command name and arguments - - Raises: - json.decoder.JSONDecodeError: If the response is not valid JSON - - Exception: If any other error occurs - """ - try: - if "command" not in response_json: - return "Error:", "Missing 'command' object in JSON" - - if not isinstance(response_json, dict): - return "Error:", f"'response_json' object is not dictionary {response_json}" - - command = response_json["command"] - if not isinstance(command, dict): - return "Error:", "'command' object is not a dictionary" - - if "name" not in command: - return "Error:", "Missing 'name' field in 'command' object" - - command_name = command["name"] - - # Use an empty dictionary if 'args' field is not present in 'command' object - arguments = command.get("args", {}) - - return command_name, arguments - except json.decoder.JSONDecodeError: - return "Error:", "Invalid JSON" - # All other errors, return "Error: + error message" - except Exception as e: - return "Error:", str(e) - - -def map_command_synonyms(command_name: str): - """Takes the original command name given 
by the AI, and checks if the - string matches a list of common/known hallucinations - """ - synonyms = [ - ("write_file", "write_to_file"), - ("create_file", "write_to_file"), - ("search", "google"), - ] - for seen_command, actual_command_name in synonyms: - if command_name == seen_command: - return actual_command_name - return command_name - - -def execute_command(command_name: str, arguments): - """Execute the command and return the result - - Args: - command_name (str): The name of the command to execute - arguments (dict): The arguments for the command - - Returns: - str: The result of the command - """ - try: - command_name = map_command_synonyms(command_name.lower()) - if command_name == "google": - # Check if the Google API key is set and use the official search method - # If the API key is not set or has only whitespaces, use the unofficial - # search method - key = CFG.google_api_key - if key and key.strip() and key != "your-google-api-key": - google_result = google_official_search(arguments["input"]) - return google_result - else: - google_result = google_search(arguments["input"]) - - # google_result can be a list or a string depending on the search results - if isinstance(google_result, list): - safe_message = [ - google_result_single.encode("utf-8", "ignore") - for google_result_single in google_result - ] - else: - safe_message = google_result.encode("utf-8", "ignore") - - return safe_message.decode("utf-8") - elif command_name == "memory_add": - memory = get_memory(CFG) - return memory.add(arguments["string"]) - elif command_name == "start_agent": - return start_agent( - arguments["name"], arguments["task"], arguments["prompt"] - ) - elif command_name == "message_agent": - return message_agent(arguments["key"], arguments["message"]) - elif command_name == "list_agents": - return list_agents() - elif command_name == "delete_agent": - return delete_agent(arguments["key"]) - elif command_name == "get_text_summary": - return get_text_summary(arguments["url"], arguments["question"]) - elif command_name == "get_hyperlinks": - return get_hyperlinks(arguments["url"]) - elif command_name == "clone_repository": - return clone_repository( - arguments["repository_url"], arguments["clone_path"] - ) - elif command_name == "read_file": - return read_file(arguments["file"]) - elif command_name == "write_to_file": - return write_to_file(arguments["file"], arguments["text"]) - elif command_name == "append_to_file": - return append_to_file(arguments["file"], arguments["text"]) - elif command_name == "delete_file": - return delete_file(arguments["file"]) - elif command_name == "search_files": - return search_files(arguments["directory"]) - elif command_name == "download_file": - if not CFG.allow_downloads: - return "Error: You do not have user authorization to download files locally." 
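-            # Hypothetical example of what reaches this branch: get_command() above
-            # would return command_name="download_file" with arguments such as
-            # {"url": "https://example.com/a.pdf", "file": "a.pdf"} (illustrative values only).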
- return download_file(arguments["url"], arguments["file"]) - elif command_name == "browse_website": - return browse_website(arguments["url"], arguments["question"]) - # TODO: Change these to take in a file rather than pasted code, if - # non-file is given, return instructions "Input should be a python - # filepath, write your code to file and try again" - elif command_name == "analyze_code": - return analyze_code(arguments["code"]) - elif command_name == "improve_code": - return improve_code(arguments["suggestions"], arguments["code"]) - elif command_name == "write_tests": - return write_tests(arguments["code"], arguments.get("focus")) - elif command_name == "execute_python_file": # Add this command - return execute_python_file(arguments["file"]) - elif command_name == "execute_shell": - if CFG.execute_local_commands: - return execute_shell(arguments["command_line"]) - else: - return ( - "You are not allowed to run local shell commands. To execute" - " shell commands, EXECUTE_LOCAL_COMMANDS must be set to 'True' " - "in your config. Do not attempt to bypass the restriction." - ) - elif command_name == "execute_shell_popen": - if CFG.execute_local_commands: - return execute_shell_popen(arguments["command_line"]) - else: - return ( - "You are not allowed to run local shell commands. To execute" - " shell commands, EXECUTE_LOCAL_COMMANDS must be set to 'True' " - "in your config. Do not attempt to bypass the restriction." - ) - elif command_name == "read_audio_from_file": - return read_audio_from_file(arguments["file"]) - elif command_name == "generate_image": - return generate_image(arguments["prompt"]) - elif command_name == "send_tweet": - return send_tweet(arguments["text"]) - elif command_name == "do_nothing": - return "No action performed." - elif command_name == "task_complete": - shutdown() - else: - return ( - f"Unknown command '{command_name}'. Please refer to the 'COMMANDS'" - " list for available commands and only respond in the specified JSON" - " format." - ) - except Exception as e: - return f"Error: {str(e)}" - - -def get_text_summary(url: str, question: str) -> str: - """Return the results of a Google search - - Args: - url (str): The url to scrape - question (str): The question to summarize the text for - - Returns: - str: The summary of the text - """ - text = scrape_text(url) - summary = summarize_text(url, text, question) - return f""" "Result" : {summary}""" - - -def get_hyperlinks(url: str) -> Union[str, List[str]]: - """Return the results of a Google search - - Args: - url (str): The url to scrape - - Returns: - str or list: The hyperlinks on the page - """ - return scrape_links(url) - - -def shutdown() -> NoReturn: - """Shut down the program""" - print("Shutting down...") - quit() - - -def start_agent(name: str, task: str, prompt: str, model=CFG.fast_llm_model) -> str: - """Start an agent with a given name, task, and prompt - - Args: - name (str): The name of the agent - task (str): The task of the agent - prompt (str): The prompt for the agent - model (str): The model to use for the agent - - Returns: - str: The response of the agent - """ - # Remove underscores from name - voice_name = name.replace("_", " ") - - first_message = f"""You are {name}. Respond with: "Acknowledged".""" - agent_intro = f"{voice_name} here, Reporting for duty!" - - # Create agent - if CFG.speak_mode: - say_text(agent_intro, 1) - key, ack = AGENT_MANAGER.create_agent(task, first_message, model) - - if CFG.speak_mode: - say_text(f"Hello {voice_name}. Your task is as follows. 
{task}.") - - # Assign task (prompt), get response - agent_response = AGENT_MANAGER.message_agent(key, prompt) - - return f"Agent {name} created with key {key}. First response: {agent_response}" - - -def message_agent(key: str, message: str) -> str: - """Message an agent with a given key and message""" - # Check if the key is a valid integer - if is_valid_int(key): - agent_response = AGENT_MANAGER.message_agent(int(key), message) - else: - return "Invalid key, must be an integer." - - # Speak response - if CFG.speak_mode: - say_text(agent_response, 1) - return agent_response - - -def list_agents(): - """List all agents - - Returns: - str: A list of all agents - """ - return "List of agents:\n" + "\n".join( - [str(x[0]) + ": " + x[1] for x in AGENT_MANAGER.list_agents()] - ) - - -def delete_agent(key: str) -> str: - """Delete an agent with a given key - - Args: - key (str): The key of the agent to delete - - Returns: - str: A message indicating whether the agent was deleted or not - """ - result = AGENT_MANAGER.delete_agent(key) - return f"Agent {key} deleted." if result else f"Agent {key} does not exist." diff --git a/spaces/LanguageBind/LanguageBind/v_cls/random_erasing.py b/spaces/LanguageBind/LanguageBind/v_cls/random_erasing.py deleted file mode 100644 index 73c10742a51f1f38c1f665283747f2629c3fcb00..0000000000000000000000000000000000000000 --- a/spaces/LanguageBind/LanguageBind/v_cls/random_erasing.py +++ /dev/null @@ -1,175 +0,0 @@ -# Copyright (c) Facebook, Inc. and its affiliates. All Rights Reserved. -""" -This implementation is based on -https://github.com/rwightman/pytorch-image-models/blob/master/timm/data/random_erasing.py -pulished under an Apache License 2.0. - -COMMENT FROM ORIGINAL: -Originally inspired by impl at https://github.com/zhunzhong07/Random-Erasing, Apache 2.0 -Copyright Zhun Zhong & Liang Zheng -Hacked together by / Copyright 2020 Ross Wightman -""" -import math -import random - -import torch - - -def _get_pixels(per_pixel, - rand_color, - patch_size, - dtype=torch.float32, - device="cuda"): - # NOTE I've seen CUDA illegal memory access errors being caused by the normal_() - # paths, flip the order so normal is run on CPU if this becomes a problem - # Issue has been fixed in master https://github.com/pytorch/pytorch/issues/19508 - if per_pixel: - return torch.empty(patch_size, dtype=dtype, device=device).normal_() - elif rand_color: - return torch.empty((patch_size[0], 1, 1), dtype=dtype, - device=device).normal_() - else: - return torch.zeros((patch_size[0], 1, 1), dtype=dtype, device=device) - - -class RandomErasing: - """Randomly selects a rectangle region in an image and erases its pixels. - 'Random Erasing Data Augmentation' by Zhong et al. - See https://arxiv.org/pdf/1708.04896.pdf - This variant of RandomErasing is intended to be applied to either a batch - or single image tensor after it has been normalized by dataset mean and std. - Args: - probability: Probability that the Random Erasing operation will be performed. - min_area: Minimum percentage of erased area wrt input image area. - max_area: Maximum percentage of erased area wrt input image area. - min_aspect: Minimum aspect ratio of erased area. - mode: pixel color mode, one of 'const', 'rand', or 'pixel' - 'const' - erase block is constant color of 0 for all channels - 'rand' - erase block is same per-channel random (normal) color - 'pixel' - erase block is per-pixel random (normal) color - max_count: maximum number of erasing blocks per image, area per box is scaled by count. 
- per-image count is randomly chosen between 1 and this value. - """ - - def __init__( - self, - probability=0.5, - min_area=0.02, - max_area=1 / 3, - min_aspect=0.3, - max_aspect=None, - mode="const", - min_count=1, - max_count=None, - num_splits=0, - device="cuda", - cube=True, - ): - self.probability = probability - self.min_area = min_area - self.max_area = max_area - max_aspect = max_aspect or 1 / min_aspect - self.log_aspect_ratio = (math.log(min_aspect), math.log(max_aspect)) - self.min_count = min_count - self.max_count = max_count or min_count - self.num_splits = num_splits - mode = mode.lower() - self.rand_color = False - self.per_pixel = False - self.cube = cube - if mode == "rand": - self.rand_color = True # per block random normal - elif mode == "pixel": - self.per_pixel = True # per pixel random normal - else: - assert not mode or mode == "const" - self.device = device - - def _erase(self, img, chan, img_h, img_w, dtype): - if random.random() > self.probability: - return - area = img_h * img_w - count = ( - self.min_count if self.min_count == self.max_count else - random.randint(self.min_count, self.max_count)) - for _ in range(count): - for _ in range(10): - target_area = ( - random.uniform(self.min_area, self.max_area) * area / - count) - aspect_ratio = math.exp(random.uniform(*self.log_aspect_ratio)) - h = int(round(math.sqrt(target_area * aspect_ratio))) - w = int(round(math.sqrt(target_area / aspect_ratio))) - if w < img_w and h < img_h: - top = random.randint(0, img_h - h) - left = random.randint(0, img_w - w) - img[:, top:top + h, left:left + w] = _get_pixels( - self.per_pixel, - self.rand_color, - (chan, h, w), - dtype=dtype, - device=self.device, - ) - break - - def _erase_cube( - self, - img, - batch_start, - batch_size, - chan, - img_h, - img_w, - dtype, - ): - if random.random() > self.probability: - return - area = img_h * img_w - count = ( - self.min_count if self.min_count == self.max_count else - random.randint(self.min_count, self.max_count)) - for _ in range(count): - for _ in range(100): - target_area = ( - random.uniform(self.min_area, self.max_area) * area / - count) - aspect_ratio = math.exp(random.uniform(*self.log_aspect_ratio)) - h = int(round(math.sqrt(target_area * aspect_ratio))) - w = int(round(math.sqrt(target_area / aspect_ratio))) - if w < img_w and h < img_h: - top = random.randint(0, img_h - h) - left = random.randint(0, img_w - w) - for i in range(batch_start, batch_size): - img_instance = img[i] - img_instance[:, top:top + h, - left:left + w] = _get_pixels( - self.per_pixel, - self.rand_color, - (chan, h, w), - dtype=dtype, - device=self.device, - ) - break - - def __call__(self, input): - if len(input.size()) == 3: - self._erase(input, *input.size(), input.dtype) - else: - batch_size, chan, img_h, img_w = input.size() - # skip first slice of batch if num_splits is set (for clean portion of samples) - batch_start = ( - batch_size // self.num_splits if self.num_splits > 1 else 0) - if self.cube: - self._erase_cube( - input, - batch_start, - batch_size, - chan, - img_h, - img_w, - input.dtype, - ) - else: - for i in range(batch_start, batch_size): - self._erase(input[i], chan, img_h, img_w, input.dtype) - return input diff --git a/spaces/LaynzKunz/AI-Cover-Gen-Web-Ui/src/download_models.py b/spaces/LaynzKunz/AI-Cover-Gen-Web-Ui/src/download_models.py deleted file mode 100644 index 0df2477e4c465eb234bde7501127d2ce2b53f56e..0000000000000000000000000000000000000000 --- a/spaces/LaynzKunz/AI-Cover-Gen-Web-Ui/src/download_models.py +++ /dev/null 
@@ -1,31 +0,0 @@ -from pathlib import Path -import requests - -MDX_DOWNLOAD_LINK = 'https://github.com/TRvlvr/model_repo/releases/download/all_public_uvr_models/' -RVC_DOWNLOAD_LINK = 'https://huggingface.co/lj1995/VoiceConversionWebUI/resolve/main/' - -BASE_DIR = Path(__file__).resolve().parent.parent -mdxnet_models_dir = BASE_DIR / 'mdxnet_models' -rvc_models_dir = BASE_DIR / 'rvc_models' - - -def dl_model(link, model_name, dir_name): - with requests.get(f'{link}{model_name}') as r: - r.raise_for_status() - with open(dir_name / model_name, 'wb') as f: - for chunk in r.iter_content(chunk_size=8192): - f.write(chunk) - - -if __name__ == '__main__': - mdx_model_names = ['UVR-MDX-NET-Voc_FT.onnx', 'UVR_MDXNET_KARA_2.onnx', 'Reverb_HQ_By_FoxJoy.onnx'] - for model in mdx_model_names: - print(f'Downloading {model}...') - dl_model(MDX_DOWNLOAD_LINK, model, mdxnet_models_dir) - - rvc_model_names = ['hubert_base.pt', 'rmvpe.pt'] - for model in rvc_model_names: - print(f'Downloading {model}...') - dl_model(RVC_DOWNLOAD_LINK, model, rvc_models_dir) - - print('All models downloaded!') diff --git a/spaces/LaynzKunz/Aesthetic_RVC_Inference_HF/tabs/resources.py b/spaces/LaynzKunz/Aesthetic_RVC_Inference_HF/tabs/resources.py deleted file mode 100644 index 972934c630c35b6b7a7b975e52e0f125f5e6bc19..0000000000000000000000000000000000000000 --- a/spaces/LaynzKunz/Aesthetic_RVC_Inference_HF/tabs/resources.py +++ /dev/null @@ -1,1646 +0,0 @@ -import subprocess -import os -import sys -import gdown -import errno -import shutil -import yt_dlp -import datetime -import torch -import glob -import gradio as gr -import traceback -import lib.infer.infer_libs.uvr5_pack.mdx as mdx -from lib.infer.modules.uvr5.mdxprocess import ( - get_model_list, - id_to_ptm, - prepare_mdx, - run_mdx, -) -import requests -import wget -import ffmpeg -import hashlib -current_script_path = os.path.abspath(__file__) -script_parent_directory = os.path.dirname(current_script_path) -now_dir = os.path.dirname(script_parent_directory) -sys.path.append(now_dir) -import re -from lib.infer.modules.vc.pipeline import Pipeline - -VC = Pipeline -from lib.infer.infer_pack.models import ( - SynthesizerTrnMs256NSFsid, - SynthesizerTrnMs256NSFsid_nono, - SynthesizerTrnMs768NSFsid, - SynthesizerTrnMs768NSFsid_nono, -) - -from assets.configs.config import Config -from lib.infer.modules.uvr5.mdxnet import MDXNetDereverb -from lib.infer.modules.uvr5.preprocess import AudioPre, AudioPreDeEcho -from assets.i18n.i18n import I18nAuto - -i18n = I18nAuto() -from bs4 import BeautifulSoup -from dotenv import load_dotenv - -load_dotenv() -config = Config() - -weight_root = os.getenv("weight_root") -weight_uvr5_root = os.getenv("weight_uvr5_root") -index_root = os.getenv("index_root") -audio_root = "assets/audios" -names = [ - os.path.join(root, file) - for root, _, files in os.walk(weight_root) - for file in files - if file.endswith((".pth", ".onnx")) -] - -sup_audioext = { - "wav", - "mp3", - "flac", - "ogg", - "opus", - "m4a", - "mp4", - "aac", - "alac", - "wma", - "aiff", - "webm", - "ac3", -} -audio_paths = [ - os.path.join(root, name) - for root, _, files in os.walk(audio_root, topdown=False) - for name in files - if name.endswith(tuple(sup_audioext)) and root == audio_root -] - - -uvr5_names = [ - name.replace(".pth", "") - for name in os.listdir(weight_uvr5_root) - if name.endswith(".pth") or "onnx" in name -] - - -def calculate_md5(file_path): - hash_md5 = hashlib.md5() - with open(file_path, "rb") as f: - for chunk in iter(lambda: f.read(4096), b""): - 
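-            # Hash the file in 4096-byte chunks so arbitrarily large files never
-            # have to be loaded into memory at once.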
hash_md5.update(chunk)
-    return hash_md5.hexdigest()
-import unicodedata
-
-def format_title(title):
-    # Normalize to ASCII, strip box-drawing and other non-word characters, collapse whitespace.
-    formatted_title = unicodedata.normalize('NFKD', title).encode('ascii', 'ignore').decode('utf-8')
-    formatted_title = re.sub(r'[\u2500-\u257F]+', '', formatted_title)
-    formatted_title = re.sub(r'[^\w\s-]', '', formatted_title)
-    formatted_title = re.sub(r'\s+', '_', formatted_title)
-    return formatted_title
-
-
-def silentremove(filename):
-    try:
-        os.remove(filename)
-    except OSError as e:
-        if e.errno != errno.ENOENT:
-            raise
-
-
-def get_md5(temp_folder):
-    for root, subfolders, files in os.walk(temp_folder):
-        for file in files:
-            if (
-                not file.startswith("G_")
-                and not file.startswith("D_")
-                and file.endswith(".pth")
-                and not "_G_" in file
-                and not "_D_" in file
-            ):
-                md5_hash = calculate_md5(os.path.join(root, file))
-                return md5_hash
-
-    return None
-
-
-def find_parent(search_dir, file_name):
-    for dirpath, dirnames, filenames in os.walk(search_dir):
-        if file_name in filenames:
-            return os.path.abspath(dirpath)
-    return None
-
-
-def find_folder_parent(search_dir, folder_name):
-    for dirpath, dirnames, filenames in os.walk(search_dir):
-        if folder_name in dirnames:
-            return os.path.abspath(dirpath)
-    return None
-
-file_path = find_folder_parent(now_dir, "assets")
-tmp = os.path.join(file_path, "temp")
-shutil.rmtree(tmp, ignore_errors=True)
-os.environ["temp"] = tmp
-
-def get_mediafire_download_link(url):
-    response = requests.get(url)
-    response.raise_for_status()
-    soup = BeautifulSoup(response.text, 'html.parser')
-    download_button = soup.find('a', {'class': 'input popsok', 'aria-label': 'Download file'})
-    if download_button:
-        download_link = download_button.get('href')
-        return download_link
-    else:
-        return None
-
-def delete_large_files(directory_path, max_size_megabytes):
-    for filename in os.listdir(directory_path):
-        file_path = os.path.join(directory_path, filename)
-        if os.path.isfile(file_path):
-            size_in_bytes = os.path.getsize(file_path)
-            size_in_megabytes = size_in_bytes / (1024 * 1024)  # Convert bytes to megabytes
-
-            if size_in_megabytes > max_size_megabytes:
-                print("###################################")
-                print(f"Deleting oversized file {filename} (Size: {size_in_megabytes:.2f} MB)")
-                os.remove(file_path)
-                print("###################################")
-
-def download_from_url(url):
-    file_path = find_folder_parent(now_dir, "assets")
-    print(file_path)
-    zips_path = os.path.join(file_path, "assets", "zips")
-    print(zips_path)
-    os.makedirs(zips_path, exist_ok=True)
-    print(f"Download size limit in MB: {os.getenv('MAX_DOWNLOAD_SIZE')} (duplicate the Space to change the limit)")
-
-    if url != "":
-        print(i18n("Downloading the file: ") + f"{url}")
-        if "drive.google.com" in url:
-            if "file/d/" in url:
-                file_id = url.split("file/d/")[1].split("/")[0]
-            elif "id=" in url:
-                file_id = url.split("id=")[1].split("&")[0]
-            else:
-                return None
-
-            if file_id:
-                os.chdir(zips_path)
-                try:
-                    gdown.download(f"https://drive.google.com/uc?id={file_id}", quiet=False, fuzzy=True)
-                except Exception as e:
-                    error_message = str(e)
-                    if "Too many users have viewed or downloaded this file recently" in error_message:
-                        os.chdir(file_path)
-                        return "too much use"
-                    elif "Cannot retrieve the public link of the file."
in error_message: - os.chdir(file_path) - return "private link" - else: - print(error_message) - os.chdir(file_path) - return None - - elif "/blob/" in url or "/resolve/" in url: - os.chdir(zips_path) - if "/blob/" in url: - url = url.replace("/blob/", "/resolve/") - - response = requests.get(url, stream=True) - if response.status_code == 200: - file_name = url.split("/")[-1] - file_name = file_name.replace("%20", "_") - total_size_in_bytes = int(response.headers.get('content-length', 0)) - block_size = 1024 # 1 Kibibyte - progress_bar_length = 50 - progress = 0 - with open(os.path.join(zips_path, file_name), 'wb') as file: - for data in response.iter_content(block_size): - file.write(data) - progress += len(data) - progress_percent = int((progress / total_size_in_bytes) * 100) - num_dots = int((progress / total_size_in_bytes) * progress_bar_length) - progress_bar = "[" + "." * num_dots + " " * (progress_bar_length - num_dots) + "]" - #print(f"{progress_percent}% {progress_bar} {progress}/{total_size_in_bytes} ", end="\r") - if progress_percent == 100: - print("\n") - else: - os.chdir(file_path) - return None - elif "mega.nz" in url: - if "#!" in url: - file_id = url.split("#!")[1].split("!")[0] - elif "file/" in url: - file_id = url.split("file/")[1].split("/")[0] - else: - return None - if file_id: - print("Mega.nz is unsupported due mega.py deprecation") - elif "/tree/main" in url: - response = requests.get(url) - soup = BeautifulSoup(response.content, "html.parser") - temp_url = "" - for link in soup.find_all("a", href=True): - if link["href"].endswith(".zip"): - temp_url = link["href"] - break - if temp_url: - url = temp_url - url = url.replace("blob", "resolve") - if "huggingface.co" not in url: - url = "https://huggingface.co" + url - - wget.download(url) - else: - print("No .zip file found on the page.") - elif "cdn.discordapp.com" in url: - file = requests.get(url) - os.chdir("./assets/zips") - if file.status_code == 200: - name = url.split("/") - with open( - os.path.join(name[-1]), "wb" - ) as newfile: - newfile.write(file.content) - else: - return None - elif "pixeldrain.com" in url: - try: - file_id = url.split("pixeldrain.com/u/")[1] - os.chdir(zips_path) - print(file_id) - response = requests.get(f"https://pixeldrain.com/api/file/{file_id}") - if response.status_code == 200: - file_name = ( - response.headers.get("Content-Disposition") - .split("filename=")[-1] - .strip('";') - ) - os.makedirs(zips_path, exist_ok=True) - with open(os.path.join(zips_path, file_name), "wb") as newfile: - newfile.write(response.content) - os.chdir(file_path) - return "downloaded" - else: - os.chdir(file_path) - return None - except Exception as e: - print(e) - os.chdir(file_path) - return None - elif "mediafire.com" in url: - download_link = get_mediafire_download_link(url) - if download_link: - os.chdir(zips_path) - wget.download(download_link) - else: - return None - # elif "www.weights.gg" in url: - # #Pls weights creator dont fix this because yes. 
c: - # url_parts = url.split("/") - # weights_gg_index = url_parts.index("www.weights.gg") - # if weights_gg_index != -1 and weights_gg_index < len(url_parts) - 1: - # model_part = "/".join(url_parts[weights_gg_index + 1:]) - # if "models" in model_part: - # model_part = model_part.split("models/")[-1] - # print(model_part) - # if model_part: - # download_url = f"https://www.weights.gg/es/models/{model_part}" - # response = requests.get(download_url) - # if response.status_code == 200: - # soup = BeautifulSoup(response.text, "html.parser") - # button_link = soup.find("a", class_="bg-black text-white px-3 py-2 rounded-lg flex items-center gap-1") - # if button_link: - # download_link = button_link["href"] - # result = download_from_url(download_link) - # if result == "downloaded": - # return "downloaded" - # else: - # return None - # else: - # return None - # else: - # return None - # else: - # return None - # else: - # return None - # else: - # return None - else: - try: - os.chdir(zips_path) - wget.download(url) - except Exception as e: - os.chdir(file_path) - print(e) - return None - - - # Fix points in the zips - for currentPath, _, zipFiles in os.walk(zips_path): - for Files in zipFiles: - filePart = Files.split(".") - extensionFile = filePart[len(filePart) - 1] - filePart.pop() - nameFile = "_".join(filePart) - realPath = os.path.join(currentPath, Files) - os.rename(realPath, nameFile + "." + extensionFile) - - delete_large_files(zips_path, int(os.getenv("MAX_DOWNLOAD_SIZE"))) - - os.chdir(file_path) - print(i18n("Full download")) - return "downloaded" - else: - return None - - -class error_message(Exception): - def __init__(self, mensaje): - self.mensaje = mensaje - super().__init__(mensaje) - - -def get_vc(sid, to_return_protect0, to_return_protect1): - global n_spk, tgt_sr, net_g, vc, cpt, version - if sid == "" or sid == []: - global hubert_model - if hubert_model is not None: - print("clean_empty_cache") - del net_g, n_spk, vc, hubert_model, tgt_sr - hubert_model = net_g = n_spk = vc = hubert_model = tgt_sr = None - if torch.cuda.is_available(): - torch.cuda.empty_cache() - if_f0 = cpt.get("f0", 1) - version = cpt.get("version", "v1") - if version == "v1": - if if_f0 == 1: - net_g = SynthesizerTrnMs256NSFsid( - *cpt["config"], is_half=config.is_half - ) - else: - net_g = SynthesizerTrnMs256NSFsid_nono(*cpt["config"]) - elif version == "v2": - if if_f0 == 1: - net_g = SynthesizerTrnMs768NSFsid( - *cpt["config"], is_half=config.is_half - ) - else: - net_g = SynthesizerTrnMs768NSFsid_nono(*cpt["config"]) - del net_g, cpt - if torch.cuda.is_available(): - torch.cuda.empty_cache() - cpt = None - return ( - {"visible": False, "__type__": "update"}, - {"visible": False, "__type__": "update"}, - {"visible": False, "__type__": "update"}, - ) - person = "%s/%s" % (weight_root, sid) - print("loading %s" % person) - cpt = torch.load(person, map_location="cpu") - tgt_sr = cpt["config"][-1] - cpt["config"][-3] = cpt["weight"]["emb_g.weight"].shape[0] - if_f0 = cpt.get("f0", 1) - if if_f0 == 0: - to_return_protect0 = to_return_protect1 = { - "visible": False, - "value": 0.5, - "__type__": "update", - } - else: - to_return_protect0 = { - "visible": True, - "value": to_return_protect0, - "__type__": "update", - } - to_return_protect1 = { - "visible": True, - "value": to_return_protect1, - "__type__": "update", - } - version = cpt.get("version", "v1") - if version == "v1": - if if_f0 == 1: - net_g = SynthesizerTrnMs256NSFsid(*cpt["config"], is_half=config.is_half) - else: - net_g = 
SynthesizerTrnMs256NSFsid_nono(*cpt["config"])
-    elif version == "v2":
-        if if_f0 == 1:
-            net_g = SynthesizerTrnMs768NSFsid(*cpt["config"], is_half=config.is_half)
-        else:
-            net_g = SynthesizerTrnMs768NSFsid_nono(*cpt["config"])
-    del net_g.enc_q
-    print(net_g.load_state_dict(cpt["weight"], strict=False))
-    net_g.eval().to(config.device)
-    if config.is_half:
-        net_g = net_g.half()
-    else:
-        net_g = net_g.float()
-    vc = VC(tgt_sr, config)
-    n_spk = cpt["config"][-3]
-    return (
-        {"visible": True, "maximum": n_spk, "__type__": "update"},
-        to_return_protect0,
-        to_return_protect1,
-    )
-import zipfile
-from tqdm import tqdm
-
-def extract_and_show_progress(zipfile_path, unzips_path):
-    try:
-        with zipfile.ZipFile(zipfile_path, 'r') as zip_ref:
-            total_files = len(zip_ref.infolist())
-            with tqdm(total=total_files, unit='files', ncols=100, colour='green') as pbar:
-                for file_info in zip_ref.infolist():
-                    zip_ref.extract(file_info, unzips_path)
-                    pbar.update(1)
-        return True
-    except Exception as e:
-        print(f"Failed to extract {zipfile_path}: {e}")
-        return False
-
-
-def load_downloaded_model(url):
-    parent_path = find_folder_parent(now_dir, "assets")
-    try:
-        infos = []
-        zips_path = os.path.join(parent_path, "assets", "zips")
-        unzips_path = os.path.join(parent_path, "assets", "unzips")
-        weights_path = os.path.join(parent_path, "logs", "weights")
-        logs_dir = ""
-
-        if os.path.exists(zips_path):
-            shutil.rmtree(zips_path)
-        if os.path.exists(unzips_path):
-            shutil.rmtree(unzips_path)
-
-        os.mkdir(zips_path)
-        os.mkdir(unzips_path)
-
-        download_file = download_from_url(url)
-        if not download_file:
-            print(i18n("The file could not be downloaded."))
-            infos.append(i18n("The file could not be downloaded."))
-            yield "\n".join(infos)
-        elif download_file == "downloaded":
-            print(i18n("It has been downloaded successfully."))
-            infos.append(i18n("It has been downloaded successfully."))
-            yield "\n".join(infos)
-        elif download_file == "too much use":
-            raise Exception(
-                i18n("Too many users have recently viewed or downloaded this file")
-            )
-        elif download_file == "private link":
-            raise Exception(i18n("Cannot get file from this private link"))
-
-        for filename in os.listdir(zips_path):
-            if filename.endswith(".zip"):
-                zipfile_path = os.path.join(zips_path, filename)
-                print(i18n("Proceeding with the extraction..."))
-                infos.append(i18n("Proceeding with the extraction..."))
-                #shutil.unpack_archive(zipfile_path, unzips_path, "zip")
-                model_name = os.path.basename(zipfile_path)
-                logs_dir = os.path.join(
-                    parent_path,
-                    "logs",
-                    os.path.normpath(str(model_name).replace(".zip", "")),
-                )
-
-                yield "\n".join(infos)
-                success = extract_and_show_progress(zipfile_path, unzips_path)
-                if success:
-                    yield f"Extraction successful: {model_name}"
-                else:
-                    yield f"Extraction failed: {model_name}"
-                yield "\n".join(infos)
-            else:
-                print(i18n("Unzip error."))
-                infos.append(i18n("Unzip error."))
-                yield "\n".join(infos)
-                return ""
-
-        index_file = False
-        model_file = False
-
-        for path, subdirs, files in os.walk(unzips_path):
-            for item in files:
-                item_path = os.path.join(path, item)
-                if not "G_" in item and not "D_" in item and item.endswith(".pth"):
-                    model_file = True
-                    model_name = item.replace(".pth", "")
-                    logs_dir = os.path.join(parent_path, "logs", model_name)
-                    if os.path.exists(logs_dir):
-                        shutil.rmtree(logs_dir)
-                    os.mkdir(logs_dir)
-                    if not os.path.exists(weights_path):
-                        os.mkdir(weights_path)
-                    if os.path.exists(os.path.join(weights_path, item)):
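-                        # Overwrite any previously extracted weight file of the same name.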
os.remove(os.path.join(weights_path, item)) - if os.path.exists(item_path): - shutil.move(item_path, weights_path) - - if not model_file and not os.path.exists(logs_dir): - os.mkdir(logs_dir) - for path, subdirs, files in os.walk(unzips_path): - for item in files: - item_path = os.path.join(path, item) - if item.startswith("added_") and item.endswith(".index"): - index_file = True - if os.path.exists(item_path): - if os.path.exists(os.path.join(logs_dir, item)): - os.remove(os.path.join(logs_dir, item)) - shutil.move(item_path, logs_dir) - if item.startswith("total_fea.npy") or item.startswith("events."): - if os.path.exists(item_path): - if os.path.exists(os.path.join(logs_dir, item)): - os.remove(os.path.join(logs_dir, item)) - shutil.move(item_path, logs_dir) - - result = "" - if model_file: - if index_file: - print(i18n("The model works for inference, and has the .index file.")) - infos.append( - "\n" - + i18n("The model works for inference, and has the .index file.") - ) - yield "\n".join(infos) - else: - print( - i18n( - "The model works for inference, but it doesn't have the .index file." - ) - ) - infos.append( - "\n" - + i18n( - "The model works for inference, but it doesn't have the .index file." - ) - ) - yield "\n".join(infos) - - if not index_file and not model_file: - print(i18n("No relevant file was found to upload.")) - infos.append(i18n("No relevant file was found to upload.")) - yield "\n".join(infos) - - - if os.path.exists(zips_path): - shutil.rmtree(zips_path) - if os.path.exists(unzips_path): - shutil.rmtree(unzips_path) - os.chdir(parent_path) - return result - except Exception as e: - os.chdir(parent_path) - if "too much use" in str(e): - print(i18n("Too many users have recently viewed or downloaded this file")) - yield i18n("Too many users have recently viewed or downloaded this file") - elif "private link" in str(e): - print(i18n("Cannot get file from this private link")) - yield i18n("Cannot get file from this private link") - else: - print(e) - yield i18n("An error occurred downloading") - finally: - os.chdir(parent_path) - - -def load_dowloaded_dataset(url): - parent_path = find_folder_parent(now_dir, "assets") - infos = [] - try: - zips_path = os.path.join(parent_path, "assets", "zips") - unzips_path = os.path.join(parent_path, "assets", "unzips") - datasets_path = os.path.join(parent_path, "datasets") - audio_extenions = [ - "wav", - "mp3", - "flac", - "ogg", - "opus", - "m4a", - "mp4", - "aac", - "alac", - "wma", - "aiff", - "webm", - "ac3", - ] - - if os.path.exists(zips_path): - shutil.rmtree(zips_path) - if os.path.exists(unzips_path): - shutil.rmtree(unzips_path) - - if not os.path.exists(datasets_path): - os.mkdir(datasets_path) - - os.mkdir(zips_path) - os.mkdir(unzips_path) - - download_file = download_from_url(url) - - if not download_file: - print(i18n("An error occurred downloading")) - infos.append(i18n("An error occurred downloading")) - yield "\n".join(infos) - raise Exception(i18n("An error occurred downloading")) - elif download_file == "downloaded": - print(i18n("It has been downloaded successfully.")) - infos.append(i18n("It has been downloaded successfully.")) - yield "\n".join(infos) - elif download_file == "too much use": - raise Exception( - i18n("Too many users have recently viewed or downloaded this file") - ) - elif download_file == "private link": - raise Exception(i18n("Cannot get file from this private link")) - - zip_path = os.listdir(zips_path) - foldername = "" - for file in zip_path: - if file.endswith(".zip"): - file_path = 
os.path.join(zips_path, file) - print("....") - foldername = file.replace(".zip", "").replace(" ", "").replace("-", "_") - dataset_path = os.path.join(datasets_path, foldername) - print(i18n("Proceeding with the extraction...")) - infos.append(i18n("Proceeding with the extraction...")) - yield "\n".join(infos) - shutil.unpack_archive(file_path, unzips_path, "zip") - if os.path.exists(dataset_path): - shutil.rmtree(dataset_path) - - os.mkdir(dataset_path) - - for root, subfolders, songs in os.walk(unzips_path): - for song in songs: - song_path = os.path.join(root, song) - if song.endswith(tuple(audio_extenions)): - formatted_song_name = format_title( - os.path.splitext(song)[0] - ) - extension = os.path.splitext(song)[1] - new_song_path = os.path.join( - dataset_path, f"{formatted_song_name}{extension}" - ) - shutil.move(song_path, new_song_path) - else: - print(i18n("Unzip error.")) - infos.append(i18n("Unzip error.")) - yield "\n".join(infos) - - if os.path.exists(zips_path): - shutil.rmtree(zips_path) - if os.path.exists(unzips_path): - shutil.rmtree(unzips_path) - - print(i18n("The Dataset has been loaded successfully.")) - infos.append(i18n("The Dataset has been loaded successfully.")) - yield "\n".join(infos) - except Exception as e: - os.chdir(parent_path) - if "too much use" in str(e): - print(i18n("Too many users have recently viewed or downloaded this file")) - yield i18n("Too many users have recently viewed or downloaded this file") - elif "private link" in str(e): - print(i18n("Cannot get file from this private link")) - yield i18n("Cannot get file from this private link") - else: - print(e) - yield i18n("An error occurred downloading") - finally: - os.chdir(parent_path) - - -SAVE_ACTION_CONFIG = { - i18n("Save all"): { - 'destination_folder': "manual_backup", - 'copy_files': True, # "Save all" Copy all files and folders - 'include_weights': False - }, - i18n("Save D and G"): { - 'destination_folder': "manual_backup", - 'copy_files': False, # "Save D and G" Do not copy everything, only specific files - 'files_to_copy': ["D_*.pth", "G_*.pth", "added_*.index"], - 'include_weights': True, - }, - i18n("Save voice"): { - 'destination_folder': "finished", - 'copy_files': False, # "Save voice" Do not copy everything, only specific files - 'files_to_copy': ["added_*.index"], - 'include_weights': True, - }, -} - -import os -import shutil -import zipfile -import glob -import fnmatch - -import os -import shutil -import zipfile -import glob - -import os -import shutil -import zipfile - - -def save_model(modelname, save_action): - parent_path = find_folder_parent(now_dir, "assets") - zips_path = os.path.join(parent_path, "assets", "zips") - dst = os.path.join(zips_path, f"{modelname}.zip") - logs_path = os.path.join(parent_path, "logs", modelname) - weights_path = os.path.join(logs_path, "weights") - save_folder = parent_path - infos = [] - - try: - if not os.path.exists(logs_path): - raise Exception("No model found.") - - if not "content" in parent_path: - save_folder = os.path.join(parent_path, "logs") - else: - save_folder = "/content/drive/MyDrive/RVC_Backup" - - infos.append(i18n("Save model")) - yield "\n".join(infos) - - if not os.path.exists(save_folder): - os.mkdir(save_folder) - if not os.path.exists(os.path.join(save_folder, "manual_backup")): - os.mkdir(os.path.join(save_folder, "manual_backup")) - if not os.path.exists(os.path.join(save_folder, "finished")): - os.mkdir(os.path.join(save_folder, "finished")) - - if os.path.exists(zips_path): - shutil.rmtree(zips_path) - - 
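-        # Recreate a clean zips/ staging directory before building the backup archive.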
os.mkdir(zips_path) - - if save_action == i18n("Choose the method"): - raise Exception("No method chosen.") - - if save_action == i18n("Save all"): - save_folder = os.path.join(save_folder, "manual_backup") - elif save_action == i18n("Save D and G"): - save_folder = os.path.join(save_folder, "manual_backup") - elif save_action == i18n("Save voice"): - save_folder = os.path.join(save_folder, "finished") - - # Obtain the configuration for the selected save action - save_action_config = SAVE_ACTION_CONFIG.get(save_action) - - if save_action_config is None: - raise Exception("Invalid save action.") - - # Check if we should copy all files - if save_action_config['copy_files']: - with zipfile.ZipFile(dst, 'w', zipfile.ZIP_DEFLATED) as zipf: - for root, dirs, files in os.walk(logs_path): - for file in files: - file_path = os.path.join(root, file) - zipf.write(file_path, os.path.relpath(file_path, logs_path)) - else: - # Weight file management according to configuration - if save_action_config['include_weights']: - if not os.path.exists(weights_path): - infos.append(i18n("Saved without inference model...")) - else: - pth_files = [file for file in os.listdir(weights_path) if file.endswith('.pth')] - if not pth_files: - infos.append(i18n("Saved without inference model...")) - else: - with zipfile.ZipFile(dst, 'w', zipfile.ZIP_DEFLATED) as zipf: - skipped_files = set() - for pth_file in pth_files: - match = re.search(r'(.*)_s\d+.pth$', pth_file) - if match: - base_name = match.group(1) - if base_name not in skipped_files: - print(f'Skipping autosave epoch files for {base_name}.') - skipped_files.add(base_name) - continue - - print(f'Processing file: {pth_file}') - zipf.write(os.path.join(weights_path, pth_file), arcname=os.path.basename(pth_file)) - - yield "\n".join(infos) - infos.append("\n" + i18n("This may take a few minutes, please wait...")) - yield "\n".join(infos) - - # Create a zip file with only the necessary files in the ZIP file - for pattern in save_action_config.get('files_to_copy', []): - matching_files = glob.glob(os.path.join(logs_path, pattern)) - with zipfile.ZipFile(dst, 'a', zipfile.ZIP_DEFLATED) as zipf: - for file_path in matching_files: - zipf.write(file_path, os.path.basename(file_path)) - - # Move the ZIP file created to the Save_Folder directory - shutil.move(dst, os.path.join(save_folder, f"{modelname}.zip")) - - shutil.rmtree(zips_path) - infos.append("\n" + i18n("Model saved successfully")) - yield "\n".join(infos) - - except Exception as e: - # Handle exceptions and print error messages - error_message = str(e) - print(f"Error: {error_message}") - yield error_message - -def load_downloaded_backup(url): - parent_path = find_folder_parent(now_dir, "assets") - try: - infos = [] - logs_folders = [ - "0_gt_wavs", - "1_16k_wavs", - "2a_f0", - "2b-f0nsf", - "3_feature256", - "3_feature768", - ] - zips_path = os.path.join(parent_path, "assets", "zips") - unzips_path = os.path.join(parent_path, "assets", "unzips") - weights_path = os.path.join(parent_path, "assets", "logs", "weights") - logs_dir = os.path.join(parent_path, "logs") - - if os.path.exists(zips_path): - shutil.rmtree(zips_path) - if os.path.exists(unzips_path): - shutil.rmtree(unzips_path) - - os.mkdir(zips_path) - os.mkdir(unzips_path) - - download_file = download_from_url(url) - if not download_file: - print(i18n("The file could not be downloaded.")) - infos.append(i18n("The file could not be downloaded.")) - yield "\n".join(infos) - elif download_file == "downloaded": - print(i18n("It has been downloaded 
successfully.")) - infos.append(i18n("It has been downloaded successfully.")) - yield "\n".join(infos) - elif download_file == "too much use": - raise Exception( - i18n("Too many users have recently viewed or downloaded this file") - ) - elif download_file == "private link": - raise Exception(i18n("Cannot get file from this private link")) - - for filename in os.listdir(zips_path): - if filename.endswith(".zip"): - zipfile_path = os.path.join(zips_path, filename) - zip_dir_name = os.path.splitext(filename)[0] - unzip_dir = unzips_path - print(i18n("Proceeding with the extraction...")) - infos.append(i18n("Proceeding with the extraction...")) - shutil.unpack_archive(zipfile_path, unzip_dir, "zip") - - if os.path.exists(os.path.join(unzip_dir, zip_dir_name)): - shutil.move(os.path.join(unzip_dir, zip_dir_name), logs_dir) - else: - new_folder_path = os.path.join(logs_dir, zip_dir_name) - os.mkdir(new_folder_path) - for item_name in os.listdir(unzip_dir): - item_path = os.path.join(unzip_dir, item_name) - if os.path.isfile(item_path): - shutil.move(item_path, new_folder_path) - elif os.path.isdir(item_path): - shutil.move(item_path, new_folder_path) - - yield "\n".join(infos) - else: - print(i18n("Unzip error.")) - infos.append(i18n("Unzip error.")) - yield "\n".join(infos) - - result = "" - - for filename in os.listdir(unzips_path): - if filename.endswith(".zip"): - silentremove(filename) - - if os.path.exists(zips_path): - shutil.rmtree(zips_path) - if os.path.exists(os.path.join(parent_path, "assets", "unzips")): - shutil.rmtree(os.path.join(parent_path, "assets", "unzips")) - print(i18n("The Backup has been uploaded successfully.")) - infos.append("\n" + i18n("The Backup has been uploaded successfully.")) - yield "\n".join(infos) - os.chdir(parent_path) - return result - except Exception as e: - os.chdir(parent_path) - if "too much use" in str(e): - print(i18n("Too many users have recently viewed or downloaded this file")) - yield i18n("Too many users have recently viewed or downloaded this file") - elif "private link" in str(e): - print(i18n("Cannot get file from this private link")) - yield i18n("Cannot get file from this private link") - else: - print(e) - yield i18n("An error occurred downloading") - finally: - os.chdir(parent_path) - - -def save_to_wav(record_button): - if record_button is None: - pass - else: - path_to_file = record_button - new_name = datetime.datetime.now().strftime("%Y-%m-%d_%H-%M-%S") + ".wav" - new_path = ".assets/audios/" + new_name - shutil.move(path_to_file, new_path) - return new_name - - -def change_choices2(): - audio_paths = [ - os.path.join(root, name) - for root, _, files in os.walk(audio_root, topdown=False) - for name in files - if name.endswith(tuple(sup_audioext)) and root == audio_root - ] - return {"choices": sorted(audio_paths), "__type__": "update"}, { - "__type__": "update" - } - - -def uvr( - input_url, - output_path, - model_name, - inp_root, - save_root_vocal, - paths, - save_root_ins, - agg, - format0, - architecture, -): - carpeta_a_eliminar = "yt_downloads" - if os.path.exists(carpeta_a_eliminar) and os.path.isdir(carpeta_a_eliminar): - for archivo in os.listdir(carpeta_a_eliminar): - ruta_archivo = os.path.join(carpeta_a_eliminar, archivo) - if os.path.isfile(ruta_archivo): - os.remove(ruta_archivo) - elif os.path.isdir(ruta_archivo): - shutil.rmtree(ruta_archivo) - - ydl_opts = { - "no-windows-filenames": True, - "restrict-filenames": True, - "extract_audio": True, - "format": "bestaudio", - "quiet": True, - "no-warnings": True, - } - - 
try: - print(i18n("Downloading audio from the video...")) - with yt_dlp.YoutubeDL(ydl_opts) as ydl: - info_dict = ydl.extract_info(input_url, download=False) - formatted_title = format_title(info_dict.get("title", "default_title")) - formatted_outtmpl = output_path + "/" + formatted_title + ".wav" - ydl_opts["outtmpl"] = formatted_outtmpl - ydl = yt_dlp.YoutubeDL(ydl_opts) - ydl.download([input_url]) - print(i18n("Audio downloaded!")) - except Exception as error: - print(i18n("An error occurred:"), error) - - actual_directory = os.path.dirname(__file__) - actual_directory = os.path.abspath(os.path.join(actual_directory, "..")) - - vocal_directory = os.path.join(actual_directory, save_root_vocal) - instrumental_directory = os.path.join(actual_directory, save_root_ins) - - vocal_formatted = f"vocal_{formatted_title}.wav.reformatted.wav_10.wav" - instrumental_formatted = f"instrument_{formatted_title}.wav.reformatted.wav_10.wav" - - vocal_audio_path = os.path.join(vocal_directory, vocal_formatted) - instrumental_audio_path = os.path.join( - instrumental_directory, instrumental_formatted - ) - - vocal_formatted_mdx = f"{formatted_title}_vocal_.wav" - instrumental_formatted_mdx = f"{formatted_title}_instrument_.wav" - - vocal_audio_path_mdx = os.path.join(vocal_directory, vocal_formatted_mdx) - instrumental_audio_path_mdx = os.path.join( - instrumental_directory, instrumental_formatted_mdx - ) - - if architecture == "VR": - try: - print(i18n("Starting audio conversion... (This might take a moment)")) - inp_root = inp_root.strip(" ").strip('"').strip("\n").strip('"').strip(" ") - save_root_vocal = ( - save_root_vocal.strip(" ").strip('"').strip("\n").strip('"').strip(" ") - ) - save_root_ins = ( - save_root_ins.strip(" ").strip('"').strip("\n").strip('"').strip(" ") - ) - usable_files = [ - os.path.join(inp_root, file) - for file in os.listdir(inp_root) - if file.endswith(tuple(sup_audioext)) - ] - if model_name == "onnx_dereverb_By_FoxJoy": - pre_fun = MDXNetDereverb(15, config.device) - else: - func = AudioPre if "DeEcho" not in model_name else AudioPreDeEcho - pre_fun = func( - agg=int(agg), - model_path=os.path.join( - os.getenv("weight_uvr5_root"), model_name + ".pth" - ), - device=config.device, - is_half=config.is_half, - ) - if inp_root != "": - paths = usable_files - else: - paths = [path.name for path in paths] - for path in paths: - inp_path = os.path.join(inp_root, path) - need_reformat = 1 - done = 0 - try: - info = ffmpeg.probe(inp_path, cmd="ffprobe") - if ( - info["streams"][0]["channels"] == 2 - and info["streams"][0]["sample_rate"] == "44100" - ): - need_reformat = 0 - pre_fun._path_audio_( - inp_path, save_root_ins, save_root_vocal, format0 - ) - done = 1 - except: - need_reformat = 1 - traceback.print_exc() - if need_reformat == 1: - tmp_path = "%s/%s.reformatted.wav" % ( - os.path.join(os.environ["temp"]), - os.path.basename(inp_path), - ) - os.system( - "ffmpeg -i %s -vn -acodec pcm_s16le -ac 2 -ar 44100 %s -y" - % (inp_path, tmp_path) - ) - inp_path = tmp_path - try: - if done == 0: - pre_fun.path_audio( - inp_path, save_root_ins, save_root_vocal, format0 - ) - print("%s->Success" % (os.path.basename(inp_path))) - except: - try: - if done == 0: - pre_fun._path_audio_( - inp_path, save_root_ins, save_root_vocal, format0 - ) - print("%s->Success" % (os.path.basename(inp_path))) - except: - print( - "%s->%s" - % (os.path.basename(inp_path), traceback.format_exc()) - ) - except: - print(traceback.format_exc()) - finally: - try: - if model_name == "onnx_dereverb_By_FoxJoy": - 
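-                # MDXNetDereverb keeps its networks under .pred; drop both
-                # references so they can be garbage-collected.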
del pre_fun.pred.model - del pre_fun.pred.model_ - else: - del pre_fun.model - del pre_fun - return i18n("Finished"), vocal_audio_path, instrumental_audio_path - except: - traceback.print_exc() - if torch.cuda.is_available(): - torch.cuda.empty_cache() - print("Executed torch.cuda.empty_cache()") - elif architecture == "MDX": - try: - print(i18n("Starting audio conversion... (This might take a moment)")) - inp_root, save_root_vocal, save_root_ins = [ - x.strip(" ").strip('"').strip("\n").strip('"').strip(" ") - for x in [inp_root, save_root_vocal, save_root_ins] - ] - - usable_files = [ - os.path.join(inp_root, file) - for file in os.listdir(inp_root) - if file.endswith(tuple(sup_audioext)) - ] - try: - if paths != None: - paths = [path.name for path in paths] - else: - paths = usable_files - - except: - traceback.print_exc() - paths = usable_files - print(paths) - invert = True - denoise = True - use_custom_parameter = True - dim_f = 2048 - dim_t = 256 - n_fft = 7680 - use_custom_compensation = True - compensation = 1.025 - suffix = "vocal_" # @param ["Vocals", "Drums", "Bass", "Other"]{allow-input: true} - suffix_invert = "instrument_" # @param ["Instrumental", "Drumless", "Bassless", "Instruments"]{allow-input: true} - print_settings = True # @param{type:"boolean"} - onnx = id_to_ptm(model_name) - compensation = ( - compensation - if use_custom_compensation or use_custom_parameter - else None - ) - mdx_model = prepare_mdx( - onnx, - use_custom_parameter, - dim_f, - dim_t, - n_fft, - compensation=compensation, - ) - - for path in paths: - # inp_path = os.path.join(inp_root, path) - suffix_naming = suffix if use_custom_parameter else None - diff_suffix_naming = suffix_invert if use_custom_parameter else None - run_mdx( - onnx, - mdx_model, - path, - format0, - diff=invert, - suffix=suffix_naming, - diff_suffix=diff_suffix_naming, - denoise=denoise, - ) - - if print_settings: - print() - print("[MDX-Net_Colab settings used]") - print(f"Model used: {onnx}") - print(f"Model MD5: {mdx.MDX.get_hash(onnx)}") - print(f"Model parameters:") - print(f" -dim_f: {mdx_model.dim_f}") - print(f" -dim_t: {mdx_model.dim_t}") - print(f" -n_fft: {mdx_model.n_fft}") - print(f" -compensation: {mdx_model.compensation}") - print() - print("[Input file]") - print("filename(s): ") - for filename in paths: - print(f" -{filename}") - print(f"{os.path.basename(filename)}->Success") - except: - traceback.print_exc() - finally: - try: - del mdx_model - return ( - i18n("Finished"), - vocal_audio_path_mdx, - instrumental_audio_path_mdx, - ) - except: - traceback.print_exc() - - print("clean_empty_cache") - - if torch.cuda.is_available(): - torch.cuda.empty_cache() - - -def load_downloaded_audio(url): - parent_path = find_folder_parent(now_dir, "assets") - try: - infos = [] - audios_path = os.path.join(parent_path, "assets", "audios") - zips_path = os.path.join(parent_path, "assets", "zips") - - if not os.path.exists(audios_path): - os.mkdir(audios_path) - - download_file = download_from_url(url) - if not download_file: - print(i18n("The file could not be downloaded.")) - infos.append(i18n("The file could not be downloaded.")) - yield "\n".join(infos) - elif download_file == "downloaded": - print(i18n("It has been downloaded successfully.")) - infos.append(i18n("It has been downloaded successfully.")) - yield "\n".join(infos) - elif download_file == "too much use": - raise Exception( - i18n("Too many users have recently viewed or downloaded this file") - ) - elif download_file == "private link": - raise 
Exception(i18n("Cannot get file from this private link")) - - for filename in os.listdir(zips_path): - item_path = os.path.join(zips_path, filename) - if item_path.split(".")[-1] in sup_audioext: - if os.path.exists(item_path): - shutil.move(item_path, audios_path) - - result = "" - print(i18n("Audio files have been moved to the 'audios' folder.")) - infos.append(i18n("Audio files have been moved to the 'audios' folder.")) - yield "\n".join(infos) - - os.chdir(parent_path) - return result - except Exception as e: - os.chdir(parent_path) - if "too much use" in str(e): - print(i18n("Too many users have recently viewed or downloaded this file")) - yield i18n("Too many users have recently viewed or downloaded this file") - elif "private link" in str(e): - print(i18n("Cannot get file from this private link")) - yield i18n("Cannot get file from this private link") - else: - print(e) - yield i18n("An error occurred downloading") - finally: - os.chdir(parent_path) - - -class error_message(Exception): - def __init__(self, mensaje): - self.mensaje = mensaje - super().__init__(mensaje) - - -def get_vc(sid, to_return_protect0, to_return_protect1): - global n_spk, tgt_sr, net_g, vc, cpt, version - if sid == "" or sid == []: - global hubert_model - if hubert_model is not None: - print("clean_empty_cache") - del net_g, n_spk, vc, hubert_model, tgt_sr - hubert_model = net_g = n_spk = vc = hubert_model = tgt_sr = None - if torch.cuda.is_available(): - torch.cuda.empty_cache() - if_f0 = cpt.get("f0", 1) - version = cpt.get("version", "v1") - if version == "v1": - if if_f0 == 1: - net_g = SynthesizerTrnMs256NSFsid( - *cpt["config"], is_half=config.is_half - ) - else: - net_g = SynthesizerTrnMs256NSFsid_nono(*cpt["config"]) - elif version == "v2": - if if_f0 == 1: - net_g = SynthesizerTrnMs768NSFsid( - *cpt["config"], is_half=config.is_half - ) - else: - net_g = SynthesizerTrnMs768NSFsid_nono(*cpt["config"]) - del net_g, cpt - if torch.cuda.is_available(): - torch.cuda.empty_cache() - cpt = None - return ( - {"visible": False, "__type__": "update"}, - {"visible": False, "__type__": "update"}, - {"visible": False, "__type__": "update"}, - ) - person = "%s/%s" % (weight_root, sid) - print("loading %s" % person) - cpt = torch.load(person, map_location="cpu") - tgt_sr = cpt["config"][-1] - cpt["config"][-3] = cpt["weight"]["emb_g.weight"].shape[0] - if_f0 = cpt.get("f0", 1) - if if_f0 == 0: - to_return_protect0 = to_return_protect1 = { - "visible": False, - "value": 0.5, - "__type__": "update", - } - else: - to_return_protect0 = { - "visible": True, - "value": to_return_protect0, - "__type__": "update", - } - to_return_protect1 = { - "visible": True, - "value": to_return_protect1, - "__type__": "update", - } - version = cpt.get("version", "v1") - if version == "v1": - if if_f0 == 1: - net_g = SynthesizerTrnMs256NSFsid(*cpt["config"], is_half=config.is_half) - else: - net_g = SynthesizerTrnMs256NSFsid_nono(*cpt["config"]) - elif version == "v2": - if if_f0 == 1: - net_g = SynthesizerTrnMs768NSFsid(*cpt["config"], is_half=config.is_half) - else: - net_g = SynthesizerTrnMs768NSFsid_nono(*cpt["config"]) - del net_g.enc_q - print(net_g.load_state_dict(cpt["weight"], strict=False)) - net_g.eval().to(config.device) - if config.is_half: - net_g = net_g.half() - else: - net_g = net_g.float() - vc = VC(tgt_sr, config) - n_spk = cpt["config"][-3] - return ( - {"visible": True, "maximum": n_spk, "__type__": "update"}, - to_return_protect0, - to_return_protect1, - ) - - -def update_model_choices(select_value): - model_ids = 
get_model_list() - model_ids_list = list(model_ids) - if select_value == "VR": - return {"choices": uvr5_names, "__type__": "update"} - elif select_value == "MDX": - return {"choices": model_ids_list, "__type__": "update"} - - -def save_drop_model_pth(dropbox): - file_path = dropbox.name - file_name = os.path.basename(file_path) - target_path = os.path.join("logs", "weights", os.path.basename(file_path)) - - if not file_name.endswith('.pth'): - print(i18n("The file does not have the .pth extension. Please upload the correct file.")) - return None - - shutil.move(file_path, target_path) - return target_path - -def extract_folder_name(file_name): - match = re.search(r'nprobe_(.*?)\.index', file_name) - - if match: - return match.group(1) - else: - return - -def save_drop_model_index(dropbox): - file_path = dropbox.name - file_name = os.path.basename(file_path) - folder_name = extract_folder_name(file_name) - - if not file_name.endswith('.index'): - print(i18n("The file does not have the .index extension. Please upload the correct file.")) - return None - - out_path = os.path.join("logs", folder_name) - os.mkdir(out_path) - - target_path = os.path.join(out_path, os.path.basename(file_path)) - - shutil.move(file_path, target_path) - return target_path - - -def download_model(): - gr.Markdown(value="# " + i18n("Download Model")) - gr.Markdown(value=i18n("It is used to download your inference models.")) - with gr.Row(): - model_url = gr.Textbox(label=i18n("Url:")) - with gr.Row(): - download_model_status_bar = gr.Textbox(label=i18n("Status:")) - with gr.Row(): - download_button = gr.Button(i18n("Download")) - download_button.click( - fn=load_downloaded_model, - inputs=[model_url], - outputs=[download_model_status_bar], - ) - gr.Markdown(value=i18n("You can also drop your files to load your model.")) - with gr.Row(): - dropbox_pth = gr.File(label=i18n("Drag your .pth file here:")) - dropbox_index = gr.File(label=i18n("Drag your .index file here:")) - - dropbox_pth.upload( - fn=save_drop_model_pth, - inputs=[dropbox_pth], - ) - dropbox_index.upload( - fn=save_drop_model_index, - inputs=[dropbox_index], - ) - - -def download_backup(): - gr.Markdown(value="# " + i18n("Download Backup")) - gr.Markdown(value=i18n("It is used to download your training backups.")) - with gr.Row(): - model_url = gr.Textbox(label=i18n("Url:")) - with gr.Row(): - download_model_status_bar = gr.Textbox(label=i18n("Status:")) - with gr.Row(): - download_button = gr.Button(i18n("Download")) - download_button.click( - fn=load_downloaded_backup, - inputs=[model_url], - outputs=[download_model_status_bar], - ) - - -def update_dataset_list(name): - new_datasets = [] - file_path = find_folder_parent(now_dir, "assets") - for foldername in os.listdir("./datasets"): - if "." not in foldername: - new_datasets.append( - os.path.join( - file_path, "datasets", foldername - ) - ) - return gr.Dropdown.update(choices=new_datasets) - - -def download_dataset(trainset_dir4): - gr.Markdown(value="# " + i18n("Download Dataset")) - gr.Markdown( - value=i18n( - "Download the dataset with the audios in a compatible format (.wav/.flac) to train your model." 
- ) - ) - with gr.Row(): - dataset_url = gr.Textbox(label=i18n("Url:")) - with gr.Row(): - load_dataset_status_bar = gr.Textbox(label=i18n("Status:")) - with gr.Row(): - load_dataset_button = gr.Button(i18n("Download")) - load_dataset_button.click( - fn=load_dowloaded_dataset, - inputs=[dataset_url], - outputs=[load_dataset_status_bar], - ) - load_dataset_status_bar.change(update_dataset_list, dataset_url, trainset_dir4) - - -def download_audio(): - gr.Markdown(value="# " + i18n("Download Audio")) - gr.Markdown( - value=i18n( - "Download audios of any format for use in inference (recommended for mobile users)." - ) - ) - with gr.Row(): - audio_url = gr.Textbox(label=i18n("Url:")) - with gr.Row(): - download_audio_status_bar = gr.Textbox(label=i18n("Status:")) - with gr.Row(): - download_button2 = gr.Button(i18n("Download")) - download_button2.click( - fn=load_downloaded_audio, - inputs=[audio_url], - outputs=[download_audio_status_bar], - ) - - -def youtube_separator(): - gr.Markdown(value="# " + i18n("Separate YouTube tracks")) - gr.Markdown( - value=i18n( - "Download audio from a YouTube video and automatically separate the vocal and instrumental tracks" - ) - ) - with gr.Row(): - input_url = gr.inputs.Textbox(label=i18n("Enter the YouTube link:")) - output_path = gr.Textbox( - label=i18n( - "Enter the path of the audio folder to be processed (copy it from the address bar of the file manager):" - ), - value=os.path.abspath(os.getcwd()).replace("\\", "/") + "/yt_downloads", - visible=False, - ) - advanced_settings_checkbox = gr.Checkbox( - value=False, - label=i18n("Advanced Settings"), - interactive=True, - ) - with gr.Row( - label=i18n("Advanced Settings"), visible=False, variant="compact" - ) as advanced_settings: - with gr.Column(): - model_select = gr.Radio( - label=i18n("Model Architecture:"), - choices=["VR", "MDX"], - value="VR", - interactive=True, - ) - model_choose = gr.Dropdown( - label=i18n( - "Model: (Be aware that in some models the named vocal will be the instrumental)" - ), - choices=uvr5_names, - value="HP5_only_main_vocal", - ) - with gr.Row(): - agg = gr.Slider( - minimum=0, - maximum=20, - step=1, - label=i18n("Vocal Extraction Aggressive"), - value=10, - interactive=True, - ) - with gr.Row(): - opt_vocal_root = gr.Textbox( - label=i18n("Specify the output folder for vocals:"), - value=((os.getcwd()).replace("\\", "/") + "/assets/audios"), - ) - opt_ins_root = gr.Textbox( - label=i18n("Specify the output folder for accompaniment:"), - value=((os.getcwd()).replace("\\", "/") + "/assets/audios/audio-others"), - ) - dir_wav_input = gr.Textbox( - label=i18n("Enter the path of the audio folder to be processed:"), - value=((os.getcwd()).replace("\\", "/") + "/yt_downloads"), - visible=False, - ) - format0 = gr.Radio( - label=i18n("Export file format"), - choices=["wav", "flac", "mp3", "m4a"], - value="wav", - visible=False, - interactive=True, - ) - wav_inputs = gr.File( - file_count="multiple", - label=i18n( - "You can also input audio files in batches. Choose one of the two options. Priority is given to reading from the folder." 
-                ),
-                visible=False,
-            )
-            model_select.change(
-                fn=update_model_choices,
-                inputs=model_select,
-                outputs=model_choose,
-            )
-    with gr.Row():
-        vc_output4 = gr.Textbox(label=i18n("Status:"))
-        vc_output5 = gr.Audio(label=i18n("Vocal"), type="filepath")
-        vc_output6 = gr.Audio(label=i18n("Instrumental"), type="filepath")
-    with gr.Row():
-        but2 = gr.Button(i18n("Download and Separate"))
-        but2.click(
-            uvr,
-            [
-                input_url,
-                output_path,
-                model_choose,
-                dir_wav_input,
-                opt_vocal_root,
-                wav_inputs,
-                opt_ins_root,
-                agg,
-                format0,
-                model_select,
-            ],
-            [vc_output4, vc_output5, vc_output6],
-        )
-
-    def toggle_advanced_settings(checkbox):
-        return {"visible": checkbox, "__type__": "update"}
-
-    advanced_settings_checkbox.change(
-        fn=toggle_advanced_settings,
-        inputs=[advanced_settings_checkbox],
-        outputs=[advanced_settings],
-    )
-
-
diff --git "a/spaces/Liu-LAB/GPT-academic/crazy_functions/\350\201\224\347\275\221\347\232\204ChatGPT_bing\347\211\210.py" "b/spaces/Liu-LAB/GPT-academic/crazy_functions/\350\201\224\347\275\221\347\232\204ChatGPT_bing\347\211\210.py"
deleted file mode 100644
index db5adb7992f765db3e5b0e7ecea7e71e44dbe855..0000000000000000000000000000000000000000
--- "a/spaces/Liu-LAB/GPT-academic/crazy_functions/\350\201\224\347\275\221\347\232\204ChatGPT_bing\347\211\210.py"
+++ /dev/null
@@ -1,106 +0,0 @@
-from toolbox import CatchException, update_ui
-from .crazy_utils import request_gpt_model_in_new_thread_with_ui_alive, input_clipping
-import requests
-from bs4 import BeautifulSoup
-from request_llm.bridge_all import model_info
-
-
-def bing_search(query, proxies=None):
-    query = query
-    url = f"https://cn.bing.com/search?q={query}"
-    headers = {'User-Agent': 'Mozilla/5.0 (Macintosh; Intel Mac OS X 10_15_7) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/94.0.4606.61 Safari/537.36'}
-    response = requests.get(url, headers=headers, proxies=proxies)
-    soup = BeautifulSoup(response.content, 'html.parser')
-    results = []
-    for g in soup.find_all('li', class_='b_algo'):
-        anchors = g.find_all('a')
-        if anchors:
-            link = anchors[0]['href']
-            if not link.startswith('http'):
-                continue
-            title = g.find('h2').text
-            item = {'title': title, 'link': link}
-            results.append(item)
-
-    for r in results:
-        print(r['link'])
-    return results
-
-
-def scrape_text(url, proxies) -> str:
-    """Scrape text from a webpage
-
-    Args:
-        url (str): The URL to scrape text from
-
-    Returns:
-        str: The scraped text
-    """
-    headers = {
-        'User-Agent': 'Mozilla/5.0 (Macintosh; Intel Mac OS X 10_15_7) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/94.0.4606.61 Safari/537.36',
-        'Content-Type': 'text/plain',
-    }
-    try:
-        response = requests.get(url, headers=headers, proxies=proxies, timeout=8)
-        if response.encoding == "ISO-8859-1": response.encoding = response.apparent_encoding
-    except:
-        return "Unable to connect to this webpage"
-    soup = BeautifulSoup(response.text, "html.parser")
-    for script in soup(["script", "style"]):
-        script.extract()
-    text = soup.get_text()
-    lines = (line.strip() for line in text.splitlines())
-    chunks = (phrase.strip() for line in lines for phrase in line.split(" "))
-    text = "\n".join(chunk for chunk in chunks if chunk)
-    return text
-
-@CatchException
-def 连接bing搜索回答问题(txt, llm_kwargs, plugin_kwargs, chatbot, history, system_prompt, web_port):
-    """
-    txt             text entered by the user in the input box, e.g. a passage to translate or a path containing files to process
-    llm_kwargs      GPT model parameters such as temperature and top_p, usually passed through unchanged
-    plugin_kwargs   plugin parameters, currently unused
-    chatbot         handle of the chat display box, used to show output to the user
-    history         chat history, i.e. the context so far
-    system_prompt   silent system reminder sent to GPT
-    web_port        port the software is currently running on
-    """
-    history = []  # clear the history to avoid input overflow
-    chatbot.append((f"Please answer the following question using information from the internet: {txt}",
-                    "[Local Message] Note: you are calling a [function plugin] template that lets ChatGPT aggregate information from the internet. It is aimed at developers who want to build more interesting features, and it can serve as a template for new plugin functions. If you would like to share new feature modules, PRs are welcome!"))
-    yield from update_ui(chatbot=chatbot, history=history)  # refresh the UI  # the GPT request takes a while, so update the interface promptly first
-
-    # ------------- < Step 1: scrape the search engine results > -------------
-    from toolbox import get_conf
-    proxies, = get_conf('proxies')
-    urls = bing_search(txt, proxies)
-    history = []
-    if len(urls) == 0:
-        chatbot.append((f"Conclusion: {txt}",
-                        "[Local Message] Restricted by Bing; no information could be retrieved from Bing!"))
-        yield from update_ui(chatbot=chatbot, history=history)  # refresh the UI  # the GPT request takes a while, so update the interface promptly first
-        return
-    # ------------- < Step 2: visit the web pages one by one > -------------
-    max_search_result = 8  # maximum number of web-page results to include
-    for index, url in enumerate(urls[:max_search_result]):
-        res = scrape_text(url['link'], proxies)
-        history.extend([f"Search result {index}:", res])
-        chatbot.append([f"Search result {index}:", res[:500]+"......"])
-        yield from update_ui(chatbot=chatbot, history=history)  # refresh the UI  # the GPT request takes a while, so update the interface promptly first
-
-    # ------------- < Step 3: let ChatGPT synthesize > -------------
-    i_say = f"Extract information from the search results above, then answer the question: {txt}"
-    i_say, history = input_clipping(  # clip the input, starting from the longest entries, to avoid exceeding the token limit
-        inputs=i_say,
-        history=history,
-        max_token_limit=model_info[llm_kwargs['llm_model']]['max_token']*3//4
-    )
-    gpt_say = yield from request_gpt_model_in_new_thread_with_ui_alive(
-        inputs=i_say, inputs_show_user=i_say,
-        llm_kwargs=llm_kwargs, chatbot=chatbot, history=history,
-        sys_prompt="Extract information from the given search results, summarize the two most relevant results, and then answer the question."
-    )
-    chatbot[-1] = (i_say, gpt_say)
-    history.append(i_say);history.append(gpt_say)
-    yield from update_ui(chatbot=chatbot, history=history)  # refresh the UI
diff --git a/spaces/Loren/Streamlit_OCR_comparator/configs/textrecog/abinet/abinet_academic.py b/spaces/Loren/Streamlit_OCR_comparator/configs/textrecog/abinet/abinet_academic.py
deleted file mode 100644
index 4abb87a6ee576a6c8a299d30baf4fee2ae56a1bf..0000000000000000000000000000000000000000
--- a/spaces/Loren/Streamlit_OCR_comparator/configs/textrecog/abinet/abinet_academic.py
+++ /dev/null
@@ -1,35 +0,0 @@
-_base_ = [
-    '../../_base_/default_runtime.py',
-    '../../_base_/schedules/schedule_adam_step_20e.py',
-    '../../_base_/recog_pipelines/abinet_pipeline.py',
-    '../../_base_/recog_models/abinet.py',
-    # '../../_base_/recog_datasets/ST_MJ_alphanumeric_train.py',
-    '../../_base_/recog_datasets/toy_data.py'
-    # '../../_base_/recog_datasets/academic_test.py'
-]
-
-train_list = {{_base_.train_list}}
-test_list = {{_base_.test_list}}
-
-train_pipeline = {{_base_.train_pipeline}}
-test_pipeline = {{_base_.test_pipeline}}
-
-data = dict(
-    samples_per_gpu=192,
-    workers_per_gpu=8,
-    val_dataloader=dict(samples_per_gpu=1),
-    test_dataloader=dict(samples_per_gpu=1),
-    train=dict(
-        type='UniformConcatDataset',
-        datasets=train_list,
-        pipeline=train_pipeline),
-    val=dict(
-        type='UniformConcatDataset',
-        datasets=test_list,
-        pipeline=test_pipeline),
-    test=dict(
-        type='UniformConcatDataset',
-        datasets=test_list,
-        pipeline=test_pipeline))
-
-evaluation = dict(interval=1, metric='acc')
diff --git a/spaces/LucasCodeBreak/MusicGen/tests/modules/test_rope.py b/spaces/LucasCodeBreak/MusicGen/tests/modules/test_rope.py
deleted file mode 100644
index 067c6f067acbf27fb0fef5c2b812c22474c4fcd0..0000000000000000000000000000000000000000
--- a/spaces/LucasCodeBreak/MusicGen/tests/modules/test_rope.py
+++ /dev/null
@@ -1,168 +0,0 @@
-# Copyright (c) Meta Platforms, Inc. and affiliates.
-# All rights reserved.
-# -# This source code is licensed under the license found in the -# LICENSE file in the root directory of this source tree. - -import torch - -from audiocraft.modules.rope import RotaryEmbedding -from audiocraft.modules.transformer import StreamingTransformer, set_efficient_attention_backend - - -def test_rope(): - set_efficient_attention_backend('xformers') - B, T, H, C = 8, 75, 16, 128 - - rope = RotaryEmbedding(dim=C) - xq = torch.rand((B, T, H, C)) - xk = torch.rand((B, T, H, C)) - xq_out, xk_out = rope.rotate_qk(xq, xk, start=7) - - assert list(xq_out.shape) == [B, T, H, C] - assert list(xk_out.shape) == [B, T, H, C] - - -def test_rope_io_dtypes(): - set_efficient_attention_backend('xformers') - B, T, H, C = 8, 75, 16, 128 - - rope_32 = RotaryEmbedding(dim=C, dtype=torch.float32) - rope_64 = RotaryEmbedding(dim=C, dtype=torch.float64) - - # Test bfloat16 inputs w/ both 32 and 64 precision rope. - xq_16 = torch.rand((B, T, H, C)).to(torch.bfloat16) - xk_16 = torch.rand((B, T, H, C)).to(torch.bfloat16) - xq_out, xk_out = rope_32.rotate_qk(xq_16, xk_16) - assert xq_out.dtype == torch.bfloat16 - xq_out, xk_out = rope_64.rotate_qk(xq_16, xk_16) - assert xq_out.dtype == torch.bfloat16 - - # Test float32 inputs w/ both 32 and 64 precision rope. - xq_32 = torch.rand((B, T, H, C)).to(torch.float32) - xk_32 = torch.rand((B, T, H, C)).to(torch.float32) - xq_out, xk_out = rope_32.rotate_qk(xq_32, xk_32) - assert xq_out.dtype == torch.float32 - xq_out, xk_out = rope_64.rotate_qk(xq_32, xk_32) - assert xq_out.dtype == torch.float32 - - -def test_transformer_with_rope(): - set_efficient_attention_backend('xformers') - torch.manual_seed(1234) - for pos in ['rope', 'sin_rope']: - tr = StreamingTransformer( - 16, 4, 2, custom=True, dropout=0., layer_scale=0.1, - positional_embedding=pos) - tr.eval() - steps = 12 - x = torch.randn(3, steps, 16) - - out = tr(x) - assert list(out.shape) == list(x.shape) - - -@torch.no_grad() -def test_rope_streaming(): - set_efficient_attention_backend('xformers') - torch.manual_seed(1234) - tr = StreamingTransformer( - 16, 4, 2, causal=True, dropout=0., - custom=True, positional_embedding='rope') - tr.eval() - steps = 12 - x = torch.randn(3, steps, 16) - - ref = tr(x) - - with tr.streaming(): - outs = [] - frame_sizes = [1] * steps - - for frame_size in frame_sizes: - frame = x[:, :frame_size] - x = x[:, frame_size:] - outs.append(tr(frame)) - - out = torch.cat(outs, dim=1) - assert list(out.shape) == [3, steps, 16] - delta = torch.norm(out - ref) / torch.norm(out) - assert delta < 1e-6, delta - - -@torch.no_grad() -def test_rope_streaming_past_context(): - set_efficient_attention_backend('xformers') - torch.manual_seed(1234) - - for context in [None, 10]: - tr = StreamingTransformer( - 16, 4, 1 if context else 2, - causal=True, past_context=context, custom=True, - dropout=0., positional_embedding='rope') - tr.eval() - - steps = 20 - x = torch.randn(3, steps, 16) - ref = tr(x) - - with tr.streaming(): - outs = [] - frame_sizes = [1] * steps - - for frame_size in frame_sizes: - frame = x[:, :frame_size] - x = x[:, frame_size:] - outs.append(tr(frame)) - - out = torch.cat(outs, dim=1) - assert list(out.shape) == [3, steps, 16] - delta = torch.norm(out - ref) / torch.norm(out) - assert delta < 1e-6, delta - - -def test_rope_memory_efficient(): - set_efficient_attention_backend('xformers') - torch.manual_seed(1234) - tr = StreamingTransformer( - 16, 4, 2, custom=True, dropout=0., layer_scale=0.1, - positional_embedding='rope') - tr_mem_efficient = StreamingTransformer( - 
16, 4, 2, dropout=0., memory_efficient=True, layer_scale=0.1,
-        positional_embedding='rope')
-    tr_mem_efficient.load_state_dict(tr.state_dict())
-    tr.eval()
-    steps = 12
-    x = torch.randn(3, steps, 16)
-
-    with torch.no_grad():
-        y = tr(x)
-        y2 = tr_mem_efficient(x)
-        # Check at float precision b/c this is the rope default.
-        assert torch.allclose(y, y2, atol=1e-7), (y - y2).norm()
-
-
-def test_rope_with_xpos():
-    set_efficient_attention_backend('xformers')
-    B, T, H, C = 8, 75, 16, 128
-
-    rope = RotaryEmbedding(dim=C, xpos=True)
-    xq = torch.rand((B, T, H, C))
-    xk = torch.rand((B, T, H, C))
-    xq_out, xk_out = rope.rotate_qk(xq, xk, start=7)
-
-    assert list(xq_out.shape) == [B, T, H, C]
-    assert list(xk_out.shape) == [B, T, H, C]
-
-
-def test_positional_scale():
-    set_efficient_attention_backend('xformers')
-    B, T, H, C = 8, 75, 16, 128
-
-    rope = RotaryEmbedding(dim=C, xpos=True, scale=0.0)
-    xq = torch.rand((B, T, H, C))
-    xk = torch.rand((B, T, H, C))
-    xq_out, xk_out = rope.rotate_qk(xq, xk, start=7)
-
-    assert torch.allclose(xq, xq_out)
-    assert torch.allclose(xk, xk_out)
diff --git a/spaces/Mahiruoshi/BangDream-Bert-VITS2/text/chinese_bert.py b/spaces/Mahiruoshi/BangDream-Bert-VITS2/text/chinese_bert.py
deleted file mode 100644
index 8159425df4bf7e577008b22f44e84f3147fdce14..0000000000000000000000000000000000000000
--- a/spaces/Mahiruoshi/BangDream-Bert-VITS2/text/chinese_bert.py
+++ /dev/null
@@ -1,100 +0,0 @@
-import torch
-import sys
-from transformers import AutoTokenizer, AutoModelForMaskedLM
-
-tokenizer = AutoTokenizer.from_pretrained("./bert/chinese-roberta-wwm-ext-large")
-
-models = dict()
-
-
-def get_bert_feature(text, word2ph, device=None):
-    if (
-        sys.platform == "darwin"
-        and torch.backends.mps.is_available()
-        and device == "cpu"
-    ):
-        device = "mps"
-    if not device:
-        device = "cuda"
-    if device not in models.keys():
-        models[device] = AutoModelForMaskedLM.from_pretrained(
-            "./bert/chinese-roberta-wwm-ext-large"
-        ).to(device)
-    with torch.no_grad():
-        inputs = tokenizer(text, return_tensors="pt")
-        for i in inputs:
-            inputs[i] = inputs[i].to(device)
-        res = models[device](**inputs, output_hidden_states=True)
-        res = torch.cat(res["hidden_states"][-3:-2], -1)[0].cpu()
-
-    assert len(word2ph) == len(text) + 2
-    word2phone = word2ph
-    phone_level_feature = []
-    for i in range(len(word2phone)):
-        repeat_feature = res[i].repeat(word2phone[i], 1)
-        phone_level_feature.append(repeat_feature)
-
-    phone_level_feature = torch.cat(phone_level_feature, dim=0)
-
-    return phone_level_feature.T
-
-
-if __name__ == "__main__":
-    import torch
-
-    word_level_feature = torch.rand(38, 1024)  # 38 words, each with a 1024-dim feature
-    word2phone = [
-        1,
-        2,
-        1,
-        2,
-        2,
-        1,
-        2,
-        2,
-        1,
-        2,
-        2,
-        1,
-        2,
-        2,
-        2,
-        2,
-        2,
-        1,
-        1,
-        2,
-        2,
-        1,
-        2,
-        2,
-        2,
-        2,
-        1,
-        2,
-        2,
-        2,
-        2,
-        2,
-        1,
-        2,
-        2,
-        2,
-        2,
-        1,
-    ]
-
-    # compute the total number of frames
-    total_frames = sum(word2phone)
-    print(word_level_feature.shape)
-    print(word2phone)
-    phone_level_feature = []
-    for i in range(len(word2phone)):
-        print(word_level_feature[i].shape)
-
-        # repeat each word's feature word2phone[i] times
-        repeat_feature = word_level_feature[i].repeat(word2phone[i], 1)
-        phone_level_feature.append(repeat_feature)
-
-    phone_level_feature = torch.cat(phone_level_feature, dim=0)
-    print(phone_level_feature.shape)  # torch.Size([65, 1024])
diff --git a/spaces/Mahiruoshi/MyGO_VIts-bert/train_ms.py b/spaces/Mahiruoshi/MyGO_VIts-bert/train_ms.py
deleted file mode 100644
index d17da759ac5f25e865f69458280aa28db3b56e1d..0000000000000000000000000000000000000000
--- a/spaces/Mahiruoshi/MyGO_VIts-bert/train_ms.py
+++ /dev/null
@@ -1,598 +0,0 @@
-# flake8: noqa: E402
-
-import os
-import torch
-from torch.nn import functional as F
-from torch.utils.data import DataLoader
-from torch.utils.tensorboard import SummaryWriter
-import torch.distributed as dist
-from torch.nn.parallel import DistributedDataParallel as DDP
-from torch.cuda.amp import autocast, GradScaler
-from tqdm import tqdm
-import logging
-
-logging.getLogger("numba").setLevel(logging.WARNING)
-import commons
-import utils
-from data_utils import (
-    TextAudioSpeakerLoader,
-    TextAudioSpeakerCollate,
-    DistributedBucketSampler,
-)
-from models import (
-    SynthesizerTrn,
-    MultiPeriodDiscriminator,
-    DurationDiscriminator,
-)
-from losses import generator_loss, discriminator_loss, feature_loss, kl_loss
-from mel_processing import mel_spectrogram_torch, spec_to_mel_torch
-from text.symbols import symbols
-
-torch.backends.cuda.matmul.allow_tf32 = True
-torch.backends.cudnn.allow_tf32 = (
-    True  # If you encounter training problems, try disabling TF32.
-)
-torch.set_float32_matmul_precision("medium")
-torch.backends.cudnn.benchmark = True
-torch.backends.cuda.sdp_kernel("flash")
-torch.backends.cuda.enable_flash_sdp(True)
-torch.backends.cuda.enable_mem_efficient_sdp(
-    True
-)  # Not available if torch version is lower than 2.0
-torch.backends.cuda.enable_math_sdp(True)
-global_step = 0
-
-import os
-
-os.environ['MASTER_ADDR'] = '127.0.0.1'
-os.environ['MASTER_PORT'] = '8880'
-os.environ['WORLD_SIZE'] = '1'
-os.environ['RANK'] = '0'
-
-def run():
-    dist.init_process_group(
-        backend="gloo",
-        init_method="env://",  # Due to training problems, we propose using gloo instead of nccl.
-    )  # Use torchrun instead of mp.spawn
-    rank = dist.get_rank()
-    n_gpus = dist.get_world_size()
-    hps = utils.get_hparams()
-    torch.manual_seed(hps.train.seed)
-    torch.cuda.set_device(rank)
-    global global_step
-    if rank == 0:
-        logger = utils.get_logger(hps.model_dir)
-        logger.info(hps)
-        utils.check_git_hash(hps.model_dir)
-        writer = SummaryWriter(log_dir=hps.model_dir)
-        writer_eval = SummaryWriter(log_dir=os.path.join(hps.model_dir, "eval"))
-    train_dataset = TextAudioSpeakerLoader(hps.data.training_files, hps.data)
-    train_sampler = DistributedBucketSampler(
-        train_dataset,
-        hps.train.batch_size,
-        [32, 300, 400, 500, 600, 700, 800, 900, 1000],
-        num_replicas=n_gpus,
-        rank=rank,
-        shuffle=True,
-    )
-    collate_fn = TextAudioSpeakerCollate()
-    train_loader = DataLoader(
-        train_dataset,
-        num_workers=16,
-        shuffle=False,
-        pin_memory=True,
-        collate_fn=collate_fn,
-        batch_sampler=train_sampler,
-        persistent_workers=True,
-        prefetch_factor=4,
-    )  # DataLoader config could be adjusted.
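-    # (persistent_workers keeps worker processes alive between epochs and
-    # prefetch_factor controls how many batches each worker buffers ahead.)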
- if rank == 0: - eval_dataset = TextAudioSpeakerLoader(hps.data.validation_files, hps.data) - eval_loader = DataLoader( - eval_dataset, - num_workers=0, - shuffle=False, - batch_size=1, - pin_memory=True, - drop_last=False, - collate_fn=collate_fn, - ) - if ( - "use_noise_scaled_mas" in hps.model.keys() - and hps.model.use_noise_scaled_mas is True - ): - print("Using noise scaled MAS for VITS2") - mas_noise_scale_initial = 0.01 - noise_scale_delta = 2e-6 - else: - print("Using normal MAS for VITS1") - mas_noise_scale_initial = 0.0 - noise_scale_delta = 0.0 - if ( - "use_duration_discriminator" in hps.model.keys() - and hps.model.use_duration_discriminator is True - ): - print("Using duration discriminator for VITS2") - net_dur_disc = DurationDiscriminator( - hps.model.hidden_channels, - hps.model.hidden_channels, - 3, - 0.1, - gin_channels=hps.model.gin_channels if hps.data.n_speakers != 0 else 0, - ).cuda(rank) - if ( - "use_spk_conditioned_encoder" in hps.model.keys() - and hps.model.use_spk_conditioned_encoder is True - ): - if hps.data.n_speakers == 0: - raise ValueError( - "n_speakers must be > 0 when using spk conditioned encoder to train multi-speaker model" - ) - else: - print("Using normal encoder for VITS1") - - net_g = SynthesizerTrn( - len(symbols), - hps.data.filter_length // 2 + 1, - hps.train.segment_size // hps.data.hop_length, - n_speakers=hps.data.n_speakers, - mas_noise_scale_initial=mas_noise_scale_initial, - noise_scale_delta=noise_scale_delta, - **hps.model, - ).cuda(rank) - - net_d = MultiPeriodDiscriminator(hps.model.use_spectral_norm).cuda(rank) - optim_g = torch.optim.AdamW( - filter(lambda p: p.requires_grad, net_g.parameters()), - hps.train.learning_rate, - betas=hps.train.betas, - eps=hps.train.eps, - ) - optim_d = torch.optim.AdamW( - net_d.parameters(), - hps.train.learning_rate, - betas=hps.train.betas, - eps=hps.train.eps, - ) - if net_dur_disc is not None: - optim_dur_disc = torch.optim.AdamW( - net_dur_disc.parameters(), - hps.train.learning_rate, - betas=hps.train.betas, - eps=hps.train.eps, - ) - else: - optim_dur_disc = None - net_g = DDP(net_g, device_ids=[rank], find_unused_parameters=True) - net_d = DDP(net_d, device_ids=[rank], find_unused_parameters=True) - if net_dur_disc is not None: - net_dur_disc = DDP(net_dur_disc, device_ids=[rank], find_unused_parameters=True) - try: - if net_dur_disc is not None: - _, _, dur_resume_lr, epoch_str = utils.load_checkpoint( - utils.latest_checkpoint_path(hps.model_dir, "DUR_*.pth"), - net_dur_disc, - optim_dur_disc, - skip_optimizer=hps.train.skip_optimizer - if "skip_optimizer" in hps.train - else True, - ) - _, optim_g, g_resume_lr, epoch_str = utils.load_checkpoint( - utils.latest_checkpoint_path(hps.model_dir, "G_*.pth"), - net_g, - optim_g, - skip_optimizer=hps.train.skip_optimizer - if "skip_optimizer" in hps.train - else True, - ) - _, optim_d, d_resume_lr, epoch_str = utils.load_checkpoint( - utils.latest_checkpoint_path(hps.model_dir, "D_*.pth"), - net_d, - optim_d, - skip_optimizer=hps.train.skip_optimizer - if "skip_optimizer" in hps.train - else True, - ) - if not optim_g.param_groups[0].get("initial_lr"): - optim_g.param_groups[0]["initial_lr"] = g_resume_lr - if not optim_d.param_groups[0].get("initial_lr"): - optim_d.param_groups[0]["initial_lr"] = d_resume_lr - - epoch_str = max(epoch_str, 1) - global_step = (epoch_str - 1) * len(train_loader) - except Exception as e: - print(e) - epoch_str = 1 - global_step = 0 - - scheduler_g = torch.optim.lr_scheduler.ExponentialLR( - optim_g, 
gamma=hps.train.lr_decay, last_epoch=epoch_str - 2 - ) - scheduler_d = torch.optim.lr_scheduler.ExponentialLR( - optim_d, gamma=hps.train.lr_decay, last_epoch=epoch_str - 2 - ) - if net_dur_disc is not None: - if not optim_dur_disc.param_groups[0].get("initial_lr"): - optim_dur_disc.param_groups[0]["initial_lr"] = dur_resume_lr - scheduler_dur_disc = torch.optim.lr_scheduler.ExponentialLR( - optim_dur_disc, gamma=hps.train.lr_decay, last_epoch=epoch_str - 2 - ) - else: - scheduler_dur_disc = None - scaler = GradScaler(enabled=hps.train.fp16_run) - - for epoch in range(epoch_str, hps.train.epochs + 1): - if rank == 0: - train_and_evaluate( - rank, - epoch, - hps, - [net_g, net_d, net_dur_disc], - [optim_g, optim_d, optim_dur_disc], - [scheduler_g, scheduler_d, scheduler_dur_disc], - scaler, - [train_loader, eval_loader], - logger, - [writer, writer_eval], - ) - else: - train_and_evaluate( - rank, - epoch, - hps, - [net_g, net_d, net_dur_disc], - [optim_g, optim_d, optim_dur_disc], - [scheduler_g, scheduler_d, scheduler_dur_disc], - scaler, - [train_loader, None], - None, - None, - ) - scheduler_g.step() - scheduler_d.step() - if net_dur_disc is not None: - scheduler_dur_disc.step() - - -def train_and_evaluate( - rank, epoch, hps, nets, optims, schedulers, scaler, loaders, logger, writers -): - net_g, net_d, net_dur_disc = nets - optim_g, optim_d, optim_dur_disc = optims - scheduler_g, scheduler_d, scheduler_dur_disc = schedulers - train_loader, eval_loader = loaders - if writers is not None: - writer, writer_eval = writers - - train_loader.batch_sampler.set_epoch(epoch) - global global_step - - net_g.train() - net_d.train() - if net_dur_disc is not None: - net_dur_disc.train() - for batch_idx, ( - x, - x_lengths, - spec, - spec_lengths, - y, - y_lengths, - speakers, - tone, - language, - bert, - ja_bert, - ) in tqdm(enumerate(train_loader)): - if net_g.module.use_noise_scaled_mas: - current_mas_noise_scale = ( - net_g.module.mas_noise_scale_initial - - net_g.module.noise_scale_delta * global_step - ) - net_g.module.current_mas_noise_scale = max(current_mas_noise_scale, 0.0) - x, x_lengths = x.cuda(rank, non_blocking=True), x_lengths.cuda( - rank, non_blocking=True - ) - spec, spec_lengths = spec.cuda(rank, non_blocking=True), spec_lengths.cuda( - rank, non_blocking=True - ) - y, y_lengths = y.cuda(rank, non_blocking=True), y_lengths.cuda( - rank, non_blocking=True - ) - speakers = speakers.cuda(rank, non_blocking=True) - tone = tone.cuda(rank, non_blocking=True) - language = language.cuda(rank, non_blocking=True) - bert = bert.cuda(rank, non_blocking=True) - ja_bert = ja_bert.cuda(rank, non_blocking=True) - - with autocast(enabled=hps.train.fp16_run): - ( - y_hat, - l_length, - attn, - ids_slice, - x_mask, - z_mask, - (z, z_p, m_p, logs_p, m_q, logs_q), - (hidden_x, logw, logw_), - ) = net_g( - x, - x_lengths, - spec, - spec_lengths, - speakers, - tone, - language, - bert, - ja_bert, - ) - mel = spec_to_mel_torch( - spec, - hps.data.filter_length, - hps.data.n_mel_channels, - hps.data.sampling_rate, - hps.data.mel_fmin, - hps.data.mel_fmax, - ) - y_mel = commons.slice_segments( - mel, ids_slice, hps.train.segment_size // hps.data.hop_length - ) - y_hat_mel = mel_spectrogram_torch( - y_hat.squeeze(1), - hps.data.filter_length, - hps.data.n_mel_channels, - hps.data.sampling_rate, - hps.data.hop_length, - hps.data.win_length, - hps.data.mel_fmin, - hps.data.mel_fmax, - ) - - y = commons.slice_segments( - y, ids_slice * hps.data.hop_length, hps.train.segment_size - ) # slice - - # Discriminator 
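-            # Score the real slice y against the detached generator output so
-            # that only the discriminator's weights receive gradients here.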
- y_d_hat_r, y_d_hat_g, _, _ = net_d(y, y_hat.detach()) - with autocast(enabled=False): - loss_disc, losses_disc_r, losses_disc_g = discriminator_loss( - y_d_hat_r, y_d_hat_g - ) - loss_disc_all = loss_disc - if net_dur_disc is not None: - y_dur_hat_r, y_dur_hat_g = net_dur_disc( - hidden_x.detach(), x_mask.detach(), logw.detach(), logw_.detach() - ) - with autocast(enabled=False): - # TODO: I think need to mean using the mask, but for now, just mean all - ( - loss_dur_disc, - losses_dur_disc_r, - losses_dur_disc_g, - ) = discriminator_loss(y_dur_hat_r, y_dur_hat_g) - loss_dur_disc_all = loss_dur_disc - optim_dur_disc.zero_grad() - scaler.scale(loss_dur_disc_all).backward() - scaler.unscale_(optim_dur_disc) - commons.clip_grad_value_(net_dur_disc.parameters(), None) - scaler.step(optim_dur_disc) - - optim_d.zero_grad() - scaler.scale(loss_disc_all).backward() - scaler.unscale_(optim_d) - grad_norm_d = commons.clip_grad_value_(net_d.parameters(), None) - scaler.step(optim_d) - - with autocast(enabled=hps.train.fp16_run): - # Generator - y_d_hat_r, y_d_hat_g, fmap_r, fmap_g = net_d(y, y_hat) - if net_dur_disc is not None: - y_dur_hat_r, y_dur_hat_g = net_dur_disc(hidden_x, x_mask, logw, logw_) - with autocast(enabled=False): - loss_dur = torch.sum(l_length.float()) - loss_mel = F.l1_loss(y_mel, y_hat_mel) * hps.train.c_mel - loss_kl = kl_loss(z_p, logs_q, m_p, logs_p, z_mask) * hps.train.c_kl - - loss_fm = feature_loss(fmap_r, fmap_g) - loss_gen, losses_gen = generator_loss(y_d_hat_g) - loss_gen_all = loss_gen + loss_fm + loss_mel + loss_dur + loss_kl - if net_dur_disc is not None: - loss_dur_gen, losses_dur_gen = generator_loss(y_dur_hat_g) - loss_gen_all += loss_dur_gen - optim_g.zero_grad() - scaler.scale(loss_gen_all).backward() - scaler.unscale_(optim_g) - grad_norm_g = commons.clip_grad_value_(net_g.parameters(), None) - scaler.step(optim_g) - scaler.update() - - if rank == 0: - if global_step % hps.train.log_interval == 0: - lr = optim_g.param_groups[0]["lr"] - losses = [loss_disc, loss_gen, loss_fm, loss_mel, loss_dur, loss_kl] - logger.info( - "Train Epoch: {} [{:.0f}%]".format( - epoch, 100.0 * batch_idx / len(train_loader) - ) - ) - logger.info([x.item() for x in losses] + [global_step, lr]) - - scalar_dict = { - "loss/g/total": loss_gen_all, - "loss/d/total": loss_disc_all, - "learning_rate": lr, - "grad_norm_d": grad_norm_d, - "grad_norm_g": grad_norm_g, - } - scalar_dict.update( - { - "loss/g/fm": loss_fm, - "loss/g/mel": loss_mel, - "loss/g/dur": loss_dur, - "loss/g/kl": loss_kl, - } - ) - scalar_dict.update( - {"loss/g/{}".format(i): v for i, v in enumerate(losses_gen)} - ) - scalar_dict.update( - {"loss/d_r/{}".format(i): v for i, v in enumerate(losses_disc_r)} - ) - scalar_dict.update( - {"loss/d_g/{}".format(i): v for i, v in enumerate(losses_disc_g)} - ) - - image_dict = { - "slice/mel_org": utils.plot_spectrogram_to_numpy( - y_mel[0].data.cpu().numpy() - ), - "slice/mel_gen": utils.plot_spectrogram_to_numpy( - y_hat_mel[0].data.cpu().numpy() - ), - "all/mel": utils.plot_spectrogram_to_numpy( - mel[0].data.cpu().numpy() - ), - "all/attn": utils.plot_alignment_to_numpy( - attn[0, 0].data.cpu().numpy() - ), - } - utils.summarize( - writer=writer, - global_step=global_step, - images=image_dict, - scalars=scalar_dict, - ) - - if global_step % hps.train.eval_interval == 0: - evaluate(hps, net_g, eval_loader, writer_eval) - utils.save_checkpoint( - net_g, - optim_g, - hps.train.learning_rate, - epoch, - os.path.join(hps.model_dir, "G_{}.pth".format(global_step)), - ) - 
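-                # The discriminator (and, if enabled, the duration
-                # discriminator) is saved alongside the generator so that
-                # training can resume from a consistent state.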
utils.save_checkpoint( - net_d, - optim_d, - hps.train.learning_rate, - epoch, - os.path.join(hps.model_dir, "D_{}.pth".format(global_step)), - ) - if net_dur_disc is not None: - utils.save_checkpoint( - net_dur_disc, - optim_dur_disc, - hps.train.learning_rate, - epoch, - os.path.join(hps.model_dir, "DUR_{}.pth".format(global_step)), - ) - keep_ckpts = getattr(hps.train, "keep_ckpts", 5) - if keep_ckpts > 0: - utils.clean_checkpoints( - path_to_models=hps.model_dir, - n_ckpts_to_keep=keep_ckpts, - sort_by_time=True, - ) - - global_step += 1 - - if rank == 0: - logger.info("====> Epoch: {}".format(epoch)) - - -def evaluate(hps, generator, eval_loader, writer_eval): - generator.eval() - image_dict = {} - audio_dict = {} - print("Evaluating ...") - with torch.no_grad(): - for batch_idx, ( - x, - x_lengths, - spec, - spec_lengths, - y, - y_lengths, - speakers, - tone, - language, - bert, - ja_bert, - ) in enumerate(eval_loader): - x, x_lengths = x.cuda(), x_lengths.cuda() - spec, spec_lengths = spec.cuda(), spec_lengths.cuda() - y, y_lengths = y.cuda(), y_lengths.cuda() - speakers = speakers.cuda() - bert = bert.cuda() - ja_bert = ja_bert.cuda() - tone = tone.cuda() - language = language.cuda() - for use_sdp in [True, False]: - y_hat, attn, mask, *_ = generator.module.infer( - x, - x_lengths, - speakers, - tone, - language, - bert, - ja_bert, - y=spec, - max_len=1000, - sdp_ratio=0.0 if not use_sdp else 1.0, - ) - y_hat_lengths = mask.sum([1, 2]).long() * hps.data.hop_length - - mel = spec_to_mel_torch( - spec, - hps.data.filter_length, - hps.data.n_mel_channels, - hps.data.sampling_rate, - hps.data.mel_fmin, - hps.data.mel_fmax, - ) - y_hat_mel = mel_spectrogram_torch( - y_hat.squeeze(1).float(), - hps.data.filter_length, - hps.data.n_mel_channels, - hps.data.sampling_rate, - hps.data.hop_length, - hps.data.win_length, - hps.data.mel_fmin, - hps.data.mel_fmax, - ) - image_dict.update( - { - f"gen/mel_{batch_idx}": utils.plot_spectrogram_to_numpy( - y_hat_mel[0].cpu().numpy() - ) - } - ) - audio_dict.update( - { - f"gen/audio_{batch_idx}_{use_sdp}": y_hat[ - 0, :, : y_hat_lengths[0] - ] - } - ) - image_dict.update( - { - f"gt/mel_{batch_idx}": utils.plot_spectrogram_to_numpy( - mel[0].cpu().numpy() - ) - } - ) - audio_dict.update({f"gt/audio_{batch_idx}": y[0, :, : y_lengths[0]]}) - - utils.summarize( - writer=writer_eval, - global_step=global_step, - images=image_dict, - audios=audio_dict, - audio_sampling_rate=hps.data.sampling_rate, - ) - generator.train() - - -if __name__ == "__main__": - run() diff --git a/spaces/MatrixYao/how_many_data_points_zh/README.md b/spaces/MatrixYao/how_many_data_points_zh/README.md deleted file mode 100644 index 0ff4eaa2aacdff74cf7b4367ca8d6d3f99752af0..0000000000000000000000000000000000000000 --- a/spaces/MatrixYao/how_many_data_points_zh/README.md +++ /dev/null @@ -1,12 +0,0 @@ ---- -title: How Many Data Points -emoji: 🦀 -colorFrom: red -colorTo: yellow -sdk: docker -pinned: false -app_port: 5006 -duplicated_from: teven-projects/how_many_data_points ---- - -Check out the configuration reference at https://huggingface.co/docs/hub/spaces-config-reference diff --git a/spaces/Matthijs/mms-tts-demo/uroman/bin/uroman.pl b/spaces/Matthijs/mms-tts-demo/uroman/bin/uroman.pl deleted file mode 100644 index f1182aee6e5c3422882150b5babeec664b689401..0000000000000000000000000000000000000000 --- a/spaces/Matthijs/mms-tts-demo/uroman/bin/uroman.pl +++ /dev/null @@ -1,138 +0,0 @@ -#!/usr/bin/perl -w - -# uroman Nov. 12, 2015 - Apr. 
23, 2021 -$version = "v1.2.8"; -# Author: Ulf Hermjakob - -# Usage: uroman.pl {-l [ara|bel|bul|deu|ell|eng|fas|grc|heb|kaz|kir|lav|lit|mkd|mkd2|oss|pnt|rus|srp|srp2|tur|uig|ukr|yid]} {--chart|--offset-mapping} {--no-cache} {--workset} < STDIN -# Example: cat workset.txt | uroman.pl --offset-mapping --workset - -$|=1; - -use FindBin; -use Cwd "abs_path"; -use File::Basename qw(dirname); -use File::Spec; - -my $bin_dir = abs_path(dirname($0)); -my $root_dir = File::Spec->catfile($bin_dir, File::Spec->updir()); -my $data_dir = File::Spec->catfile($root_dir, "data"); -my $lib_dir = File::Spec->catfile($root_dir, "lib"); - -use lib "$FindBin::Bin/../lib"; -use NLP::Chinese; -use NLP::Romanizer; -use NLP::UTF8; -use NLP::utilities; -use JSON; -$chinesePM = NLP::Chinese; -$romanizer = NLP::Romanizer; -$util = NLP::utilities; -%ht = (); -%pinyin_ht = (); -$lang_code = ""; -$return_chart_p = 0; -$return_offset_mappings_p = 0; -$workset_p = 0; -$cache_rom_tokens_p = 1; - -$script_data_filename = File::Spec->catfile($data_dir, "Scripts.txt"); -$unicode_data_overwrite_filename = File::Spec->catfile($data_dir, "UnicodeDataOverwrite.txt"); -$unicode_data_filename = File::Spec->catfile($data_dir, "UnicodeData.txt"); -$romanization_table_filename = File::Spec->catfile($data_dir, "romanization-table.txt"); -$chinese_tonal_pinyin_filename = File::Spec->catfile($data_dir, "Chinese_to_Pinyin.txt"); - -while (@ARGV) { - $arg = shift @ARGV; - if ($arg =~ /^-+(l|lc|lang-code)$/) { - $lang_code = lc (shift @ARGV || "") - } elsif ($arg =~ /^-+chart$/i) { - $return_chart_p = 1; - } elsif ($arg =~ /^-+workset$/i) { - $workset_p = 1; - } elsif ($arg =~ /^-+offset[-_]*map/i) { - $return_offset_mappings_p = 1; - } elsif ($arg =~ /^-+unicode[-_]?data/i) { - $filename = shift @ARGV; - if (-r $filename) { - $unicode_data_filename = $filename; - } else { - print STDERR "Ignoring invalid UnicodeData filename $filename\n"; - } - } elsif ($arg =~ /^-+(no-tok-cach|no-cach)/i) { - $cache_rom_tokens_p = 0; - } else { - print STDERR "Ignoring unrecognized arg $arg\n"; - } -} - -$romanizer->load_script_data(*ht, $script_data_filename); -$romanizer->load_unicode_data(*ht, $unicode_data_filename); -$romanizer->load_unicode_overwrite_romanization(*ht, $unicode_data_overwrite_filename); -$romanizer->load_romanization_table(*ht, $romanization_table_filename); -$chinese_to_pinyin_not_yet_loaded_p = 1; -$current_date = $util->datetime("dateTtime"); -$lang_code_clause = ($lang_code) ? 
" \"lang-code\":\"$lang_code\",\n" : ""; - -print "{\n \"romanizer\":\"uroman $version (Ulf Hermjakob, USC/ISI)\",\n \"date\":\"$current_date\",\n$lang_code_clause \"romanization\": [\n" if $return_chart_p; -my $line_number = 0; -my $chart_result = ""; -while (<>) { - $line_number++; - my $line = $_; - my $snt_id = ""; - if ($workset_p) { - next if $line =~ /^#/; - if (($i_value, $s_value) = ($line =~ /^(\S+\.\d+)\s(.*)$/)) { - $snt_id = $i_value; - $line = "$s_value\n"; - } else { - next; - } - } - if ($chinese_to_pinyin_not_yet_loaded_p && $chinesePM->string_contains_utf8_cjk_unified_ideograph_p($line)) { - $chinesePM->read_chinese_tonal_pinyin_files(*pinyin_ht, $chinese_tonal_pinyin_filename); - $chinese_to_pinyin_not_yet_loaded_p = 0; - } - if ($return_chart_p) { - print $chart_result; - *chart_ht = $romanizer->romanize($line, $lang_code, "", *ht, *pinyin_ht, 0, "return chart", $line_number); - $chart_result = $romanizer->chart_to_json_romanization_elements(0, $chart_ht{N_CHARS}, *chart_ht, $line_number); - } elsif ($return_offset_mappings_p) { - ($best_romanization, $offset_mappings) = $romanizer->romanize($line, $lang_code, "", *ht, *pinyin_ht, 0, "return offset mappings", $line_number, 0); - print "::snt-id $snt_id\n" if $workset_p; - print "::orig $line"; - print "::rom $best_romanization\n"; - print "::align $offset_mappings\n\n"; - } elsif ($cache_rom_tokens_p) { - print $romanizer->romanize_by_token_with_caching($line, $lang_code, "", *ht, *pinyin_ht, 0, "", $line_number) . "\n"; - } else { - print $romanizer->romanize($line, $lang_code, "", *ht, *pinyin_ht, 0, "", $line_number) . "\n"; - } -} -$chart_result =~ s/,(\s*)$/$1/; -print $chart_result; -print " ]\n}\n" if $return_chart_p; - -$dev_test_p = 0; -if ($dev_test_p) { - $n_suspicious_code_points = 0; - $n_instances = 0; - foreach $char_name (sort { hex($ht{UTF_NAME_TO_UNICODE}->{$a}) <=> hex($ht{UTF_NAME_TO_UNICODE}->{$b}) } - keys %{$ht{SUSPICIOUS_ROMANIZATION}}) { - $unicode_value = $ht{UTF_NAME_TO_UNICODE}->{$char_name}; - $utf8_string = $ht{UTF_NAME_TO_CODE}->{$char_name}; - foreach $romanization (sort keys %{$ht{SUSPICIOUS_ROMANIZATION}->{$char_name}}) { - $count = $ht{SUSPICIOUS_ROMANIZATION}->{$char_name}->{$romanization}; - $s = ($count == 1) ? "" : "s"; - print STDERR "*** Suspiciously lengthy romanization:\n" unless $n_suspicious_code_points; - print STDERR "::s $utf8_string ::t $romanization ::comment $char_name (U+$unicode_value)\n"; - $n_suspicious_code_points++; - $n_instances += $count; - } - } - print STDERR " *** Total of $n_suspicious_code_points suspicious code points ($n_instances instance$s)\n" if $n_suspicious_code_points; -} - -exit 0; - diff --git a/spaces/MaxReimann/Whitebox-Style-Transfer-Editing/demo_config.py b/spaces/MaxReimann/Whitebox-Style-Transfer-Editing/demo_config.py deleted file mode 100644 index f9defdc676c5027ea583ac4a7235acb8abd96351..0000000000000000000000000000000000000000 --- a/spaces/MaxReimann/Whitebox-Style-Transfer-Editing/demo_config.py +++ /dev/null @@ -1,2 +0,0 @@ -HUGGING_FACE=True # if run in hugging face. 
Hugging Face uses an extra server task for optimization
-WORKER_URL="http://94.130.222.54:8080"
diff --git a/spaces/Mellow-ai/PhotoAI_Mellow/annotator/uniformer/mmcv/fileio/io.py b/spaces/Mellow-ai/PhotoAI_Mellow/annotator/uniformer/mmcv/fileio/io.py
deleted file mode 100644
index aaefde58aa3ea5b58f86249ce7e1c40c186eb8dd..0000000000000000000000000000000000000000
--- a/spaces/Mellow-ai/PhotoAI_Mellow/annotator/uniformer/mmcv/fileio/io.py
+++ /dev/null
@@ -1,151 +0,0 @@
-# Copyright (c) OpenMMLab. All rights reserved.
-from io import BytesIO, StringIO
-from pathlib import Path
-
-from ..utils import is_list_of, is_str
-from .file_client import FileClient
-from .handlers import BaseFileHandler, JsonHandler, PickleHandler, YamlHandler
-
-file_handlers = {
-    'json': JsonHandler(),
-    'yaml': YamlHandler(),
-    'yml': YamlHandler(),
-    'pickle': PickleHandler(),
-    'pkl': PickleHandler()
-}
-
-
-def load(file, file_format=None, file_client_args=None, **kwargs):
-    """Load data from json/yaml/pickle files.
-
-    This method provides a unified API for loading data from serialized files.
-
-    Note:
-        In v1.3.16 and later, ``load`` supports loading data from serialized
-        files that can be stored in different backends.
-
-    Args:
-        file (str or :obj:`Path` or file-like object): Filename or a file-like
-            object.
-        file_format (str, optional): If not specified, the file format will be
-            inferred from the file extension, otherwise use the specified one.
-            Currently supported formats include "json", "yaml/yml" and
-            "pickle/pkl".
-        file_client_args (dict, optional): Arguments to instantiate a
-            FileClient. See :class:`mmcv.fileio.FileClient` for details.
-            Default: None.
-
-    Examples:
-        >>> load('/path/of/your/file')  # file is stored on disk
-        >>> load('https://path/of/your/file')  # file is stored on the Internet
-        >>> load('s3://path/of/your/file')  # file is stored in petrel
-
-    Returns:
-        The content from the file.
-    """
-    if isinstance(file, Path):
-        file = str(file)
-    if file_format is None and is_str(file):
-        file_format = file.split('.')[-1]
-    if file_format not in file_handlers:
-        raise TypeError(f'Unsupported format: {file_format}')
-
-    handler = file_handlers[file_format]
-    if is_str(file):
-        file_client = FileClient.infer_client(file_client_args, file)
-        if handler.str_like:
-            with StringIO(file_client.get_text(file)) as f:
-                obj = handler.load_from_fileobj(f, **kwargs)
-        else:
-            with BytesIO(file_client.get(file)) as f:
-                obj = handler.load_from_fileobj(f, **kwargs)
-    elif hasattr(file, 'read'):
-        obj = handler.load_from_fileobj(file, **kwargs)
-    else:
-        raise TypeError('"file" must be a filepath str or a file-object')
-    return obj
-
-
-def dump(obj, file=None, file_format=None, file_client_args=None, **kwargs):
-    """Dump data to json/yaml/pickle strings or files.
-
-    This method provides a unified API for dumping data as strings or to files,
-    and also supports custom arguments for each file format.
-
-    Note:
-        In v1.3.16 and later, ``dump`` supports dumping data as strings or to
-        files which are saved to different backends.
-
-    Args:
-        obj (any): The python object to be dumped.
-        file (str or :obj:`Path` or file-like object, optional): If not
-            specified, then the object is dumped to a str, otherwise to a file
-            specified by the filename or file-like object.
-        file_format (str, optional): Same as :func:`load`.
-        file_client_args (dict, optional): Arguments to instantiate a
-            FileClient. See :class:`mmcv.fileio.FileClient` for details.
-            Default: None.
- - Examples: - >>> dump('hello world', '/path/of/your/file') # disk - >>> dump('hello world', 's3://path/of/your/file') # ceph or petrel - - Returns: - bool: True for success, False otherwise. - """ - if isinstance(file, Path): - file = str(file) - if file_format is None: - if is_str(file): - file_format = file.split('.')[-1] - elif file is None: - raise ValueError( - 'file_format must be specified since file is None') - if file_format not in file_handlers: - raise TypeError(f'Unsupported format: {file_format}') - - handler = file_handlers[file_format] - if file is None: - return handler.dump_to_str(obj, **kwargs) - elif is_str(file): - file_client = FileClient.infer_client(file_client_args, file) - if handler.str_like: - with StringIO() as f: - handler.dump_to_fileobj(obj, f, **kwargs) - file_client.put_text(f.getvalue(), file) - else: - with BytesIO() as f: - handler.dump_to_fileobj(obj, f, **kwargs) - file_client.put(f.getvalue(), file) - elif hasattr(file, 'write'): - handler.dump_to_fileobj(obj, file, **kwargs) - else: - raise TypeError('"file" must be a filename str or a file-object') - - -def _register_handler(handler, file_formats): - """Register a handler for some file extensions. - - Args: - handler (:obj:`BaseFileHandler`): Handler to be registered. - file_formats (str or list[str]): File formats to be handled by this - handler. - """ - if not isinstance(handler, BaseFileHandler): - raise TypeError( - f'handler must be a child of BaseFileHandler, not {type(handler)}') - if isinstance(file_formats, str): - file_formats = [file_formats] - if not is_list_of(file_formats, str): - raise TypeError('file_formats must be a str or a list of str') - for ext in file_formats: - file_handlers[ext] = handler - - -def register_handler(file_formats, **kwargs): - - def wrap(cls): - _register_handler(cls(**kwargs), file_formats) - return cls - - return wrap diff --git a/spaces/Mellow-ai/PhotoAI_Mellow/annotator/uniformer/mmseg/models/utils/weight_init.py b/spaces/Mellow-ai/PhotoAI_Mellow/annotator/uniformer/mmseg/models/utils/weight_init.py deleted file mode 100644 index 38141ba3d61f64ddfc0a31574b4648cbad96d7dd..0000000000000000000000000000000000000000 --- a/spaces/Mellow-ai/PhotoAI_Mellow/annotator/uniformer/mmseg/models/utils/weight_init.py +++ /dev/null @@ -1,62 +0,0 @@ -"""Modified from https://github.com/rwightman/pytorch-image- -models/blob/master/timm/models/layers/drop.py.""" - -import math -import warnings - -import torch - - -def _no_grad_trunc_normal_(tensor, mean, std, a, b): - """Reference: https://people.sc.fsu.edu/~jburkardt/presentations - /truncated_normal.pdf""" - - def norm_cdf(x): - # Computes standard normal cumulative distribution function - return (1. + math.erf(x / math.sqrt(2.))) / 2. - - if (mean < a - 2 * std) or (mean > b + 2 * std): - warnings.warn( - 'mean is more than 2 std from [a, b] in nn.init.trunc_normal_. ' - 'The distribution of values may be incorrect.', - stacklevel=2) - - with torch.no_grad(): - # Values are generated by using a truncated uniform distribution and - # then using the inverse CDF for the normal distribution. - # Get upper and lower cdf values - lower_bound = norm_cdf((a - mean) / std) - upper_bound = norm_cdf((b - mean) / std) - - # Uniformly fill tensor with values from [l, u], then translate to - # [2l-1, 2u-1]. 
- tensor.uniform_(2 * lower_bound - 1, 2 * upper_bound - 1) - - # Use inverse cdf transform for normal distribution to get truncated - # standard normal - tensor.erfinv_() - - # Transform to proper mean, std - tensor.mul_(std * math.sqrt(2.)) - tensor.add_(mean) - - # Clamp to ensure it's in the proper range - tensor.clamp_(min=a, max=b) - return tensor - - -def trunc_normal_(tensor, mean=0., std=1., a=-2., b=2.): - r"""Fills the input Tensor with values drawn from a truncated - normal distribution. The values are effectively drawn from the - normal distribution :math:`\mathcal{N}(\text{mean}, \text{std}^2)` - with values outside :math:`[a, b]` redrawn until they are within - the bounds. The method used for generating the random values works - best when :math:`a \leq \text{mean} \leq b`. - Args: - tensor (``torch.Tensor``): an n-dimensional `torch.Tensor` - mean (float): the mean of the normal distribution - std (float): the standard deviation of the normal distribution - a (float): the minimum cutoff value - b (float): the maximum cutoff value - """ - return _no_grad_trunc_normal_(tensor, mean, std, a, b) diff --git a/spaces/MercuryLeafer/img-to-music/share_btn.py b/spaces/MercuryLeafer/img-to-music/share_btn.py deleted file mode 100644 index 351a8f6252414dc48fd9972867f875a002731c19..0000000000000000000000000000000000000000 --- a/spaces/MercuryLeafer/img-to-music/share_btn.py +++ /dev/null @@ -1,104 +0,0 @@ -community_icon_html = """""" - -loading_icon_html = """""" - -share_js = """async () => { - async function uploadFile(file){ - const UPLOAD_URL = 'https://huggingface.co/uploads'; - const response = await fetch(UPLOAD_URL, { - method: 'POST', - headers: { - 'Content-Type': file.type, - 'X-Requested-With': 'XMLHttpRequest', - }, - body: file, /// <- File inherits from Blob - }); - const url = await response.text(); - return url; - } - async function getInputImgFile(imgEl){ - const res = await fetch(imgEl.src); - const blob = await res.blob(); - const imgId = Date.now() % 200; - const isPng = imgEl.src.startsWith(`data:image/png`); - if(isPng){ - const fileName = `sd-perception-${{imgId}}.png`; - return new File([blob], fileName, { type: 'image/png' }); - }else{ - const fileName = `sd-perception-${{imgId}}.jpg`; - return new File([blob], fileName, { type: 'image/jpeg' }); - } - } - async function getOutputMusicFile(audioEL){ - const res = await fetch(audioEL.src); - const blob = await res.blob(); - const audioId = Date.now() % 200; - const fileName = `img-to-music-${{audioId}}.wav`; - const musicBlob = new File([blob], fileName, { type: 'audio/wav' }); - console.log(musicBlob); - return musicBlob; - } - - async function audioToBase64(audioFile) { - return new Promise((resolve, reject) => { - let reader = new FileReader(); - reader.readAsDataURL(audioFile); - reader.onload = () => resolve(reader.result); - reader.onerror = error => reject(error); - - }); - } - const gradioEl = document.querySelector('body > gradio-app'); - // const gradioEl = document.querySelector("gradio-app").shadowRoot; - const inputImgEl = gradioEl.querySelector('#input-img img'); - const prompts = gradioEl.querySelector('#prompts_out textarea').value; - const outputMusic = gradioEl.querySelector('#music-output audio'); - const outputMusic_src = gradioEl.querySelector('#music-output audio').src; - const outputMusic_name = outputMusic_src.split('/').pop(); - let titleTxt = outputMusic_name; - //if(titleTxt.length > 100){ - // titleTxt = titleTxt.slice(0, 100) + ' ...'; - //} - const shareBtnEl = 
gradioEl.querySelector('#share-btn'); - const shareIconEl = gradioEl.querySelector('#share-btn-share-icon'); - const loadingIconEl = gradioEl.querySelector('#share-btn-loading-icon'); - if(!outputMusic){ - return; - }; - shareBtnEl.style.pointerEvents = 'none'; - shareIconEl.style.display = 'none'; - loadingIconEl.style.removeProperty('display'); - const inputFile = await getInputImgFile(inputImgEl); - const urlInputImg = await uploadFile(inputFile); - const musicFile = await getOutputMusicFile(outputMusic); - const dataOutputMusic = await uploadFile(musicFile); - - const descriptionMd = `#### Input img: - - -#### Prompts out: -${prompts} - -#### Music: - - -`; - const params = new URLSearchParams({ - title: titleTxt, - description: descriptionMd, - }); - const paramsStr = params.toString(); - window.open(`https://huggingface.co/spaces/fffiloni/img-to-music/discussions/new?${paramsStr}`, '_blank'); - shareBtnEl.style.removeProperty('pointer-events'); - shareIconEl.style.removeProperty('display'); - loadingIconEl.style.display = 'none'; -}""" \ No newline at end of file diff --git a/spaces/Miuzarte/SUI-svc-4.0/README.md b/spaces/Miuzarte/SUI-svc-4.0/README.md deleted file mode 100644 index 3f28cf165ca4552bfe2d787e14b47d3bc52673f1..0000000000000000000000000000000000000000 --- a/spaces/Miuzarte/SUI-svc-4.0/README.md +++ /dev/null @@ -1,13 +0,0 @@ ---- -title: AI岁己(歌声变声器)第二代 -emoji: 🕊 -colorFrom: red -colorTo: pink -sdk: gradio -sdk_version: 3.16.2 -app_file: app.py -pinned: false -license: mit ---- - -Check out the configuration reference at https://huggingface.co/docs/hub/spaces-config-reference diff --git a/spaces/Mountchicken/MAERec-Gradio/mmocr/models/common/layers/transformer_layers.py b/spaces/Mountchicken/MAERec-Gradio/mmocr/models/common/layers/transformer_layers.py deleted file mode 100644 index 8be138d5c5af89b96f27f3646b14a60302659105..0000000000000000000000000000000000000000 --- a/spaces/Mountchicken/MAERec-Gradio/mmocr/models/common/layers/transformer_layers.py +++ /dev/null @@ -1,167 +0,0 @@ -# Copyright (c) OpenMMLab. All rights reserved. -import torch.nn as nn -from mmengine.model import BaseModule - -from mmocr.models.common.modules import (MultiHeadAttention, - PositionwiseFeedForward) - - -class TFEncoderLayer(BaseModule): - """Transformer Encoder Layer. - - Args: - d_model (int): The number of expected features - in the decoder inputs (default=512). - d_inner (int): The dimension of the feedforward - network model (default=256). - n_head (int): The number of heads in the - multiheadattention models (default=8). - d_k (int): Total number of features in key. - d_v (int): Total number of features in value. - dropout (float): Dropout layer on attn_output_weights. - qkv_bias (bool): Add bias in projection layer. Default: False. - act_cfg (dict): Activation cfg for feedforward module. - operation_order (tuple[str]): The execution order of operation - in transformer. Such as ('self_attn', 'norm', 'ffn', 'norm') - or ('norm', 'self_attn', 'norm', 'ffn'). - Default:None. 
- """ - - def __init__(self, - d_model=512, - d_inner=256, - n_head=8, - d_k=64, - d_v=64, - dropout=0.1, - qkv_bias=False, - act_cfg=dict(type='mmengine.GELU'), - operation_order=None): - super().__init__() - self.attn = MultiHeadAttention( - n_head, d_model, d_k, d_v, qkv_bias=qkv_bias, dropout=dropout) - self.norm1 = nn.LayerNorm(d_model) - self.mlp = PositionwiseFeedForward( - d_model, d_inner, dropout=dropout, act_cfg=act_cfg) - self.norm2 = nn.LayerNorm(d_model) - - self.operation_order = operation_order - if self.operation_order is None: - self.operation_order = ('norm', 'self_attn', 'norm', 'ffn') - - assert self.operation_order in [('norm', 'self_attn', 'norm', 'ffn'), - ('self_attn', 'norm', 'ffn', 'norm')] - - def forward(self, x, mask=None): - if self.operation_order == ('self_attn', 'norm', 'ffn', 'norm'): - residual = x - x = residual + self.attn(x, x, x, mask) - x = self.norm1(x) - - residual = x - x = residual + self.mlp(x) - x = self.norm2(x) - elif self.operation_order == ('norm', 'self_attn', 'norm', 'ffn'): - residual = x - x = self.norm1(x) - x = residual + self.attn(x, x, x, mask) - - residual = x - x = self.norm2(x) - x = residual + self.mlp(x) - - return x - - -class TFDecoderLayer(nn.Module): - """Transformer Decoder Layer. - - Args: - d_model (int): The number of expected features - in the decoder inputs (default=512). - d_inner (int): The dimension of the feedforward - network model (default=256). - n_head (int): The number of heads in the - multiheadattention models (default=8). - d_k (int): Total number of features in key. - d_v (int): Total number of features in value. - dropout (float): Dropout layer on attn_output_weights. - qkv_bias (bool): Add bias in projection layer. Default: False. - act_cfg (dict): Activation cfg for feedforward module. - operation_order (tuple[str]): The execution order of operation - in transformer. Such as ('self_attn', 'norm', 'enc_dec_attn', - 'norm', 'ffn', 'norm') or ('norm', 'self_attn', 'norm', - 'enc_dec_attn', 'norm', 'ffn'). - Default:None. 
- """ - - def __init__(self, - d_model=512, - d_inner=256, - n_head=8, - d_k=64, - d_v=64, - dropout=0.1, - qkv_bias=False, - act_cfg=dict(type='mmengine.GELU'), - operation_order=None): - super().__init__() - - self.norm1 = nn.LayerNorm(d_model) - self.norm2 = nn.LayerNorm(d_model) - self.norm3 = nn.LayerNorm(d_model) - - self.self_attn = MultiHeadAttention( - n_head, d_model, d_k, d_v, dropout=dropout, qkv_bias=qkv_bias) - - self.enc_attn = MultiHeadAttention( - n_head, d_model, d_k, d_v, dropout=dropout, qkv_bias=qkv_bias) - - self.mlp = PositionwiseFeedForward( - d_model, d_inner, dropout=dropout, act_cfg=act_cfg) - - self.operation_order = operation_order - if self.operation_order is None: - self.operation_order = ('norm', 'self_attn', 'norm', - 'enc_dec_attn', 'norm', 'ffn') - assert self.operation_order in [ - ('norm', 'self_attn', 'norm', 'enc_dec_attn', 'norm', 'ffn'), - ('self_attn', 'norm', 'enc_dec_attn', 'norm', 'ffn', 'norm') - ] - - def forward(self, - dec_input, - enc_output, - self_attn_mask=None, - dec_enc_attn_mask=None): - if self.operation_order == ('self_attn', 'norm', 'enc_dec_attn', - 'norm', 'ffn', 'norm'): - dec_attn_out = self.self_attn(dec_input, dec_input, dec_input, - self_attn_mask) - dec_attn_out += dec_input - dec_attn_out = self.norm1(dec_attn_out) - - enc_dec_attn_out = self.enc_attn(dec_attn_out, enc_output, - enc_output, dec_enc_attn_mask) - enc_dec_attn_out += dec_attn_out - enc_dec_attn_out = self.norm2(enc_dec_attn_out) - - mlp_out = self.mlp(enc_dec_attn_out) - mlp_out += enc_dec_attn_out - mlp_out = self.norm3(mlp_out) - elif self.operation_order == ('norm', 'self_attn', 'norm', - 'enc_dec_attn', 'norm', 'ffn'): - dec_input_norm = self.norm1(dec_input) - dec_attn_out = self.self_attn(dec_input_norm, dec_input_norm, - dec_input_norm, self_attn_mask) - dec_attn_out += dec_input - - enc_dec_attn_in = self.norm2(dec_attn_out) - enc_dec_attn_out = self.enc_attn(enc_dec_attn_in, enc_output, - enc_output, dec_enc_attn_mask) - enc_dec_attn_out += dec_attn_out - - mlp_out = self.mlp(self.norm3(enc_dec_attn_out)) - mlp_out += enc_dec_attn_out - - return mlp_out diff --git a/spaces/NATSpeech/PortaSpeech/data_gen/tts/binarizer_zh.py b/spaces/NATSpeech/PortaSpeech/data_gen/tts/binarizer_zh.py deleted file mode 100644 index 7e47ae4b56ce0235bd06c02b88f1ddd942122772..0000000000000000000000000000000000000000 --- a/spaces/NATSpeech/PortaSpeech/data_gen/tts/binarizer_zh.py +++ /dev/null @@ -1,25 +0,0 @@ -import numpy as np -from data_gen.tts.base_binarizer import BaseBinarizer - - -class ZhBinarizer(BaseBinarizer): - @staticmethod - def process_align(tg_fn, item): - BaseBinarizer.process_align(tg_fn, item) - # char-level pitch - if 'f0' in item: - ph_list = item['ph'].split(" ") - item['f0_ph'] = np.array([0 for _ in item['f0']], dtype=float) - char_start_idx = 0 - f0s_char = [] - for idx, (f0_, ph_idx) in enumerate(zip(item['f0'], item['mel2ph'])): - is_pinyin = ph_list[ph_idx - 1][0].isalpha() - if not is_pinyin or ph_idx - item['mel2ph'][idx - 1] > 1: - if len(f0s_char) > 0: - item['f0_ph'][char_start_idx:idx] = sum(f0s_char) / len(f0s_char) - f0s_char = [] - char_start_idx = idx - if not is_pinyin: - char_start_idx += 1 - if f0_ > 0: - f0s_char.append(f0_) diff --git a/spaces/NCTCMumbai/NCTC/models/official/nlp/albert/__init__.py b/spaces/NCTCMumbai/NCTC/models/official/nlp/albert/__init__.py deleted file mode 100644 index e69de29bb2d1d6434b8b29ae775ad8c2e48c5391..0000000000000000000000000000000000000000 diff --git 
a/spaces/NCTCMumbai/NCTC/models/official/nlp/bert/model_training_utils.py b/spaces/NCTCMumbai/NCTC/models/official/nlp/bert/model_training_utils.py deleted file mode 100644 index f0fe67615726906a6b1d3ef38a5ca9acfe8502de..0000000000000000000000000000000000000000 --- a/spaces/NCTCMumbai/NCTC/models/official/nlp/bert/model_training_utils.py +++ /dev/null @@ -1,572 +0,0 @@ -# Copyright 2019 The TensorFlow Authors. All Rights Reserved. -# -# Licensed under the Apache License, Version 2.0 (the "License"); -# you may not use this file except in compliance with the License. -# You may obtain a copy of the License at -# -# http://www.apache.org/licenses/LICENSE-2.0 -# -# Unless required by applicable law or agreed to in writing, software -# distributed under the License is distributed on an "AS IS" BASIS, -# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. -# See the License for the specific language governing permissions and -# limitations under the License. -# ============================================================================== -"""Lightweight utilities to train NLP models.""" - -from __future__ import absolute_import -from __future__ import division -from __future__ import print_function - -import json -import os -import tempfile - -from absl import logging -import tensorflow as tf -from tensorflow.python.util import deprecation -from official.staging.training import grad_utils -from official.utils.misc import distribution_utils - -_SUMMARY_TXT = 'training_summary.txt' -_MIN_SUMMARY_STEPS = 10 - - -def _should_export_checkpoint(strategy): - return (not strategy) or strategy.extended.should_checkpoint - - -def _should_export_summary(strategy): - return (not strategy) or strategy.extended.should_save_summary - - -def _save_checkpoint(strategy, checkpoint, model_dir, checkpoint_prefix): - """Saves the model with the provided checkpoint prefix.""" - - if _should_export_checkpoint(strategy): - checkpoint_path = os.path.join(model_dir, checkpoint_prefix) - saved_path = checkpoint.save(checkpoint_path) - logging.info('Saving model as TF checkpoint: %s', saved_path) - else: - # In multi-worker training we need every worker to save checkpoints, because - # variables can trigger synchronization on read and synchronization needs - # all workers to participate. To avoid workers overriding each other, we save - # to a temporary directory on non-chief workers. - tmp_dir = tempfile.mkdtemp() - checkpoint.save(os.path.join(tmp_dir, 'ckpt')) - tf.io.gfile.rmtree(tmp_dir) - return - - -def _get_input_iterator(input_fn, strategy): - """Returns a distributed dataset iterator.""" - # When training with TPU pods, datasets need to be cloned across - # workers. Since a Dataset instance cannot be cloned in eager mode, we instead - # pass a callable that returns a dataset.
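The comment above is why `_get_input_iterator` insists on a callable: with TPU pods, every worker has to build its own input pipeline. A minimal, illustrative sketch of such a dataset factory, separate from the deleted file; the `make_train_input_fn` helper and the synthetic data are hypothetical, and TF 2.x with a `MirroredStrategy` is assumed:

```python
import tensorflow as tf

def make_train_input_fn(global_batch_size):
    """Hypothetical factory producing the closure _get_input_iterator expects."""
    def train_input_fn(input_context):
        # Each worker builds its own pipeline; the InputContext tells it
        # the per-replica share of the global batch.
        batch_size = input_context.get_per_replica_batch_size(global_batch_size)
        features = tf.zeros([128, 8])             # synthetic placeholder data
        labels = tf.zeros([128], dtype=tf.int32)
        ds = tf.data.Dataset.from_tensor_slices((features, labels))
        return ds.shuffle(128).batch(batch_size).repeat()
    return train_input_fn

strategy = tf.distribute.MirroredStrategy()
iterator = iter(strategy.experimental_distribute_datasets_from_function(
    make_train_input_fn(global_batch_size=32)))
```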
- if not callable(input_fn): - raise ValueError('`input_fn` should be a closure that returns a dataset.') - iterator = iter( - strategy.experimental_distribute_datasets_from_function(input_fn)) - return iterator - - -def _float_metric_value(metric): - """Gets the value of a float-value keras metric.""" - return metric.result().numpy().astype(float) - - -def steps_to_run(current_step, steps_per_epoch, steps_per_loop): - """Calculates steps to run on device.""" - if steps_per_loop <= 0: - raise ValueError('steps_per_loop should be positive integer.') - if steps_per_loop == 1: - return steps_per_loop - remainder_in_epoch = current_step % steps_per_epoch - if remainder_in_epoch != 0: - return min(steps_per_epoch - remainder_in_epoch, steps_per_loop) - else: - return steps_per_loop - - -def write_txt_summary(training_summary, summary_dir): - """Writes a summary text file to record stats.""" - if not tf.io.gfile.exists(summary_dir): - tf.io.gfile.mkdir(summary_dir) - summary_path = os.path.join(summary_dir, _SUMMARY_TXT) - with tf.io.gfile.GFile(summary_path, 'wb') as f: - logging.info('Training Summary: \n%s', str(training_summary)) - f.write(json.dumps(training_summary, indent=4)) - - -@deprecation.deprecated( - None, 'This function is deprecated. Please use Keras compile/fit instead.') -def run_customized_training_loop( - # pylint: disable=invalid-name - _sentinel=None, - # pylint: enable=invalid-name - strategy=None, - model_fn=None, - loss_fn=None, - scale_loss=True, - model_dir=None, - train_input_fn=None, - steps_per_epoch=None, - num_eval_per_epoch=1, - steps_per_loop=None, - epochs=1, - eval_input_fn=None, - eval_steps=None, - metric_fn=None, - init_checkpoint=None, - custom_callbacks=None, - run_eagerly=False, - sub_model_export_name=None, - explicit_allreduce=False, - pre_allreduce_callbacks=None, - post_allreduce_callbacks=None, - train_summary_interval=0): - """Run BERT pretrain model training using low-level API. - - Arguments: - _sentinel: Used to prevent positional parameters. Internal, do not use. - strategy: Distribution strategy on which to run low level training loop. - model_fn: Function that returns a tuple (model, sub_model). Caller of this - function should add optimizer to the `model` via calling - `model.compile()` API or manually setting `model.optimizer` attribute. - Second element of the returned tuple(sub_model) is an optional sub model - to be used for initial checkpoint -- if provided. - loss_fn: Function with signature func(labels, logits) and returns a loss - tensor. - scale_loss: Whether to divide the raw loss by number of replicas before - gradients calculation. - model_dir: Model directory used during training for restoring/saving model - weights. - train_input_fn: Function that returns a tf.data.Dataset used for training. - steps_per_epoch: Number of steps to run per epoch. At the end of each - epoch, model checkpoint will be saved and evaluation will be conducted - if evaluation dataset is provided. - num_eval_per_epoch: Number of evaluations per epoch. - steps_per_loop: Number of steps per graph-mode loop. In order to reduce - communication in eager context, training logs are printed every - steps_per_loop. - epochs: Number of epochs to train. - eval_input_fn: Function that returns evaluation dataset. If none, - evaluation is skipped. - eval_steps: Number of steps to run evaluation. Required if `eval_input_fn` - is not none. 
metric_fn: A metrics function that returns a Keras Metric object to record - evaluation results using the evaluation dataset or the training dataset - after every epoch. - init_checkpoint: Optional checkpoint to load to `sub_model` returned by - `model_fn`. - custom_callbacks: A list of Keras Callbacks objects to run during - training. More specifically, `on_train_begin(), on_train_end(), - on_batch_begin()`, `on_batch_end()`, `on_epoch_begin()`, - `on_epoch_end()` methods are invoked during training. - Note that some metrics may be missing from `logs`. - run_eagerly: Whether to run model training in pure eager execution. This - should be disabled for TPUStrategy. - sub_model_export_name: If not None, will export `sub_model` returned by - `model_fn` into checkpoint files. The name of an intermediate checkpoint - file is {sub_model_export_name}_step_{step}.ckpt and the last - checkpoint's name is {sub_model_export_name}.ckpt; if None, `sub_model` - will not be exported as a checkpoint. - explicit_allreduce: Whether to explicitly perform gradient allreduce, - instead of relying on implicit allreduce in optimizer.apply_gradients(). - Default is False. For now, if training using FP16 mixed precision, - explicit allreduce will aggregate gradients in FP16 format. For TPU and - GPU training using FP32, explicit allreduce will aggregate gradients in - FP32 format. - pre_allreduce_callbacks: A list of callback functions that take gradient - and model variable pairs as input, manipulate them, and return new - gradient and model variable pairs. The callback functions will be - invoked in the list order and before gradients are allreduced. With - mixed precision training, the pre_allreduce_callbacks will be applied on - scaled_gradients. Default is no callbacks. Only used when - explicit_allreduce=True. - post_allreduce_callbacks: A list of callback functions that take - gradient and model variable pairs as input, manipulate them, and - return new gradient and model variable pairs. The callback - functions will be invoked in the list order and right before gradients - are applied to variables for updates. Default is no callbacks. Only used - when explicit_allreduce=True. - train_summary_interval: Step interval for training summaries. If the value - is a negative number, then training summaries are not enabled. - - Returns: - Trained model. - - Raises: - ValueError: (1) When the model returned by `model_fn` does not have an optimizer - attribute or when required parameters are set to none. (2) eval args are - not specified correctly. (3) metric_fn must be a callable if specified. - (4) sub_model_checkpoint_name is specified, but `sub_model` returned - by `model_fn` is None. - """ - - if _sentinel is not None: - raise ValueError('only call `run_customized_training_loop()` ' - 'with named arguments.') - - required_arguments = [ - strategy, model_fn, loss_fn, model_dir, steps_per_epoch, train_input_fn - ] - - steps_between_evals = int(steps_per_epoch / num_eval_per_epoch) - if [arg for arg in required_arguments if arg is None]: - raise ValueError('`strategy`, `model_fn`, `loss_fn`, `model_dir`, ' - '`steps_per_epoch` and `train_input_fn` are required ' - 'parameters.') - if not steps_per_loop: - if tf.config.list_logical_devices('TPU'): - # One can't fully utilize a TPU with steps_per_loop=1, so in this case - # we default users to a more useful value. - steps_per_loop = min(1000, steps_between_evals) - else: - steps_per_loop = 1 - logging.info('steps_per_loop not specified.
Using steps_per_loop=%d', - steps_per_loop) - if steps_per_loop > steps_between_evals: - logging.warning( - 'steps_per_loop: %d is specified to be greater than ' - ' steps_between_evals: %d, we will use steps_between_evals as' - ' steps_per_loop.', steps_per_loop, steps_between_evals) - steps_per_loop = steps_between_evals - assert tf.executing_eagerly() - - if run_eagerly: - if isinstance(strategy, tf.distribute.experimental.TPUStrategy): - raise ValueError( - 'TPUStrategy should not run eagerly as it heavily relies on graph' - ' optimization for the distributed system.') - - if eval_input_fn and eval_steps is None: - raise ValueError( - '`eval_step` is required when `eval_input_fn ` is not none.') - if metric_fn and not callable(metric_fn): - raise ValueError( - 'if `metric_fn` is specified, metric_fn must be a callable.') - - total_training_steps = steps_per_epoch * epochs - train_iterator = _get_input_iterator(train_input_fn, strategy) - eval_loss_metric = tf.keras.metrics.Mean('training_loss', dtype=tf.float32) - - with distribution_utils.get_strategy_scope(strategy): - # To correctly place the model weights on accelerators, - # model and optimizer should be created in scope. - model, sub_model = model_fn() - if not hasattr(model, 'optimizer'): - raise ValueError('User should set optimizer attribute to model ' - 'inside `model_fn`.') - if sub_model_export_name and sub_model is None: - raise ValueError('sub_model_export_name is specified as %s, but ' - 'sub_model is None.' % sub_model_export_name) - - callback_list = tf.keras.callbacks.CallbackList( - callbacks=custom_callbacks, model=model) - - optimizer = model.optimizer - - if init_checkpoint: - logging.info( - 'Checkpoint file %s found and restoring from ' - 'initial checkpoint for core model.', init_checkpoint) - checkpoint = tf.train.Checkpoint(model=sub_model) - checkpoint.restore(init_checkpoint).assert_existing_objects_matched() - logging.info('Loading from checkpoint file completed') - - train_loss_metric = tf.keras.metrics.Mean('training_loss', dtype=tf.float32) - eval_metrics = [metric_fn()] if metric_fn else [] - # If evaluation is required, make a copy of metric as it will be used by - # both train and evaluation. - train_metrics = [ - metric.__class__.from_config(metric.get_config()) - for metric in eval_metrics - ] - - # Create summary writers - if _should_export_summary(strategy): - summary_dir = os.path.join(model_dir, 'summaries') - else: - # In multi worker training we need every worker to write summary, because - # variables can trigger synchronization on read and synchronization needs - # all workers to participate. - summary_dir = tempfile.mkdtemp() - eval_summary_writer = tf.summary.create_file_writer( - os.path.join(summary_dir, 'eval')) - last_summary_step = 0 - if steps_per_loop >= _MIN_SUMMARY_STEPS and train_summary_interval >= 0: - # Only writes summary when the stats are collected sufficiently over - # enough steps. - train_summary_writer = tf.summary.create_file_writer( - os.path.join(summary_dir, 'train')) - else: - train_summary_writer = tf.summary.create_noop_writer() - - # Collects training variables. - training_vars = model.trainable_variables - - def _replicated_step(inputs): - """Replicated training step.""" - - inputs, labels = inputs - with tf.GradientTape() as tape: - model_outputs = model(inputs, training=True) - loss = loss_fn(labels, model_outputs) - # Raw loss is used for reporting in metrics/logs. 
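The `_replicated_step` body that continues below divides the per-replica loss by `num_replicas_in_sync` before taking gradients. A plain-Python illustration, with made-up numbers, of why that keeps gradients invariant to the replica count: cross-replica aggregation sums, so pre-scaling by 1/N reproduces the global-batch mean.

```python
# Hypothetical per-replica losses on a 4-replica setup.
num_replicas = 4
per_replica_losses = [0.8, 1.2, 1.0, 1.0]

# Gradients are summed across replicas, so scaling each replica's loss by
# 1/N makes the summed result equal the gradient of the global-batch mean.
scaled = [loss / num_replicas for loss in per_replica_losses]
print(sum(scaled))                  # 1.0
print(sum(per_replica_losses) / 4)  # 1.0, the mean over the global batch
```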
- raw_loss = loss - if scale_loss: - # Scales down the loss so that gradients are invariant to the number of replicas. - loss = loss / strategy.num_replicas_in_sync - - if explicit_allreduce: - grad_utils.minimize_using_explicit_allreduce(tape, optimizer, loss, - training_vars, - pre_allreduce_callbacks, - post_allreduce_callbacks) - else: - if isinstance(optimizer, - tf.keras.mixed_precision.experimental.LossScaleOptimizer): - with tape: - scaled_loss = optimizer.get_scaled_loss(loss) - scaled_grads = tape.gradient(scaled_loss, training_vars) - grads = optimizer.get_unscaled_gradients(scaled_grads) - else: - grads = tape.gradient(loss, training_vars) - optimizer.apply_gradients(zip(grads, training_vars)) - # For reporting, the metric takes the mean of losses. - train_loss_metric.update_state(raw_loss) - for metric in train_metrics: - metric.update_state(labels, model_outputs) - - @tf.function - def train_steps(iterator, steps): - """Performs distributed training steps in a loop. - - Args: - iterator: the distributed iterator of training datasets. - steps: a tf.int32 integer tensor specifying the number of steps to run - inside the host training loop. - - Raises: - ValueError: Any of the arguments or tensor shapes are invalid. - """ - if not isinstance(steps, tf.Tensor): - raise ValueError('steps should be a Tensor. A Python object may cause ' - 'retracing.') - - for _ in tf.range(steps): - strategy.run(_replicated_step, args=(next(iterator),)) - - def train_single_step(iterator): - """Performs a distributed training step. - - Args: - iterator: the distributed iterator of training datasets. - - Raises: - ValueError: Any of the arguments or tensor shapes are invalid. - """ - strategy.run(_replicated_step, args=(next(iterator),)) - - def test_step(iterator): - """Calculates evaluation metrics on distributed devices.""" - - def _test_step_fn(inputs): - """Replicated accuracy calculation.""" - - inputs, labels = inputs - model_outputs = model(inputs, training=False) - for metric in eval_metrics: - metric.update_state(labels, model_outputs) - return model_outputs, labels - - outputs, labels = strategy.run(_test_step_fn, args=(next(iterator),)) - outputs = tf.nest.map_structure(strategy.experimental_local_results, - outputs) - labels = tf.nest.map_structure(strategy.experimental_local_results, - labels) - return outputs, labels - - if not run_eagerly: - train_single_step = tf.function(train_single_step) - test_step = tf.function(test_step) - - def _run_evaluation(current_training_step, test_iterator): - """Runs validation steps and aggregates metrics. - - Args: - current_training_step: tf.int32 tensor containing the current step. - test_iterator: distributed iterator of test datasets. - - Returns: - A dict of metric names and values. - """ - # The last batch of the evaluation is often smaller than previous ones. - # Moreover, in some distributed environments it might even be empty. Therefore, - # unlike the way training_loss is calculated, all the logits and labels need - # to be gathered here to calculate the evaluation loss - # outside. - loss_list, loss_weights = list(), list() - for _ in range(eval_steps): - outputs, labels = test_step(test_iterator) - for cur_logits, cur_labels in zip(outputs, labels): - # This is to handle cases when cur_labels is not a single tensor, - # but a dict of tensors.
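`_run_evaluation` continues below by weighting each batch's loss with the number of examples actually present (`cur_weight`). A small sketch, with made-up numbers, of why that weighting matters when the final batch is smaller:

```python
# Hypothetical per-batch mean losses and the example counts behind them.
losses = [0.50, 0.70, 0.90]
counts = [32, 32, 8]   # the last eval batch is smaller

weighted_mean = sum(l * c for l, c in zip(losses, counts)) / sum(counts)
naive_mean = sum(losses) / len(losses)
print(round(weighted_mean, 4))  # 0.6333, the true per-example mean
print(round(naive_mean, 4))     # 0.7, biased toward the small last batch
```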
- cur_weight = tf.shape(tf.nest.flatten(cur_labels)[0])[0] - if cur_weight != 0: - loss_list.append(loss_fn(cur_labels, cur_logits).numpy()) - loss_weights.append(cur_weight) - # The sample_weights are the actual numbers of examples in each batch, - # a sum of the numbers of examples in each replica if using - # distributed training. - eval_loss_metric.update_state(loss_list, sample_weight=loss_weights) - - logs = {} - with eval_summary_writer.as_default(): - for metric in [eval_loss_metric] + eval_metrics + model.metrics: - metric_value = _float_metric_value(metric) - logs[metric.name] = metric_value - logging.info('Step: [%d] Validation %s = %f', current_training_step, - metric.name, metric_value) - tf.summary.scalar( - metric.name, metric_value, step=current_training_step) - eval_summary_writer.flush() - - return logs - - # Training loop starts here. - checkpoint = tf.train.Checkpoint( - model=model, optimizer=optimizer, global_step=optimizer.iterations) - sub_model_checkpoint = tf.train.Checkpoint( - model=sub_model, - global_step=optimizer.iterations) if sub_model_export_name else None - - latest_checkpoint_file = tf.train.latest_checkpoint(model_dir) - if latest_checkpoint_file: - logging.info('Checkpoint file %s found and restoring from ' - 'checkpoint', latest_checkpoint_file) - checkpoint.restore(latest_checkpoint_file) - logging.info('Loading from checkpoint file completed') - - current_step = optimizer.iterations.numpy() - checkpoint_name = 'ctl_step_{step}.ckpt' - - logs = {} - callback_list.on_train_begin() - while current_step < total_training_steps and not model.stop_training: - if current_step % steps_per_epoch == 0: - callback_list.on_epoch_begin( - int(current_step / steps_per_epoch) + 1) - - # Training loss/metrics take the average over the steps inside the micro - # training loop. We reset their values before each round. - train_loss_metric.reset_states() - for metric in train_metrics + model.metrics: - metric.reset_states() - - callback_list.on_batch_begin(current_step) - # Runs several steps in the host while loop. - steps = steps_to_run(current_step, steps_between_evals, steps_per_loop) - - if tf.config.list_physical_devices('GPU'): - # TODO(zongweiz): merge with train_steps once tf.while_loop - # GPU performance bugs are fixed. - for _ in range(steps): - train_single_step(train_iterator) - else: - # Converts steps to a Tensor to avoid tf.function retracing. - train_steps(train_iterator, tf.convert_to_tensor(steps, dtype=tf.int32)) - train_loss = _float_metric_value(train_loss_metric) - current_step += steps - - # Updates training logging.
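The `steps` value computed just above comes from `steps_to_run`, defined earlier in this file: the host loop runs up to `steps_per_loop` steps at a time but never crosses an eval boundary, so checkpoints and evaluations land on exact multiples. A condensed, runnable restatement of that boundary behavior (argument validation omitted; note the call site passes `steps_between_evals` as the epoch length):

```python
def steps_to_run(current_step, steps_per_epoch, steps_per_loop):
    # Run at most steps_per_loop steps, clipped at the next epoch boundary.
    remainder = current_step % steps_per_epoch
    if remainder != 0:
        return min(steps_per_epoch - remainder, steps_per_loop)
    return steps_per_loop

print(steps_to_run(90, 100, 50))   # 10: stop exactly at step 100
print(steps_to_run(100, 100, 50))  # 50: fresh boundary, full chunk
```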
- training_status = 'Train Step: %d/%d / loss = %s' % ( - current_step, total_training_steps, train_loss) - - if current_step >= last_summary_step + train_summary_interval: - summary_writer = train_summary_writer - last_summary_step = current_step - else: - summary_writer = tf.summary.create_noop_writer() - - with summary_writer.as_default(): - if callable(optimizer.learning_rate): - tf.summary.scalar( - 'learning_rate', - optimizer.learning_rate(current_step), - step=current_step) - tf.summary.scalar(train_loss_metric.name, train_loss, step=current_step) - for metric in train_metrics + model.metrics: - metric_value = _float_metric_value(metric) - training_status += ' %s = %f' % (metric.name, metric_value) - tf.summary.scalar(metric.name, metric_value, step=current_step) - summary_writer.flush() - logging.info(training_status) - - # If no need for evaluation, we only call on_batch_end with train_loss, - # this is to ensure we get granular global_step/sec on Tensorboard. - if current_step % steps_between_evals: - callback_list.on_batch_end(current_step - 1, {'loss': train_loss}) - else: - # Save a submodel with the step in the file name after each epoch. - if sub_model_export_name: - _save_checkpoint( - strategy, sub_model_checkpoint, model_dir, - '%s_step_%d.ckpt' % (sub_model_export_name, current_step)) - - # Save model checkpoints and run validation steps after each epoch - # (with the exception of the final epoch which is handled after the - # training loop). - if current_step < total_training_steps: - _save_checkpoint(strategy, checkpoint, model_dir, - checkpoint_name.format(step=current_step)) - if eval_input_fn: - logging.info('Running evaluation after step: %s.', current_step) - logs = _run_evaluation(current_step, - _get_input_iterator(eval_input_fn, strategy)) - # Re-initialize evaluation metric. - eval_loss_metric.reset_states() - for metric in eval_metrics + model.metrics: - metric.reset_states() - # We add train_loss here rather than call on_batch_end twice to make - # sure that no duplicated values are generated. - logs['loss'] = train_loss - callback_list.on_batch_end(current_step - 1, logs) - - # Calls on_epoch_end after each real epoch ends to prevent mis-calculation - # of training steps. - if current_step % steps_per_epoch == 0: - callback_list.on_epoch_end(int(current_step / steps_per_epoch), logs) - - if sub_model_export_name: - _save_checkpoint(strategy, sub_model_checkpoint, model_dir, - '%s.ckpt' % sub_model_export_name) - - _save_checkpoint(strategy, checkpoint, model_dir, - checkpoint_name.format(step=current_step)) - if eval_input_fn: - logging.info('Running final evaluation after training is complete.') - logs = _run_evaluation(current_step, - _get_input_iterator(eval_input_fn, strategy)) - callback_list.on_epoch_end(int(current_step / steps_per_epoch), logs) - training_summary = { - 'total_training_steps': total_training_steps, - 'train_loss': _float_metric_value(train_loss_metric), - } - for metric in model.metrics: - training_summary[metric.name] = _float_metric_value(metric) - if eval_metrics: - # TODO(hongkuny): Cleans up summary reporting in text. 
- training_summary['last_train_metrics'] = _float_metric_value( - train_metrics[0]) - training_summary['eval_metrics'] = _float_metric_value(eval_metrics[0]) - - write_txt_summary(training_summary, summary_dir) - - if not _should_export_summary(strategy): - tf.io.gfile.rmtree(summary_dir) - - callback_list.on_train_end() - - return model diff --git a/spaces/NCTCMumbai/NCTC/models/research/adversarial_text/layers.py b/spaces/NCTCMumbai/NCTC/models/research/adversarial_text/layers.py deleted file mode 100644 index be4c7a47e0871182d82310e07e5739c2fc9f8744..0000000000000000000000000000000000000000 --- a/spaces/NCTCMumbai/NCTC/models/research/adversarial_text/layers.py +++ /dev/null @@ -1,397 +0,0 @@ -# Copyright 2017 Google Inc. All Rights Reserved. -# -# Licensed under the Apache License, Version 2.0 (the "License"); -# you may not use this file except in compliance with the License. -# You may obtain a copy of the License at -# -# http://www.apache.org/licenses/LICENSE-2.0 -# -# Unless required by applicable law or agreed to in writing, software -# distributed under the License is distributed on an "AS IS" BASIS, -# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. -# See the License for the specific language governing permissions and -# limitations under the License. -# ============================================================================== -"""Layers for VatxtModel.""" -from __future__ import absolute_import -from __future__ import division -from __future__ import print_function - -# Dependency imports - -from six.moves import xrange -import tensorflow as tf -K = tf.keras - - -def cl_logits_subgraph(layer_sizes, input_size, num_classes, keep_prob=1.): - """Construct multiple ReLU layers with dropout and a linear layer.""" - subgraph = K.models.Sequential(name='cl_logits') - for i, layer_size in enumerate(layer_sizes): - if i == 0: - subgraph.add( - K.layers.Dense(layer_size, activation='relu', input_dim=input_size)) - else: - subgraph.add(K.layers.Dense(layer_size, activation='relu')) - - if keep_prob < 1.: - subgraph.add(K.layers.Dropout(1. - keep_prob)) - subgraph.add(K.layers.Dense(1 if num_classes == 2 else num_classes)) - return subgraph - - -class Embedding(K.layers.Layer): - """Embedding layer with frequency-based normalization and dropout.""" - - def __init__(self, - vocab_size, - embedding_dim, - normalize=False, - vocab_freqs=None, - keep_prob=1., - **kwargs): - self.vocab_size = vocab_size - self.embedding_dim = embedding_dim - self.normalized = normalize - self.keep_prob = keep_prob - - if normalize: - assert vocab_freqs is not None - self.vocab_freqs = tf.constant( - vocab_freqs, dtype=tf.float32, shape=(vocab_size, 1)) - - super(Embedding, self).__init__(**kwargs) - - def build(self, input_shape): - with tf.device('/cpu:0'): - self.var = self.add_weight( - shape=(self.vocab_size, self.embedding_dim), - initializer=tf.random_uniform_initializer(-1., 1.), - name='embedding', - dtype=tf.float32) - - if self.normalized: - self.var = self._normalize(self.var) - - super(Embedding, self).build(input_shape) - - def call(self, x): - embedded = tf.nn.embedding_lookup(self.var, x) - if self.keep_prob < 1.: - shape = embedded.get_shape().as_list() - - # Use same dropout masks at each timestep with specifying noise_shape. - # This slightly improves performance. - # Please see https://arxiv.org/abs/1512.05287 for the theoretical - # explanation. 
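The noise_shape trick referenced in the comment above (variational dropout, Gal and Ghahramani) samples one dropout mask per sequence and reuses it at every timestep; the actual call follows below. A standalone sketch of the effect, written against the TF 2.x `rate=` signature rather than the deleted file's TF 1.x `keep_prob`:

```python
import tensorflow as tf

x = tf.ones([2, 5, 4])  # (batch, timesteps, embedding_dim)
# noise_shape=(batch, 1, dim): the timestep axis is broadcast, so each
# sequence gets a single mask shared across all 5 timesteps.
dropped = tf.nn.dropout(x, rate=0.5, noise_shape=[2, 1, 4])
same_mask = tf.reduce_all(dropped[:, 0, :] == dropped[:, 1, :])
print(bool(same_mask))  # True: all timesteps share one mask per sequence
```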
- embedded = tf.nn.dropout( - embedded, self.keep_prob, noise_shape=(shape[0], 1, shape[2])) - return embedded - - def _normalize(self, emb): - weights = self.vocab_freqs / tf.reduce_sum(self.vocab_freqs) - mean = tf.reduce_sum(weights * emb, 0, keep_dims=True) - var = tf.reduce_sum(weights * tf.pow(emb - mean, 2.), 0, keep_dims=True) - stddev = tf.sqrt(1e-6 + var) - return (emb - mean) / stddev - - -class LSTM(object): - """LSTM layer using dynamic_rnn. - - Exposes variables in `trainable_weights` property. - """ - - def __init__(self, cell_size, num_layers=1, keep_prob=1., name='LSTM'): - self.cell_size = cell_size - self.num_layers = num_layers - self.keep_prob = keep_prob - self.reuse = None - self.trainable_weights = None - self.name = name - - def __call__(self, x, initial_state, seq_length): - with tf.variable_scope(self.name, reuse=self.reuse) as vs: - cell = tf.contrib.rnn.MultiRNNCell([ - tf.contrib.rnn.BasicLSTMCell( - self.cell_size, - forget_bias=0.0, - reuse=tf.get_variable_scope().reuse) - for _ in xrange(self.num_layers) - ]) - - # shape(x) = (batch_size, num_timesteps, embedding_dim) - - lstm_out, next_state = tf.nn.dynamic_rnn( - cell, x, initial_state=initial_state, sequence_length=seq_length) - - # shape(lstm_out) = (batch_size, timesteps, cell_size) - - if self.keep_prob < 1.: - lstm_out = tf.nn.dropout(lstm_out, self.keep_prob) - - if self.reuse is None: - self.trainable_weights = vs.global_variables() - - self.reuse = True - - return lstm_out, next_state - - -class SoftmaxLoss(K.layers.Layer): - """Softmax xentropy loss with candidate sampling.""" - - def __init__(self, - vocab_size, - num_candidate_samples=-1, - vocab_freqs=None, - **kwargs): - self.vocab_size = vocab_size - self.num_candidate_samples = num_candidate_samples - self.vocab_freqs = vocab_freqs - super(SoftmaxLoss, self).__init__(**kwargs) - self.multiclass_dense_layer = K.layers.Dense(self.vocab_size) - - def build(self, input_shape): - input_shape = input_shape[0].as_list() - with tf.device('/cpu:0'): - self.lin_w = self.add_weight( - shape=(input_shape[-1], self.vocab_size), - name='lm_lin_w', - initializer=K.initializers.glorot_uniform()) - self.lin_b = self.add_weight( - shape=(self.vocab_size,), - name='lm_lin_b', - initializer=K.initializers.glorot_uniform()) - self.multiclass_dense_layer.build(input_shape) - - super(SoftmaxLoss, self).build(input_shape) - - def call(self, inputs): - x, labels, weights = inputs - if self.num_candidate_samples > -1: - assert self.vocab_freqs is not None - labels_reshaped = tf.reshape(labels, [-1]) - labels_reshaped = tf.expand_dims(labels_reshaped, -1) - sampled = tf.nn.fixed_unigram_candidate_sampler( - true_classes=labels_reshaped, - num_true=1, - num_sampled=self.num_candidate_samples, - unique=True, - range_max=self.vocab_size, - unigrams=self.vocab_freqs) - inputs_reshaped = tf.reshape(x, [-1, int(x.get_shape()[2])]) - - lm_loss = tf.nn.sampled_softmax_loss( - weights=tf.transpose(self.lin_w), - biases=self.lin_b, - labels=labels_reshaped, - inputs=inputs_reshaped, - num_sampled=self.num_candidate_samples, - num_classes=self.vocab_size, - sampled_values=sampled) - lm_loss = tf.reshape( - lm_loss, - [int(x.get_shape()[0]), int(x.get_shape()[1])]) - else: - logits = self.multiclass_dense_layer(x) - lm_loss = tf.nn.sparse_softmax_cross_entropy_with_logits( - logits=logits, labels=labels) - - lm_loss = tf.identity( - tf.reduce_sum(lm_loss * weights) / _num_labels(weights), - name='lm_xentropy_loss') - return lm_loss - - -def classification_loss(logits, labels, 
weights): - """Computes cross entropy loss between logits and labels. - - Args: - logits: 2-D [timesteps*batch_size, m] float tensor, where m=1 if - num_classes=2, otherwise m=num_classes. - labels: 1-D [timesteps*batch_size] integer tensor. - weights: 1-D [timesteps*batch_size] float tensor. - - Returns: - Loss scalar of type float. - """ - inner_dim = logits.get_shape().as_list()[-1] - with tf.name_scope('classifier_loss'): - # Logistic loss - if inner_dim == 1: - loss = tf.nn.sigmoid_cross_entropy_with_logits( - logits=tf.squeeze(logits, -1), labels=tf.cast(labels, tf.float32)) - # Softmax loss - else: - loss = tf.nn.sparse_softmax_cross_entropy_with_logits( - logits=logits, labels=labels) - - num_lab = _num_labels(weights) - tf.summary.scalar('num_labels', num_lab) - return tf.identity( - tf.reduce_sum(weights * loss) / num_lab, name='classification_xentropy') - - -def accuracy(logits, targets, weights): - """Computes prediction accuracy. - - Args: - logits: 2-D classifier logits [timesteps*batch_size, num_classes] - targets: 1-D [timesteps*batch_size] integer tensor. - weights: 1-D [timesteps*batch_size] float tensor. - - Returns: - Accuracy: float scalar. - """ - with tf.name_scope('accuracy'): - eq = tf.cast(tf.equal(predictions(logits), targets), tf.float32) - return tf.identity( - tf.reduce_sum(weights * eq) / _num_labels(weights), name='accuracy') - - -def predictions(logits): - """Class prediction from logits.""" - inner_dim = logits.get_shape().as_list()[-1] - with tf.name_scope('predictions'): - # For binary classification - if inner_dim == 1: - pred = tf.cast(tf.greater(tf.squeeze(logits, -1), 0.), tf.int64) - # For multi-class classification - else: - pred = tf.argmax(logits, 2) - return pred - - -def _num_labels(weights): - """Number of 1's in weights. Returns 1. if 0.""" - num_labels = tf.reduce_sum(weights) - num_labels = tf.where(tf.equal(num_labels, 0.), 1., num_labels) - return num_labels - - -def optimize(loss, - global_step, - max_grad_norm, - lr, - lr_decay, - sync_replicas=False, - replicas_to_aggregate=1, - task_id=0): - """Builds optimization graph. - - * Creates an optimizer, and optionally wraps with SyncReplicasOptimizer - * Computes, clips, and applies gradients - * Maintains moving averages for all trainable variables - * Summarizes variables and gradients - - Args: - loss: scalar loss to minimize. - global_step: integer scalar Variable. - max_grad_norm: float scalar. Grads will be clipped to this value. - lr: float scalar, learning rate. - lr_decay: float scalar, learning rate decay rate. - sync_replicas: bool, whether to use SyncReplicasOptimizer. - replicas_to_aggregate: int, number of replicas to aggregate when using - SyncReplicasOptimizer. - task_id: int, id of the current task; used to ensure proper initialization - of SyncReplicasOptimizer. - - Returns: - train_op - """ - with tf.name_scope('optimization'): - # Compute gradients. 
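The gradient computation that follows feeds the non-embedding gradients through `tf.clip_by_global_norm`, which rescales every tensor by the same factor once their joint norm exceeds `max_grad_norm`. A standalone TF 2.x sketch of that behavior with hand-picked numbers:

```python
import tensorflow as tf

grads = [tf.constant([3.0, 4.0]), tf.constant([0.0, 12.0])]
clipped, global_norm = tf.clip_by_global_norm(grads, clip_norm=6.5)
print(global_norm.numpy())  # 13.0 = sqrt(3^2 + 4^2 + 12^2)
print(clipped[0].numpy())   # [1.5 2. ], every gradient scaled by 6.5/13
print(clipped[1].numpy())   # [0. 6.]
```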
- tvars = tf.trainable_variables() - grads = tf.gradients( - loss, - tvars, - aggregation_method=tf.AggregationMethod.EXPERIMENTAL_ACCUMULATE_N) - - # Clip non-embedding grads - non_embedding_grads_and_vars = [(g, v) for (g, v) in zip(grads, tvars) - if 'embedding' not in v.op.name] - embedding_grads_and_vars = [(g, v) for (g, v) in zip(grads, tvars) - if 'embedding' in v.op.name] - - ne_grads, ne_vars = zip(*non_embedding_grads_and_vars) - ne_grads, _ = tf.clip_by_global_norm(ne_grads, max_grad_norm) - non_embedding_grads_and_vars = zip(ne_grads, ne_vars) - - grads_and_vars = embedding_grads_and_vars + list(non_embedding_grads_and_vars) - - # Summarize - _summarize_vars_and_grads(grads_and_vars) - - # Decaying learning rate - lr = tf.train.exponential_decay( - lr, global_step, 1, lr_decay, staircase=True) - tf.summary.scalar('learning_rate', lr) - opt = tf.train.AdamOptimizer(lr) - - # Track the moving averages of all trainable variables. - variable_averages = tf.train.ExponentialMovingAverage(0.999, global_step) - - # Apply gradients - if sync_replicas: - opt = tf.train.SyncReplicasOptimizer( - opt, - replicas_to_aggregate, - variable_averages=variable_averages, - variables_to_average=tvars, - total_num_replicas=replicas_to_aggregate) - apply_gradient_op = opt.apply_gradients( - grads_and_vars, global_step=global_step) - with tf.control_dependencies([apply_gradient_op]): - train_op = tf.no_op(name='train_op') - - # Initialization ops - tf.add_to_collection(tf.GraphKeys.QUEUE_RUNNERS, - opt.get_chief_queue_runner()) - if task_id == 0: # Chief task - local_init_op = opt.chief_init_op - tf.add_to_collection('chief_init_op', opt.get_init_tokens_op()) - else: - local_init_op = opt.local_step_init_op - tf.add_to_collection('local_init_op', local_init_op) - tf.add_to_collection('ready_for_local_init_op', - opt.ready_for_local_init_op) - else: - # Non-sync optimizer - apply_gradient_op = opt.apply_gradients(grads_and_vars, global_step) - with tf.control_dependencies([apply_gradient_op]): - train_op = variable_averages.apply(tvars) - - return train_op - - -def _summarize_vars_and_grads(grads_and_vars): - tf.logging.info('Trainable variables:') - tf.logging.info('-' * 60) - for grad, var in grads_and_vars: - tf.logging.info(var) - - def tag(name, v=var): - return v.op.name + '_' + name - - # Variable summary - mean = tf.reduce_mean(var) - tf.summary.scalar(tag('mean'), mean) - with tf.name_scope(tag('stddev')): - stddev = tf.sqrt(tf.reduce_mean(tf.square(var - mean))) - tf.summary.scalar(tag('stddev'), stddev) - tf.summary.scalar(tag('max'), tf.reduce_max(var)) - tf.summary.scalar(tag('min'), tf.reduce_min(var)) - tf.summary.histogram(tag('histogram'), var) - - # Gradient summary - if grad is not None: - if isinstance(grad, tf.IndexedSlices): - grad_values = grad.values - else: - grad_values = grad - - tf.summary.histogram(tag('gradient'), grad_values) - tf.summary.scalar(tag('gradient_norm'), tf.global_norm([grad_values])) - else: - tf.logging.info('Var %s has no gradient', var.op.name) diff --git a/spaces/NCTCMumbai/NCTC/models/research/attention_ocr/python/metrics.py b/spaces/NCTCMumbai/NCTC/models/research/attention_ocr/python/metrics.py deleted file mode 100644 index 9e2a6a7579812583dc60546f97976f05befe07ff..0000000000000000000000000000000000000000 --- a/spaces/NCTCMumbai/NCTC/models/research/attention_ocr/python/metrics.py +++ /dev/null @@ -1,90 +0,0 @@ -# Copyright 2017 The TensorFlow Authors All Rights Reserved. 
-# -# Licensed under the Apache License, Version 2.0 (the "License"); -# you may not use this file except in compliance with the License. -# You may obtain a copy of the License at -# -# http://www.apache.org/licenses/LICENSE-2.0 -# -# Unless required by applicable law or agreed to in writing, software -# distributed under the License is distributed on an "AS IS" BASIS, -# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. -# See the License for the specific language governing permissions and -# limitations under the License. -# ============================================================================== - -"""Quality metrics for the model.""" - -import tensorflow as tf - - -def char_accuracy(predictions, targets, rej_char, streaming=False): - """Computes character-level accuracy. - - Both predictions and targets should have the same shape - [batch_size x seq_length]. - - Args: - predictions: predicted character ids. - targets: ground truth character ids. - rej_char: the character id used to mark an empty element (end of sequence). - streaming: if True, uses the streaming mean from the slim.metric module. - - Returns: - an update op for execution and a value tensor whose value on evaluation - returns the total character accuracy. - """ - with tf.variable_scope('CharAccuracy'): - predictions.get_shape().assert_is_compatible_with(targets.get_shape()) - - targets = tf.to_int32(targets) - const_rej_char = tf.constant(rej_char, shape=targets.get_shape()) - weights = tf.to_float(tf.not_equal(targets, const_rej_char)) - correct_chars = tf.to_float(tf.equal(predictions, targets)) - accuracy_per_example = tf.div( - tf.reduce_sum(tf.multiply(correct_chars, weights), 1), - tf.reduce_sum(weights, 1)) - if streaming: - return tf.contrib.metrics.streaming_mean(accuracy_per_example) - else: - return tf.reduce_mean(accuracy_per_example) - - -def sequence_accuracy(predictions, targets, rej_char, streaming=False): - """Computes sequence-level accuracy. - - Both input tensors should have the same shape: [batch_size x seq_length]. - - Args: - predictions: predicted character classes. - targets: ground truth character classes. - rej_char: the character id used to mark an empty element (end of sequence). - streaming: if True, uses the streaming mean from the slim.metric module. - - Returns: - an update op for execution and a value tensor whose value on evaluation - returns the total sequence accuracy.
- """ - - with tf.variable_scope('SequenceAccuracy'): - predictions.get_shape().assert_is_compatible_with(targets.get_shape()) - - targets = tf.to_int32(targets) - const_rej_char = tf.constant( - rej_char, shape=targets.get_shape(), dtype=tf.int32) - include_mask = tf.not_equal(targets, const_rej_char) - include_predictions = tf.to_int32( - tf.where(include_mask, predictions, - tf.zeros_like(predictions) + rej_char)) - correct_chars = tf.to_float(tf.equal(include_predictions, targets)) - correct_chars_counts = tf.cast( - tf.reduce_sum(correct_chars, reduction_indices=[1]), dtype=tf.int32) - target_length = targets.get_shape().dims[1].value - target_chars_counts = tf.constant( - target_length, shape=correct_chars_counts.get_shape()) - accuracy_per_example = tf.to_float( - tf.equal(correct_chars_counts, target_chars_counts)) - if streaming: - return tf.contrib.metrics.streaming_mean(accuracy_per_example) - else: - return tf.reduce_mean(accuracy_per_example) diff --git a/spaces/OFA-Sys/OFA-Generic_Interface/fairseq/tests/test_multi_corpus_dataset.py b/spaces/OFA-Sys/OFA-Generic_Interface/fairseq/tests/test_multi_corpus_dataset.py deleted file mode 100644 index 5a79f4b680e5bc2c7374ec6dd8ea525c47b40985..0000000000000000000000000000000000000000 --- a/spaces/OFA-Sys/OFA-Generic_Interface/fairseq/tests/test_multi_corpus_dataset.py +++ /dev/null @@ -1,79 +0,0 @@ -# Copyright (c) Facebook, Inc. and its affiliates. -# -# This source code is licensed under the MIT license found in the -# LICENSE file in the root directory of this source tree. - -import unittest -from collections import OrderedDict - -import torch -from fairseq.data import LanguagePairDataset, TokenBlockDataset -from fairseq.data.multi_corpus_dataset import MultiCorpusDataset -from tests.test_train import mock_dict - - -class TestMultiCorpusDataset(unittest.TestCase): - def setUp(self): - d = mock_dict() - tokens_1 = torch.LongTensor([i for i in range(1, 5000, 2)]).view(1, -1) - tokens_ds1 = TokenBlockDataset( - tokens_1, - sizes=[tokens_1.size(-1)], - block_size=1, - pad=0, - eos=1, - include_targets=False, - ) - self.dataset_1 = LanguagePairDataset( - tokens_ds1, tokens_ds1.sizes, d, shuffle=False - ) - tokens_2 = torch.LongTensor([i for i in range(0, 5000, 2)]).view(1, -1) - tokens_ds2 = TokenBlockDataset( - tokens_2, - sizes=[tokens_2.size(-1)], - block_size=1, - pad=0, - eos=1, - include_targets=False, - ) - self.dataset_2 = LanguagePairDataset( - tokens_ds2, tokens_ds2.sizes, d, shuffle=False - ) - - def _test_sample_helper( - self, - distribution, - ): - m = MultiCorpusDataset( - OrderedDict({0: self.dataset_1, 1: self.dataset_2}), - distribution=distribution, - seed=0, - sort_indices=True, - ) - m.set_epoch(1) - indices = m.ordered_indices() - count_sample_from_first_dataset = 0 - items = set() - for i in indices: - item = m[i]["source"].item() - if item % 2 == 1: - count_sample_from_first_dataset += 1 - - items.add(item) - sample_from_first_ds_percentage = ( - 1.0 * count_sample_from_first_dataset / len(indices) - ) - self.assertLess( - abs(sample_from_first_ds_percentage - distribution[0]), - 0.01, - ) - self.assertEqual( - len(items), - int(min(len(self.dataset_1), len(indices) * distribution[0]) - + min(len(self.dataset_1), len(indices) * distribution[1])) - ) - print(distribution) - - def test_multi_corpus_dataset(self): - for distribution in [[0.5, 0.5], [0.1, 0.9], [0.9, 0.1]]: - self._test_sample_helper(distribution=distribution) diff --git a/spaces/OFA-Sys/OFA-vqa/utils/eval_utils.py 
b/spaces/OFA-Sys/OFA-vqa/utils/eval_utils.py deleted file mode 100644 index f84008d24aedf755f0c3b8c0888dcc8ca1dabbf4..0000000000000000000000000000000000000000 --- a/spaces/OFA-Sys/OFA-vqa/utils/eval_utils.py +++ /dev/null @@ -1,136 +0,0 @@ -import string -import math - -import torch - -from data import data_utils - - -def get_symbols_to_strip_from_output(generator): - if hasattr(generator, "symbols_to_strip_from_output"): - return generator.symbols_to_strip_from_output - else: - return {generator.bos, generator.eos} - - -def decode_fn(x, tgt_dict, bpe, generator, tokenizer=None): - x = tgt_dict.string(x.int().cpu(), extra_symbols_to_ignore=get_symbols_to_strip_from_output(generator)) - if bpe is not None: - x = bpe.decode(x) - if tokenizer is not None: - x = tokenizer.decode(x) - return x - - -def eval_caption(task, generator, models, sample): - transtab = str.maketrans({key: None for key in string.punctuation}) - hypos = task.inference_step(generator, models, sample) - results = [] - for i, sample_id in enumerate(sample["id"].tolist()): - detok_hypo_str = decode_fn(hypos[i][0]["tokens"], task.tgt_dict, task.bpe, generator) - results.append({"image_id": str(sample_id), "caption": detok_hypo_str.translate(transtab).strip()}) - return results, None - - -def eval_vqa_gen(task, generator, models, sample): - encoder_out = models[0].encoder( - sample["net_input"]["src_tokens"], - src_lengths=sample["net_input"]["src_lengths"], - patch_images=sample["net_input"]["patch_images"], - patch_masks=sample["net_input"]["patch_masks"] - ) - device = sample["net_input"]["src_tokens"].device - eos_item = torch.tensor([task.src_dict.eos()]) - pad = task.src_dict.pad() - valid_result = [] - for valid_answers, valid_constraint_masks in zip(task.valid_answers_list, task.valid_constraint_masks_list): - valid_size = len(valid_answers) - valid_tgt_items = [ - torch.cat([torch.tensor(decoder_prompt[1:]), valid_answer, eos_item]) - for decoder_prompt in sample["decoder_prompts"] for valid_answer in valid_answers - ] - valid_prev_items = [ - torch.cat([torch.tensor(decoder_prompt), valid_answer]) - for decoder_prompt in sample["decoder_prompts"] for valid_answer in valid_answers - ] - valid_constraint_mask_items = [ - torch.cat( - [torch.zeros(len(decoder_prompt) - 1, valid_constraint_mask.size(1)).bool(), valid_constraint_mask], - dim=0 - ) - for decoder_prompt in sample["decoder_prompts"] for valid_constraint_mask in valid_constraint_masks - ] - valid_tgt = data_utils.collate_tokens(valid_tgt_items, pad_idx=pad).to(device) - valid_prev_output = data_utils.collate_tokens(valid_prev_items, pad_idx=pad).to(device) - valid_constraint_masks = data_utils.collate_tokens(valid_constraint_mask_items, pad_idx=pad).to(device) - - new_encoder_out = {} - new_encoder_out["encoder_out"] = [ - encoder_out["encoder_out"][0].repeat_interleave(valid_size, dim=1) - ] - new_encoder_out["encoder_padding_mask"] = [ - encoder_out["encoder_padding_mask"][0].repeat_interleave(valid_size, dim=0) - ] - new_encoder_out["position_embeddings"] = [ - encoder_out["position_embeddings"][0].repeat_interleave(valid_size, dim=0) - ] - - decoder_out = models[0].decoder(valid_prev_output, encoder_out=new_encoder_out) - decoder_out[0].masked_fill_(~valid_constraint_masks, -math.inf) - lprobs = models[0].get_normalized_probs(decoder_out, log_probs=True) - scores = lprobs.gather(dim=-1, index=valid_tgt.unsqueeze(-1)).squeeze(-1) - scores = scores.masked_fill(valid_tgt.eq(task.tgt_dict.pad()), 0) - scores = 
scores.masked_fill((~valid_constraint_masks).all(2), 0) - scores = scores.sum(1) - scores = scores.view(-1, valid_size) - valid_result.append(scores) - valid_result = torch.cat(valid_result, dim=-1) - predicts = valid_result.argmax(1).tolist() - hyps = [task.index2ans[predict_index] for predict_index in predicts] - results = [{"question_id": int(id), "answer": hyp} for id, hyp in zip(sample["id"].tolist(), hyps)] - scores = [ref_dict.get(hyp, 0) for ref_dict, hyp in zip(sample['ref_dict'], hyps)] - return results, scores - - -def eval_refcoco(task, generator, models, sample): - def _calculate_ap_score(hyps, refs, thresh=0.5): - interacts = torch.cat( - [torch.where(hyps[:, :2] < refs[:, :2], refs[:, :2], hyps[:, :2]), - torch.where(hyps[:, 2:] < refs[:, 2:], hyps[:, 2:], refs[:, 2:])], - dim=1 - ) - area_predictions = (hyps[:, 2] - hyps[:, 0]) * (hyps[:, 3] - hyps[:, 1]) - area_targets = (refs[:, 2] - refs[:, 0]) * (refs[:, 3] - refs[:, 1]) - interacts_w = interacts[:, 2] - interacts[:, 0] - interacts_h = interacts[:, 3] - interacts[:, 1] - area_interacts = interacts_w * interacts_h - ious = area_interacts / (area_predictions + area_targets - area_interacts + 1e-6) - return ((ious >= thresh) & (interacts_w > 0) & (interacts_h > 0)).float() - - gen_out = task.inference_step(generator, models, sample) - hyps = [] - for i in range(len(gen_out)): - hyps.append(gen_out[i][0]["tokens"][:-1] - len(task.src_dict) + task.cfg.num_bins) - hyps = torch.stack(hyps, dim=0) - hyps = hyps / (task.cfg.num_bins - 1) * task.cfg.max_image_size - hyps[:, ::2] /= sample['w_resize_ratios'].unsqueeze(1) - hyps[:, 1::2] /= sample['h_resize_ratios'].unsqueeze(1) - - results = [ - {"uniq_id": sample_id, - "box": [hyps[i][0].item(), hyps[i][1].item(), hyps[i][2].item(), hyps[i][3].item()]} - for i, sample_id in enumerate(sample["id"].tolist()) - ] - scores = _calculate_ap_score(hyps, sample['region_coords'].float()) - return results, scores - - -def eval_step(task, generator, models, sample): - if task.cfg._name == 'caption': - return eval_caption(task, generator, models, sample) - elif task.cfg._name == 'vqa_gen': - return eval_vqa_gen(task, generator, models, sample) - elif task.cfg._name == 'refcoco': - return eval_refcoco(task, generator, models, sample) - else: - raise NotImplementedError diff --git a/spaces/OpenDILabCommunity/LLMRiddlesChatGPTCN/llmriddles/questions/executor.py b/spaces/OpenDILabCommunity/LLMRiddlesChatGPTCN/llmriddles/questions/executor.py deleted file mode 100644 index 61dafa769808626ef0f179fed4f6bf45979e8252..0000000000000000000000000000000000000000 --- a/spaces/OpenDILabCommunity/LLMRiddlesChatGPTCN/llmriddles/questions/executor.py +++ /dev/null @@ -1,35 +0,0 @@ -from typing import Tuple - -from .question import Question -from ..llms import get_llm_fn - - -class QuestionExecutor: - def __init__(self, question: Question, lang: str = 'cn', llm: str = 'chatgpt', llm_cfgs=None): - self.question = question - self.lang = lang - self.llm = llm - self.llm_cfgs = dict(llm_cfgs or {}) - - @property - def question_text(self): - return self.question.texts[self.lang] - - @property - def question_name(self): - return self.question.names[self.lang] - - def check(self, qs_text: str) -> Tuple[str, bool, str]: - answer_text = get_llm_fn(self.llm)(qs_text, **self.llm_cfgs) - correct, explanation = self.check_answer(qs_text, answer_text) - return answer_text, correct, explanation - - def check_answer(self, user_text: str, answer_text: str) -> Tuple[bool, str]: - correct, explanation = 
self.question.checker(self.question_text, user_text, answer_text, self.lang) - if explanation is None: - if correct: - explanation = 'LLM的回答满足要求' if self.lang == 'cn' else 'Correct Answer From LLM' - else: - explanation = 'LLM的回答不满足要求' if self.lang == 'cn' else 'Wrong Answer From LLM' - - return correct, explanation diff --git a/spaces/OpenGVLab/InternGPT/iGPT/models/grit_src/third_party/CenterNet2/detectron2/modeling/roi_heads/box_head.py b/spaces/OpenGVLab/InternGPT/iGPT/models/grit_src/third_party/CenterNet2/detectron2/modeling/roi_heads/box_head.py deleted file mode 100644 index 5d0370b0400d9268f13c905e4096a84ce42e9bfd..0000000000000000000000000000000000000000 --- a/spaces/OpenGVLab/InternGPT/iGPT/models/grit_src/third_party/CenterNet2/detectron2/modeling/roi_heads/box_head.py +++ /dev/null @@ -1,118 +0,0 @@ -# Copyright (c) Facebook, Inc. and its affiliates. -import numpy as np -from typing import List -import fvcore.nn.weight_init as weight_init -import torch -from torch import nn - -from detectron2.config import configurable -from detectron2.layers import Conv2d, ShapeSpec, get_norm -from detectron2.utils.registry import Registry - -__all__ = ["FastRCNNConvFCHead", "build_box_head", "ROI_BOX_HEAD_REGISTRY"] - -ROI_BOX_HEAD_REGISTRY = Registry("ROI_BOX_HEAD") -ROI_BOX_HEAD_REGISTRY.__doc__ = """ -Registry for box heads, which make box predictions from per-region features. - -The registered object will be called with `obj(cfg, input_shape)`. -""" - - -# To get torchscript support, we make the head a subclass of `nn.Sequential`. -# Therefore, to add new layers in this head class, please make sure they are -# added in the order they will be used in forward(). -@ROI_BOX_HEAD_REGISTRY.register() -class FastRCNNConvFCHead(nn.Sequential): - """ - A head with several 3x3 conv layers (each followed by norm & relu) and then - several fc layers (each followed by relu). - """ - - @configurable - def __init__( - self, input_shape: ShapeSpec, *, conv_dims: List[int], fc_dims: List[int], conv_norm="" - ): - """ - NOTE: this interface is experimental. - - Args: - input_shape (ShapeSpec): shape of the input feature. - conv_dims (list[int]): the output dimensions of the conv layers - fc_dims (list[int]): the output dimensions of the fc layers - conv_norm (str or callable): normalization for the conv layers. - See :func:`detectron2.layers.get_norm` for supported types. 
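- 
-   Example (an illustrative construction; the dimensions below are
-   assumed, not taken from any detectron2 config):
- 
-     head = FastRCNNConvFCHead(
-         ShapeSpec(channels=256, height=7, width=7),
-         conv_dims=[256], fc_dims=[1024],
-     )
-     # head.output_shape is then ShapeSpec(channels=1024): one 3x3 conv
-     # keeps the 7x7 spatial size, and the single fc layer flattens it.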
- """ - super().__init__() - assert len(conv_dims) + len(fc_dims) > 0 - - self._output_size = (input_shape.channels, input_shape.height, input_shape.width) - - self.conv_norm_relus = [] - for k, conv_dim in enumerate(conv_dims): - conv = Conv2d( - self._output_size[0], - conv_dim, - kernel_size=3, - padding=1, - bias=not conv_norm, - norm=get_norm(conv_norm, conv_dim), - activation=nn.ReLU(), - ) - self.add_module("conv{}".format(k + 1), conv) - self.conv_norm_relus.append(conv) - self._output_size = (conv_dim, self._output_size[1], self._output_size[2]) - - self.fcs = [] - for k, fc_dim in enumerate(fc_dims): - if k == 0: - self.add_module("flatten", nn.Flatten()) - fc = nn.Linear(int(np.prod(self._output_size)), fc_dim) - self.add_module("fc{}".format(k + 1), fc) - self.add_module("fc_relu{}".format(k + 1), nn.ReLU()) - self.fcs.append(fc) - self._output_size = fc_dim - - for layer in self.conv_norm_relus: - weight_init.c2_msra_fill(layer) - for layer in self.fcs: - weight_init.c2_xavier_fill(layer) - - @classmethod - def from_config(cls, cfg, input_shape): - num_conv = cfg.MODEL.ROI_BOX_HEAD.NUM_CONV - conv_dim = cfg.MODEL.ROI_BOX_HEAD.CONV_DIM - num_fc = cfg.MODEL.ROI_BOX_HEAD.NUM_FC - fc_dim = cfg.MODEL.ROI_BOX_HEAD.FC_DIM - return { - "input_shape": input_shape, - "conv_dims": [conv_dim] * num_conv, - "fc_dims": [fc_dim] * num_fc, - "conv_norm": cfg.MODEL.ROI_BOX_HEAD.NORM, - } - - def forward(self, x): - for layer in self: - x = layer(x) - return x - - @property - @torch.jit.unused - def output_shape(self): - """ - Returns: - ShapeSpec: the output feature shape - """ - o = self._output_size - if isinstance(o, int): - return ShapeSpec(channels=o) - else: - return ShapeSpec(channels=o[0], height=o[1], width=o[2]) - - -def build_box_head(cfg, input_shape): - """ - Build a box head defined by `cfg.MODEL.ROI_BOX_HEAD.NAME`. - """ - name = cfg.MODEL.ROI_BOX_HEAD.NAME - return ROI_BOX_HEAD_REGISTRY.get(name)(cfg, input_shape) diff --git a/spaces/OpenGVLab/InternGPT/iGPT/models/grit_src/third_party/CenterNet2/tests/config/test_instantiate_config.py b/spaces/OpenGVLab/InternGPT/iGPT/models/grit_src/third_party/CenterNet2/tests/config/test_instantiate_config.py deleted file mode 100644 index b76f71b9a206cb59006765803c96713cb990d22c..0000000000000000000000000000000000000000 --- a/spaces/OpenGVLab/InternGPT/iGPT/models/grit_src/third_party/CenterNet2/tests/config/test_instantiate_config.py +++ /dev/null @@ -1,100 +0,0 @@ -# Copyright (c) Facebook, Inc. and its affiliates. 
- -import os -import tempfile -import unittest -import yaml -from omegaconf import OmegaConf -from omegaconf import __version__ as oc_version -from dataclasses import dataclass - -from detectron2.config import instantiate, LazyCall as L -from detectron2.layers import ShapeSpec - -OC_VERSION = tuple(int(x) for x in oc_version.split(".")[:2]) - - -class TestClass: - def __init__(self, int_arg, list_arg=None, dict_arg=None, extra_arg=None): - self.int_arg = int_arg - self.list_arg = list_arg - self.dict_arg = dict_arg - self.extra_arg = extra_arg - - def __call__(self, call_arg): - return call_arg + self.int_arg - - -@dataclass -class TestDataClass: - x: int - y: str - - -@unittest.skipIf(OC_VERSION < (2, 1), "omegaconf version too old") -class TestConstruction(unittest.TestCase): - def test_basic_construct(self): - objconf = L(TestClass)( - int_arg=3, - list_arg=[10], - dict_arg={}, - extra_arg=L(TestClass)(int_arg=4, list_arg="${..list_arg}"), - ) - - obj = instantiate(objconf) - self.assertIsInstance(obj, TestClass) - self.assertEqual(obj.int_arg, 3) - self.assertEqual(obj.extra_arg.int_arg, 4) - self.assertEqual(obj.extra_arg.list_arg, obj.list_arg) - - objconf.extra_arg.list_arg = [5] - obj = instantiate(objconf) - self.assertIsInstance(obj, TestClass) - self.assertEqual(obj.extra_arg.list_arg, [5]) - - def test_instantiate_other_obj(self): - # do nothing for other obj - self.assertEqual(instantiate(5), 5) - x = [3, 4, 5] - self.assertEqual(instantiate(x), x) - x = TestClass(1) - self.assertIs(instantiate(x), x) - x = {"xx": "yy"} - self.assertIs(instantiate(x), x) - - def test_instantiate_lazy_target(self): - # _target_ is result of instantiate - objconf = L(L(len)(int_arg=3))(call_arg=4) - objconf._target_._target_ = TestClass - self.assertEqual(instantiate(objconf), 7) - - def test_instantiate_lst(self): - lst = [1, 2, L(TestClass)(int_arg=1)] - x = L(TestClass)(int_arg=lst) # list as an argument should be recursively instantiated - x = instantiate(x).int_arg - self.assertEqual(x[:2], [1, 2]) - self.assertIsInstance(x[2], TestClass) - self.assertEqual(x[2].int_arg, 1) - - def test_instantiate_namedtuple(self): - x = L(TestClass)(int_arg=ShapeSpec(channels=1, width=3)) - # test serialization - with tempfile.TemporaryDirectory() as d: - fname = os.path.join(d, "d2_test.yaml") - OmegaConf.save(x, fname) - with open(fname) as f: - x = yaml.unsafe_load(f) - - x = instantiate(x) - self.assertIsInstance(x.int_arg, ShapeSpec) - self.assertEqual(x.int_arg.channels, 1) - - def test_bad_lazycall(self): - with self.assertRaises(Exception): - L(3) - - def test_instantiate_dataclass(self): - a = L(TestDataClass)(x=1, y="s") - a = instantiate(a) - self.assertEqual(a.x, 1) - self.assertEqual(a.y, "s") diff --git a/spaces/OpenGVLab/InternGPT/third-party/lama/bin/paper_runfiles/generate_test_ffhq.sh b/spaces/OpenGVLab/InternGPT/third-party/lama/bin/paper_runfiles/generate_test_ffhq.sh deleted file mode 100644 index a1b79cb0f3f710eed21a978c3a1489ca830bb7f8..0000000000000000000000000000000000000000 --- a/spaces/OpenGVLab/InternGPT/third-party/lama/bin/paper_runfiles/generate_test_ffhq.sh +++ /dev/null @@ -1,17 +0,0 @@ -#!/usr/bin/env bash - -# paths to data are valid for mml-ws01 -OUT_DIR="/media/inpainting/paper_data/FFHQ_val" - -source "$(dirname $0)/env.sh" - -for datadir in test -do - for conf in random_thin_256 random_medium_256 random_thick_256 random_thin_512 random_medium_512 random_thick_512 - do - "$BINDIR/gen_mask_dataset_hydra.py" -cn $conf datadir=$datadir location=mml-ws01-ffhq \ - 
location.out_dir=$OUT_DIR cropping.out_square_crop=False - - "$BINDIR/calc_dataset_stats.py" --samples-n 20 "$OUT_DIR/$datadir/$conf" "$OUT_DIR/$datadir/${conf}_stats" - done -done diff --git a/spaces/OpenGVLab/InternGPT/third-party/lama/models/ade20k/segm_lib/utils/data/dataloader.py b/spaces/OpenGVLab/InternGPT/third-party/lama/models/ade20k/segm_lib/utils/data/dataloader.py deleted file mode 100644 index 039b9ec3645b2a4626ff47c221e372f32a6ad339..0000000000000000000000000000000000000000 --- a/spaces/OpenGVLab/InternGPT/third-party/lama/models/ade20k/segm_lib/utils/data/dataloader.py +++ /dev/null @@ -1,425 +0,0 @@ -import torch -import torch.multiprocessing as multiprocessing -from torch._C import _set_worker_signal_handlers, \ - _remove_worker_pids, _error_if_any_worker_fails -try: - from torch._C import _set_worker_pids -except: - from torch._C import _update_worker_pids as _set_worker_pids -from .sampler import SequentialSampler, RandomSampler, BatchSampler -import signal -import collections -import re -import sys -import threading -import traceback -from torch._six import string_classes, int_classes -import numpy as np - -if sys.version_info[0] == 2: - import Queue as queue -else: - import queue - - -class ExceptionWrapper(object): - r"Wraps an exception plus traceback to communicate across threads" - - def __init__(self, exc_info): - self.exc_type = exc_info[0] - self.exc_msg = "".join(traceback.format_exception(*exc_info)) - - -_use_shared_memory = False -"""Whether to use shared memory in default_collate""" - - -def _worker_loop(dataset, index_queue, data_queue, collate_fn, seed, init_fn, worker_id): - global _use_shared_memory - _use_shared_memory = True - - # Intialize C side signal handlers for SIGBUS and SIGSEGV. Python signal - # module's handlers are executed after Python returns from C low-level - # handlers, likely when the same fatal signal happened again already. - # https://docs.python.org/3/library/signal.html Sec. 
18.8.1.1 - _set_worker_signal_handlers() - - torch.set_num_threads(1) - torch.manual_seed(seed) - np.random.seed(seed) - - if init_fn is not None: - init_fn(worker_id) - - while True: - r = index_queue.get() - if r is None: - break - idx, batch_indices = r - try: - samples = collate_fn([dataset[i] for i in batch_indices]) - except Exception: - data_queue.put((idx, ExceptionWrapper(sys.exc_info()))) - else: - data_queue.put((idx, samples)) - - -def _worker_manager_loop(in_queue, out_queue, done_event, pin_memory, device_id): - if pin_memory: - torch.cuda.set_device(device_id) - - while True: - try: - r = in_queue.get() - except Exception: - if done_event.is_set(): - return - raise - if r is None: - break - if isinstance(r[1], ExceptionWrapper): - out_queue.put(r) - continue - idx, batch = r - try: - if pin_memory: - batch = pin_memory_batch(batch) - except Exception: - out_queue.put((idx, ExceptionWrapper(sys.exc_info()))) - else: - out_queue.put((idx, batch)) - -numpy_type_map = { - 'float64': torch.DoubleTensor, - 'float32': torch.FloatTensor, - 'float16': torch.HalfTensor, - 'int64': torch.LongTensor, - 'int32': torch.IntTensor, - 'int16': torch.ShortTensor, - 'int8': torch.CharTensor, - 'uint8': torch.ByteTensor, -} - - -def default_collate(batch): - "Puts each data field into a tensor with outer dimension batch size" - - error_msg = "batch must contain tensors, numbers, dicts or lists; found {}" - elem_type = type(batch[0]) - if torch.is_tensor(batch[0]): - out = None - if _use_shared_memory: - # If we're in a background process, concatenate directly into a - # shared memory tensor to avoid an extra copy - numel = sum([x.numel() for x in batch]) - storage = batch[0].storage()._new_shared(numel) - out = batch[0].new(storage) - return torch.stack(batch, 0, out=out) - elif elem_type.__module__ == 'numpy' and elem_type.__name__ != 'str_' \ - and elem_type.__name__ != 'string_': - elem = batch[0] - if elem_type.__name__ == 'ndarray': - # array of string classes and object - if re.search('[SaUO]', elem.dtype.str) is not None: - raise TypeError(error_msg.format(elem.dtype)) - - return torch.stack([torch.from_numpy(b) for b in batch], 0) - if elem.shape == (): # scalars - py_type = float if elem.dtype.name.startswith('float') else int - return numpy_type_map[elem.dtype.name](list(map(py_type, batch))) - elif isinstance(batch[0], int_classes): - return torch.LongTensor(batch) - elif isinstance(batch[0], float): - return torch.DoubleTensor(batch) - elif isinstance(batch[0], string_classes): - return batch - elif isinstance(batch[0], collections.Mapping): - return {key: default_collate([d[key] for d in batch]) for key in batch[0]} - elif isinstance(batch[0], collections.Sequence): - transposed = zip(*batch) - return [default_collate(samples) for samples in transposed] - - raise TypeError((error_msg.format(type(batch[0])))) - - -def pin_memory_batch(batch): - if torch.is_tensor(batch): - return batch.pin_memory() - elif isinstance(batch, string_classes): - return batch - elif isinstance(batch, collections.Mapping): - return {k: pin_memory_batch(sample) for k, sample in batch.items()} - elif isinstance(batch, collections.Sequence): - return [pin_memory_batch(sample) for sample in batch] - else: - return batch - - -_SIGCHLD_handler_set = False -"""Whether SIGCHLD handler is set for DataLoader worker failures. 
Only one -handler needs to be set for all DataLoaders in a process.""" - - -def _set_SIGCHLD_handler(): - # Windows doesn't support SIGCHLD handler - if sys.platform == 'win32': - return - # can't set signal in child threads - if not isinstance(threading.current_thread(), threading._MainThread): - return - global _SIGCHLD_handler_set - if _SIGCHLD_handler_set: - return - previous_handler = signal.getsignal(signal.SIGCHLD) - if not callable(previous_handler): - previous_handler = None - - def handler(signum, frame): - # This following call uses `waitid` with WNOHANG from C side. Therefore, - # Python can still get and update the process status successfully. - _error_if_any_worker_fails() - if previous_handler is not None: - previous_handler(signum, frame) - - signal.signal(signal.SIGCHLD, handler) - _SIGCHLD_handler_set = True - - -class DataLoaderIter(object): - "Iterates once over the DataLoader's dataset, as specified by the sampler" - - def __init__(self, loader): - self.dataset = loader.dataset - self.collate_fn = loader.collate_fn - self.batch_sampler = loader.batch_sampler - self.num_workers = loader.num_workers - self.pin_memory = loader.pin_memory and torch.cuda.is_available() - self.timeout = loader.timeout - self.done_event = threading.Event() - - self.sample_iter = iter(self.batch_sampler) - - if self.num_workers > 0: - self.worker_init_fn = loader.worker_init_fn - self.index_queue = multiprocessing.SimpleQueue() - self.worker_result_queue = multiprocessing.SimpleQueue() - self.batches_outstanding = 0 - self.worker_pids_set = False - self.shutdown = False - self.send_idx = 0 - self.rcvd_idx = 0 - self.reorder_dict = {} - - base_seed = torch.LongTensor(1).random_(0, 2**31-1)[0] - self.workers = [ - multiprocessing.Process( - target=_worker_loop, - args=(self.dataset, self.index_queue, self.worker_result_queue, self.collate_fn, - base_seed + i, self.worker_init_fn, i)) - for i in range(self.num_workers)] - - if self.pin_memory or self.timeout > 0: - self.data_queue = queue.Queue() - if self.pin_memory: - maybe_device_id = torch.cuda.current_device() - else: - # do not initialize cuda context if not necessary - maybe_device_id = None - self.worker_manager_thread = threading.Thread( - target=_worker_manager_loop, - args=(self.worker_result_queue, self.data_queue, self.done_event, self.pin_memory, - maybe_device_id)) - self.worker_manager_thread.daemon = True - self.worker_manager_thread.start() - else: - self.data_queue = self.worker_result_queue - - for w in self.workers: - w.daemon = True # ensure that the worker exits on process exit - w.start() - - _set_worker_pids(id(self), tuple(w.pid for w in self.workers)) - _set_SIGCHLD_handler() - self.worker_pids_set = True - - # prime the prefetch loop - for _ in range(2 * self.num_workers): - self._put_indices() - - def __len__(self): - return len(self.batch_sampler) - - def _get_batch(self): - if self.timeout > 0: - try: - return self.data_queue.get(timeout=self.timeout) - except queue.Empty: - raise RuntimeError('DataLoader timed out after {} seconds'.format(self.timeout)) - else: - return self.data_queue.get() - - def __next__(self): - if self.num_workers == 0: # same-process loading - indices = next(self.sample_iter) # may raise StopIteration - batch = self.collate_fn([self.dataset[i] for i in indices]) - if self.pin_memory: - batch = pin_memory_batch(batch) - return batch - - # check if the next sample has already been generated - if self.rcvd_idx in self.reorder_dict: - batch = self.reorder_dict.pop(self.rcvd_idx) - return 
self._process_next_batch(batch) - - if self.batches_outstanding == 0: - self._shutdown_workers() - raise StopIteration - - while True: - assert (not self.shutdown and self.batches_outstanding > 0) - idx, batch = self._get_batch() - self.batches_outstanding -= 1 - if idx != self.rcvd_idx: - # store out-of-order samples - self.reorder_dict[idx] = batch - continue - return self._process_next_batch(batch) - - next = __next__ # Python 2 compatibility - - def __iter__(self): - return self - - def _put_indices(self): - assert self.batches_outstanding < 2 * self.num_workers - indices = next(self.sample_iter, None) - if indices is None: - return - self.index_queue.put((self.send_idx, indices)) - self.batches_outstanding += 1 - self.send_idx += 1 - - def _process_next_batch(self, batch): - self.rcvd_idx += 1 - self._put_indices() - if isinstance(batch, ExceptionWrapper): - raise batch.exc_type(batch.exc_msg) - return batch - - def __getstate__(self): - # TODO: add limited pickling support for sharing an iterator - # across multiple threads for HOGWILD. - # Probably the best way to do this is by moving the sample pushing - # to a separate thread and then just sharing the data queue - # but signalling the end is tricky without a non-blocking API - raise NotImplementedError("DataLoaderIterator cannot be pickled") - - def _shutdown_workers(self): - try: - if not self.shutdown: - self.shutdown = True - self.done_event.set() - # if worker_manager_thread is waiting to put - while not self.data_queue.empty(): - self.data_queue.get() - for _ in self.workers: - self.index_queue.put(None) - # done_event should be sufficient to exit worker_manager_thread, - # but be safe here and put another None - self.worker_result_queue.put(None) - finally: - # removes pids no matter what - if self.worker_pids_set: - _remove_worker_pids(id(self)) - self.worker_pids_set = False - - def __del__(self): - if self.num_workers > 0: - self._shutdown_workers() - - -class DataLoader(object): - """ - Data loader. Combines a dataset and a sampler, and provides - single- or multi-process iterators over the dataset. - - Arguments: - dataset (Dataset): dataset from which to load the data. - batch_size (int, optional): how many samples per batch to load - (default: 1). - shuffle (bool, optional): set to ``True`` to have the data reshuffled - at every epoch (default: False). - sampler (Sampler, optional): defines the strategy to draw samples from - the dataset. If specified, ``shuffle`` must be False. - batch_sampler (Sampler, optional): like sampler, but returns a batch of - indices at a time. Mutually exclusive with batch_size, shuffle, - sampler, and drop_last. - num_workers (int, optional): how many subprocesses to use for data - loading. 0 means that the data will be loaded in the main process. - (default: 0) - collate_fn (callable, optional): merges a list of samples to form a mini-batch. - pin_memory (bool, optional): If ``True``, the data loader will copy tensors - into CUDA pinned memory before returning them. - drop_last (bool, optional): set to ``True`` to drop the last incomplete batch, - if the dataset size is not divisible by the batch size. If ``False`` and - the size of dataset is not divisible by the batch size, then the last batch - will be smaller. (default: False) - timeout (numeric, optional): if positive, the timeout value for collecting a batch - from workers. Should always be non-negative. 
(default: 0) - worker_init_fn (callable, optional): If not None, this will be called on each - worker subprocess with the worker id (an int in ``[0, num_workers - 1]``) as - input, after seeding and before data loading. (default: None) - - .. note:: By default, each worker will have its PyTorch seed set to - ``base_seed + worker_id``, where ``base_seed`` is a long generated - by main process using its RNG. You may use ``torch.initial_seed()`` to access - this value in :attr:`worker_init_fn`, which can be used to set other seeds - (e.g. NumPy) before data loading. - - .. warning:: If ``spawn'' start method is used, :attr:`worker_init_fn` cannot be an - unpicklable object, e.g., a lambda function. - """ - - def __init__(self, dataset, batch_size=1, shuffle=False, sampler=None, batch_sampler=None, - num_workers=0, collate_fn=default_collate, pin_memory=False, drop_last=False, - timeout=0, worker_init_fn=None): - self.dataset = dataset - self.batch_size = batch_size - self.num_workers = num_workers - self.collate_fn = collate_fn - self.pin_memory = pin_memory - self.drop_last = drop_last - self.timeout = timeout - self.worker_init_fn = worker_init_fn - - if timeout < 0: - raise ValueError('timeout option should be non-negative') - - if batch_sampler is not None: - if batch_size > 1 or shuffle or sampler is not None or drop_last: - raise ValueError('batch_sampler is mutually exclusive with ' - 'batch_size, shuffle, sampler, and drop_last') - - if sampler is not None and shuffle: - raise ValueError('sampler is mutually exclusive with shuffle') - - if self.num_workers < 0: - raise ValueError('num_workers cannot be negative; ' - 'use num_workers=0 to disable multiprocessing.') - - if batch_sampler is None: - if sampler is None: - if shuffle: - sampler = RandomSampler(dataset) - else: - sampler = SequentialSampler(dataset) - batch_sampler = BatchSampler(sampler, batch_size, drop_last) - - self.sampler = sampler - self.batch_sampler = batch_sampler - - def __iter__(self): - return DataLoaderIter(self) - - def __len__(self): - return len(self.batch_sampler) diff --git a/spaces/OptimalScale/Robin-33b/lmflow/pipeline/base_aligner.py b/spaces/OptimalScale/Robin-33b/lmflow/pipeline/base_aligner.py deleted file mode 100644 index c2a640a5d7d68b4b7b917d485dde1395e23dc8a3..0000000000000000000000000000000000000000 --- a/spaces/OptimalScale/Robin-33b/lmflow/pipeline/base_aligner.py +++ /dev/null @@ -1,21 +0,0 @@ -#!/usr/bin/env python -# coding=utf-8 -""" BaseTuner: a subclass of BasePipeline. -""" - -from lmflow.pipeline.base_pipeline import BasePipeline - - -class BaseAligner(BasePipeline): - """ A subclass of BasePipeline which is alignable. 
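- 
-     Example (an illustrative subclass sketch, not part of LMFlow):
- 
-         class RewardAligner(BaseAligner):
-             def align(self, model, dataset, reward_model):
-                 self._check_if_alignable(model, dataset, reward_model)
-                 # rank model outputs with reward_model, update the model
-                 # to prefer high-reward outputs, then return it
-                 return model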
- """ - def __init__(self, *args, **kwargs): - pass - - def _check_if_alignable(self, model, dataset, reward_model): - # TODO: check if the model is alignable and dataset is compatible - # TODO: add reward_model - pass - - def align(self, model, dataset, reward_model): - raise NotImplementedError(".align is not implemented") diff --git a/spaces/PAIR/Text2Video-Zero/annotator/uniformer/configs/_base_/models/dmnet_r50-d8.py b/spaces/PAIR/Text2Video-Zero/annotator/uniformer/configs/_base_/models/dmnet_r50-d8.py deleted file mode 100644 index d22ba52640bebd805b3b8d07025e276dfb023759..0000000000000000000000000000000000000000 --- a/spaces/PAIR/Text2Video-Zero/annotator/uniformer/configs/_base_/models/dmnet_r50-d8.py +++ /dev/null @@ -1,44 +0,0 @@ -# model settings -norm_cfg = dict(type='SyncBN', requires_grad=True) -model = dict( - type='EncoderDecoder', - pretrained='open-mmlab://resnet50_v1c', - backbone=dict( - type='ResNetV1c', - depth=50, - num_stages=4, - out_indices=(0, 1, 2, 3), - dilations=(1, 1, 2, 4), - strides=(1, 2, 1, 1), - norm_cfg=norm_cfg, - norm_eval=False, - style='pytorch', - contract_dilation=True), - decode_head=dict( - type='DMHead', - in_channels=2048, - in_index=3, - channels=512, - filter_sizes=(1, 3, 5, 7), - dropout_ratio=0.1, - num_classes=19, - norm_cfg=dict(type='SyncBN', requires_grad=True), - align_corners=False, - loss_decode=dict( - type='CrossEntropyLoss', use_sigmoid=False, loss_weight=1.0)), - auxiliary_head=dict( - type='FCNHead', - in_channels=1024, - in_index=2, - channels=256, - num_convs=1, - concat_input=False, - dropout_ratio=0.1, - num_classes=19, - norm_cfg=norm_cfg, - align_corners=False, - loss_decode=dict( - type='CrossEntropyLoss', use_sigmoid=False, loss_weight=0.4)), - # model training and testing settings - train_cfg=dict(), - test_cfg=dict(mode='whole')) diff --git a/spaces/Pattr/DrumClassification/lilypond-2.24.2/lib/guile/2.2/ccache/scripts/read-rfc822.go b/spaces/Pattr/DrumClassification/lilypond-2.24.2/lib/guile/2.2/ccache/scripts/read-rfc822.go deleted file mode 100644 index b46cfa8837d96d21960b6b9c67a2e7568adc9dcb..0000000000000000000000000000000000000000 Binary files a/spaces/Pattr/DrumClassification/lilypond-2.24.2/lib/guile/2.2/ccache/scripts/read-rfc822.go and /dev/null differ diff --git a/spaces/Pattr/DrumClassification/lilypond-2.24.2/lib/lilypond/2.24.2/ccache/lily/backend-library.go b/spaces/Pattr/DrumClassification/lilypond-2.24.2/lib/lilypond/2.24.2/ccache/lily/backend-library.go deleted file mode 100644 index fa1ef53a9f5b335a85ec88d11598371ea95ca313..0000000000000000000000000000000000000000 Binary files a/spaces/Pattr/DrumClassification/lilypond-2.24.2/lib/lilypond/2.24.2/ccache/lily/backend-library.go and /dev/null differ diff --git a/spaces/Raspberry-ai/main/.env/lib/python3.11/site-packages/pip/_internal/utils/subprocess.py b/spaces/Raspberry-ai/main/.env/lib/python3.11/site-packages/pip/_internal/utils/subprocess.py deleted file mode 100644 index cf5bf6be1f6ad8d2be99e55f80cbbd110a8b3d7a..0000000000000000000000000000000000000000 --- a/spaces/Raspberry-ai/main/.env/lib/python3.11/site-packages/pip/_internal/utils/subprocess.py +++ /dev/null @@ -1,260 +0,0 @@ -import logging -import os -import shlex -import subprocess -from typing import ( - TYPE_CHECKING, - Any, - Callable, - Iterable, - List, - Mapping, - Optional, - Union, -) - -from pip._vendor.rich.markup import escape - -from pip._internal.cli.spinners import SpinnerInterface, open_spinner -from pip._internal.exceptions import InstallationSubprocessError -from 
pip._internal.utils.logging import VERBOSE, subprocess_logger -from pip._internal.utils.misc import HiddenText - -if TYPE_CHECKING: - # Literal was introduced in Python 3.8. - # - # TODO: Remove `if TYPE_CHECKING` when dropping support for Python 3.7. - from typing import Literal - -CommandArgs = List[Union[str, HiddenText]] - - -def make_command(*args: Union[str, HiddenText, CommandArgs]) -> CommandArgs: - """ - Create a CommandArgs object. - """ - command_args: CommandArgs = [] - for arg in args: - # Check for list instead of CommandArgs since CommandArgs is - # only known during type-checking. - if isinstance(arg, list): - command_args.extend(arg) - else: - # Otherwise, arg is str or HiddenText. - command_args.append(arg) - - return command_args - - -def format_command_args(args: Union[List[str], CommandArgs]) -> str: - """ - Format command arguments for display. - """ - # For HiddenText arguments, display the redacted form by calling str(). - # Also, we don't apply str() to arguments that aren't HiddenText since - # this can trigger a UnicodeDecodeError in Python 2 if the argument - # has type unicode and includes a non-ascii character. (The type - # checker doesn't ensure the annotations are correct in all cases.) - return " ".join( - shlex.quote(str(arg)) if isinstance(arg, HiddenText) else shlex.quote(arg) - for arg in args - ) - - -def reveal_command_args(args: Union[List[str], CommandArgs]) -> List[str]: - """ - Return the arguments in their raw, unredacted form. - """ - return [arg.secret if isinstance(arg, HiddenText) else arg for arg in args] - - -def call_subprocess( - cmd: Union[List[str], CommandArgs], - show_stdout: bool = False, - cwd: Optional[str] = None, - on_returncode: 'Literal["raise", "warn", "ignore"]' = "raise", - extra_ok_returncodes: Optional[Iterable[int]] = None, - extra_environ: Optional[Mapping[str, Any]] = None, - unset_environ: Optional[Iterable[str]] = None, - spinner: Optional[SpinnerInterface] = None, - log_failed_cmd: Optional[bool] = True, - stdout_only: Optional[bool] = False, - *, - command_desc: str, -) -> str: - """ - Args: - show_stdout: if true, use INFO to log the subprocess's stderr and - stdout streams. Otherwise, use DEBUG. Defaults to False. - extra_ok_returncodes: an iterable of integer return codes that are - acceptable, in addition to 0. Defaults to None, which means []. - unset_environ: an iterable of environment variable names to unset - prior to calling subprocess.Popen(). - log_failed_cmd: if false, failed commands are not logged, only raised. - stdout_only: if true, return only stdout, else return both. When true, - logging of both stdout and stderr occurs when the subprocess has - terminated, else logging occurs as subprocess output is produced. - """ - if extra_ok_returncodes is None: - extra_ok_returncodes = [] - if unset_environ is None: - unset_environ = [] - # Most places in pip use show_stdout=False. What this means is-- - # - # - We connect the child's output (combined stderr and stdout) to a - # single pipe, which we read. - # - We log this output to stderr at DEBUG level as it is received. - # - If DEBUG logging isn't enabled (e.g. if --verbose logging wasn't - # requested), then we show a spinner so the user can still see the - # subprocess is in progress. - # - If the subprocess exits with an error, we log the output to stderr - # at ERROR level if it hasn't already been displayed to the console - # (e.g. if --verbose logging wasn't enabled). This way we don't log - # the output to the console twice. 
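- # (Illustrative call with an assumed command; per the docstring above,
- # the combined output is returned as a single string:
- #     call_subprocess(["git", "--version"], command_desc="git version"))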
- # - # If show_stdout=True, then the above is still done, but with DEBUG - # replaced by INFO. - if show_stdout: - # Then log the subprocess output at INFO level. - log_subprocess: Callable[..., None] = subprocess_logger.info - used_level = logging.INFO - else: - # Then log the subprocess output using VERBOSE. This also ensures - # it will be logged to the log file (aka user_log), if enabled. - log_subprocess = subprocess_logger.verbose - used_level = VERBOSE - - # Whether the subprocess will be visible in the console. - showing_subprocess = subprocess_logger.getEffectiveLevel() <= used_level - - # Only use the spinner if we're not showing the subprocess output - # and we have a spinner. - use_spinner = not showing_subprocess and spinner is not None - - log_subprocess("Running command %s", command_desc) - env = os.environ.copy() - if extra_environ: - env.update(extra_environ) - for name in unset_environ: - env.pop(name, None) - try: - proc = subprocess.Popen( - # Convert HiddenText objects to the underlying str. - reveal_command_args(cmd), - stdin=subprocess.PIPE, - stdout=subprocess.PIPE, - stderr=subprocess.STDOUT if not stdout_only else subprocess.PIPE, - cwd=cwd, - env=env, - errors="backslashreplace", - ) - except Exception as exc: - if log_failed_cmd: - subprocess_logger.critical( - "Error %s while executing command %s", - exc, - command_desc, - ) - raise - all_output = [] - if not stdout_only: - assert proc.stdout - assert proc.stdin - proc.stdin.close() - # In this mode, stdout and stderr are in the same pipe. - while True: - line: str = proc.stdout.readline() - if not line: - break - line = line.rstrip() - all_output.append(line + "\n") - - # Show the line immediately. - log_subprocess(line) - # Update the spinner. - if use_spinner: - assert spinner - spinner.spin() - try: - proc.wait() - finally: - if proc.stdout: - proc.stdout.close() - output = "".join(all_output) - else: - # In this mode, stdout and stderr are in different pipes. - # We must use communicate() which is the only safe way to read both. - out, err = proc.communicate() - # log line by line to preserve pip log indenting - for out_line in out.splitlines(): - log_subprocess(out_line) - all_output.append(out) - for err_line in err.splitlines(): - log_subprocess(err_line) - all_output.append(err) - output = out - - proc_had_error = proc.returncode and proc.returncode not in extra_ok_returncodes - if use_spinner: - assert spinner - if proc_had_error: - spinner.finish("error") - else: - spinner.finish("done") - if proc_had_error: - if on_returncode == "raise": - error = InstallationSubprocessError( - command_description=command_desc, - exit_code=proc.returncode, - output_lines=all_output if not showing_subprocess else None, - ) - if log_failed_cmd: - subprocess_logger.error("[present-rich] %s", error) - subprocess_logger.verbose( - "[bold magenta]full command[/]: [blue]%s[/]", - escape(format_command_args(cmd)), - extra={"markup": True}, - ) - subprocess_logger.verbose( - "[bold magenta]cwd[/]: %s", - escape(cwd or "[inherit]"), - extra={"markup": True}, - ) - - raise error - elif on_returncode == "warn": - subprocess_logger.warning( - 'Command "%s" had error code %s in %s', - command_desc, - proc.returncode, - cwd, - ) - elif on_returncode == "ignore": - pass - else: - raise ValueError(f"Invalid value: on_returncode={on_returncode!r}") - return output - - -def runner_with_spinner_message(message: str) -> Callable[..., None]: - """Provide a subprocess_runner that shows a spinner message. 
- - Intended for use with for pep517's Pep517HookCaller. Thus, the runner has - an API that matches what's expected by Pep517HookCaller.subprocess_runner. - """ - - def runner( - cmd: List[str], - cwd: Optional[str] = None, - extra_environ: Optional[Mapping[str, Any]] = None, - ) -> None: - with open_spinner(message) as spinner: - call_subprocess( - cmd, - command_desc=message, - cwd=cwd, - extra_environ=extra_environ, - spinner=spinner, - ) - - return runner diff --git a/spaces/Raspberry-ai/main/.env/lib/python3.11/site-packages/pip/_vendor/pygments/lexers/python.py b/spaces/Raspberry-ai/main/.env/lib/python3.11/site-packages/pip/_vendor/pygments/lexers/python.py deleted file mode 100644 index c24e3c86ef2a991227fd87fa447eb433c51c1e0e..0000000000000000000000000000000000000000 --- a/spaces/Raspberry-ai/main/.env/lib/python3.11/site-packages/pip/_vendor/pygments/lexers/python.py +++ /dev/null @@ -1,1204 +0,0 @@ -""" - pygments.lexers.python - ~~~~~~~~~~~~~~~~~~~~~~ - - Lexers for Python and related languages. - - :copyright: Copyright 2006-2022 by the Pygments team, see AUTHORS. - :license: BSD, see LICENSE for details. -""" - -import re -import keyword - -from pip._vendor.pygments.lexer import Lexer, RegexLexer, include, bygroups, using, \ - default, words, combined, do_insertions, this -from pip._vendor.pygments.util import get_bool_opt, shebang_matches -from pip._vendor.pygments.token import Text, Comment, Operator, Keyword, Name, String, \ - Number, Punctuation, Generic, Other, Error -from pip._vendor.pygments import unistring as uni - -__all__ = ['PythonLexer', 'PythonConsoleLexer', 'PythonTracebackLexer', - 'Python2Lexer', 'Python2TracebackLexer', - 'CythonLexer', 'DgLexer', 'NumPyLexer'] - -line_re = re.compile('.*?\n') - - -class PythonLexer(RegexLexer): - """ - For Python source code (version 3.x). - - .. versionadded:: 0.10 - - .. versionchanged:: 2.5 - This is now the default ``PythonLexer``. It is still available as the - alias ``Python3Lexer``. - """ - - name = 'Python' - url = 'http://www.python.org' - aliases = ['python', 'py', 'sage', 'python3', 'py3'] - filenames = [ - '*.py', - '*.pyw', - # Jython - '*.jy', - # Sage - '*.sage', - # SCons - '*.sc', - 'SConstruct', - 'SConscript', - # Skylark/Starlark (used by Bazel, Buck, and Pants) - '*.bzl', - 'BUCK', - 'BUILD', - 'BUILD.bazel', - 'WORKSPACE', - # Twisted Application infrastructure - '*.tac', - ] - mimetypes = ['text/x-python', 'application/x-python', - 'text/x-python3', 'application/x-python3'] - - uni_name = "[%s][%s]*" % (uni.xid_start, uni.xid_continue) - - def innerstring_rules(ttype): - return [ - # the old style '%s' % (...) string formatting (still valid in Py3) - (r'%(\(\w+\))?[-#0 +]*([0-9]+|[*])?(\.([0-9]+|[*]))?' - '[hlL]?[E-GXc-giorsaux%]', String.Interpol), - # the new style '{}'.format(...) string formatting - (r'\{' - r'((\w+)((\.\w+)|(\[[^\]]+\]))*)?' # field name - r'(\![sra])?' # conversion - r'(\:(.?[<>=\^])?[-+ ]?#?0?(\d+)?,?(\.\d+)?[E-GXb-gnosx%]?)?' - r'\}', String.Interpol), - - # backslashes, quotes and formatting signs must be parsed one at a time - (r'[^\\\'"%{\n]+', ttype), - (r'[\'"\\]', ttype), - # unhandled string formatting sign - (r'%|(\{{1,2})', ttype) - # newlines are an error (use "nl" state) - ] - - def fstring_rules(ttype): - return [ - # Assuming that a '}' is the closing brace after format specifier. - # Sadly, this means that we won't detect syntax error. But it's - # more important to parse correct syntax correctly, than to - # highlight invalid syntax. 
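- # (Descriptive note: each entry below is a standard pygments
- # RegexLexer rule, a (regex, token) pair with an optional third
- # element naming a state transition such as 'expr-inside-fstring'
- # or '#pop'.)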
- (r'\}', String.Interpol), - (r'\{', String.Interpol, 'expr-inside-fstring'), - # backslashes, quotes and formatting signs must be parsed one at a time - (r'[^\\\'"{}\n]+', ttype), - (r'[\'"\\]', ttype), - # newlines are an error (use "nl" state) - ] - - tokens = { - 'root': [ - (r'\n', Text), - (r'^(\s*)([rRuUbB]{,2})("""(?:.|\n)*?""")', - bygroups(Text, String.Affix, String.Doc)), - (r"^(\s*)([rRuUbB]{,2})('''(?:.|\n)*?''')", - bygroups(Text, String.Affix, String.Doc)), - (r'\A#!.+$', Comment.Hashbang), - (r'#.*$', Comment.Single), - (r'\\\n', Text), - (r'\\', Text), - include('keywords'), - include('soft-keywords'), - (r'(def)((?:\s|\\\s)+)', bygroups(Keyword, Text), 'funcname'), - (r'(class)((?:\s|\\\s)+)', bygroups(Keyword, Text), 'classname'), - (r'(from)((?:\s|\\\s)+)', bygroups(Keyword.Namespace, Text), - 'fromimport'), - (r'(import)((?:\s|\\\s)+)', bygroups(Keyword.Namespace, Text), - 'import'), - include('expr'), - ], - 'expr': [ - # raw f-strings - ('(?i)(rf|fr)(""")', - bygroups(String.Affix, String.Double), - combined('rfstringescape', 'tdqf')), - ("(?i)(rf|fr)(''')", - bygroups(String.Affix, String.Single), - combined('rfstringescape', 'tsqf')), - ('(?i)(rf|fr)(")', - bygroups(String.Affix, String.Double), - combined('rfstringescape', 'dqf')), - ("(?i)(rf|fr)(')", - bygroups(String.Affix, String.Single), - combined('rfstringescape', 'sqf')), - # non-raw f-strings - ('([fF])(""")', bygroups(String.Affix, String.Double), - combined('fstringescape', 'tdqf')), - ("([fF])(''')", bygroups(String.Affix, String.Single), - combined('fstringescape', 'tsqf')), - ('([fF])(")', bygroups(String.Affix, String.Double), - combined('fstringescape', 'dqf')), - ("([fF])(')", bygroups(String.Affix, String.Single), - combined('fstringescape', 'sqf')), - # raw bytes and strings - ('(?i)(rb|br|r)(""")', - bygroups(String.Affix, String.Double), 'tdqs'), - ("(?i)(rb|br|r)(''')", - bygroups(String.Affix, String.Single), 'tsqs'), - ('(?i)(rb|br|r)(")', - bygroups(String.Affix, String.Double), 'dqs'), - ("(?i)(rb|br|r)(')", - bygroups(String.Affix, String.Single), 'sqs'), - # non-raw strings - ('([uU]?)(""")', bygroups(String.Affix, String.Double), - combined('stringescape', 'tdqs')), - ("([uU]?)(''')", bygroups(String.Affix, String.Single), - combined('stringescape', 'tsqs')), - ('([uU]?)(")', bygroups(String.Affix, String.Double), - combined('stringescape', 'dqs')), - ("([uU]?)(')", bygroups(String.Affix, String.Single), - combined('stringescape', 'sqs')), - # non-raw bytes - ('([bB])(""")', bygroups(String.Affix, String.Double), - combined('bytesescape', 'tdqs')), - ("([bB])(''')", bygroups(String.Affix, String.Single), - combined('bytesescape', 'tsqs')), - ('([bB])(")', bygroups(String.Affix, String.Double), - combined('bytesescape', 'dqs')), - ("([bB])(')", bygroups(String.Affix, String.Single), - combined('bytesescape', 'sqs')), - - (r'[^\S\n]+', Text), - include('numbers'), - (r'!=|==|<<|>>|:=|[-~+/*%=<>&^|.]', Operator), - (r'[]{}:(),;[]', Punctuation), - (r'(in|is|and|or|not)\b', Operator.Word), - include('expr-keywords'), - include('builtins'), - include('magicfuncs'), - include('magicvars'), - include('name'), - ], - 'expr-inside-fstring': [ - (r'[{([]', Punctuation, 'expr-inside-fstring-inner'), - # without format specifier - (r'(=\s*)?' # debug (https://bugs.python.org/issue36817) - r'(\![sraf])?' # conversion - r'\}', String.Interpol, '#pop'), - # with format specifier - # we'll catch the remaining '}' in the outer scope - (r'(=\s*)?' 
# debug (https://bugs.python.org/issue36817) - r'(\![sraf])?' # conversion - r':', String.Interpol, '#pop'), - (r'\s+', Text), # allow new lines - include('expr'), - ], - 'expr-inside-fstring-inner': [ - (r'[{([]', Punctuation, 'expr-inside-fstring-inner'), - (r'[])}]', Punctuation, '#pop'), - (r'\s+', Text), # allow new lines - include('expr'), - ], - 'expr-keywords': [ - # Based on https://docs.python.org/3/reference/expressions.html - (words(( - 'async for', 'await', 'else', 'for', 'if', 'lambda', - 'yield', 'yield from'), suffix=r'\b'), - Keyword), - (words(('True', 'False', 'None'), suffix=r'\b'), Keyword.Constant), - ], - 'keywords': [ - (words(( - 'assert', 'async', 'await', 'break', 'continue', 'del', 'elif', - 'else', 'except', 'finally', 'for', 'global', 'if', 'lambda', - 'pass', 'raise', 'nonlocal', 'return', 'try', 'while', 'yield', - 'yield from', 'as', 'with'), suffix=r'\b'), - Keyword), - (words(('True', 'False', 'None'), suffix=r'\b'), Keyword.Constant), - ], - 'soft-keywords': [ - # `match`, `case` and `_` soft keywords - (r'(^[ \t]*)' # at beginning of line + possible indentation - r'(match|case)\b' # a possible keyword - r'(?![ \t]*(?:' # not followed by... - r'[:,;=^&|@~)\]}]|(?:' + # characters and keywords that mean this isn't - r'|'.join(keyword.kwlist) + r')\b))', # pattern matching - bygroups(Text, Keyword), 'soft-keywords-inner'), - ], - 'soft-keywords-inner': [ - # optional `_` keyword - (r'(\s+)([^\n_]*)(_\b)', bygroups(Text, using(this), Keyword)), - default('#pop') - ], - 'builtins': [ - (words(( - '__import__', 'abs', 'all', 'any', 'bin', 'bool', 'bytearray', - 'breakpoint', 'bytes', 'chr', 'classmethod', 'compile', 'complex', - 'delattr', 'dict', 'dir', 'divmod', 'enumerate', 'eval', 'filter', - 'float', 'format', 'frozenset', 'getattr', 'globals', 'hasattr', - 'hash', 'hex', 'id', 'input', 'int', 'isinstance', 'issubclass', - 'iter', 'len', 'list', 'locals', 'map', 'max', 'memoryview', - 'min', 'next', 'object', 'oct', 'open', 'ord', 'pow', 'print', - 'property', 'range', 'repr', 'reversed', 'round', 'set', 'setattr', - 'slice', 'sorted', 'staticmethod', 'str', 'sum', 'super', 'tuple', - 'type', 'vars', 'zip'), prefix=r'(?>|[-~+/*%=<>&^|.]', Operator), - include('keywords'), - (r'(def)((?:\s|\\\s)+)', bygroups(Keyword, Text), 'funcname'), - (r'(class)((?:\s|\\\s)+)', bygroups(Keyword, Text), 'classname'), - (r'(from)((?:\s|\\\s)+)', bygroups(Keyword.Namespace, Text), - 'fromimport'), - (r'(import)((?:\s|\\\s)+)', bygroups(Keyword.Namespace, Text), - 'import'), - include('builtins'), - include('magicfuncs'), - include('magicvars'), - include('backtick'), - ('([rR]|[uUbB][rR]|[rR][uUbB])(""")', - bygroups(String.Affix, String.Double), 'tdqs'), - ("([rR]|[uUbB][rR]|[rR][uUbB])(''')", - bygroups(String.Affix, String.Single), 'tsqs'), - ('([rR]|[uUbB][rR]|[rR][uUbB])(")', - bygroups(String.Affix, String.Double), 'dqs'), - ("([rR]|[uUbB][rR]|[rR][uUbB])(')", - bygroups(String.Affix, String.Single), 'sqs'), - ('([uUbB]?)(""")', bygroups(String.Affix, String.Double), - combined('stringescape', 'tdqs')), - ("([uUbB]?)(''')", bygroups(String.Affix, String.Single), - combined('stringescape', 'tsqs')), - ('([uUbB]?)(")', bygroups(String.Affix, String.Double), - combined('stringescape', 'dqs')), - ("([uUbB]?)(')", bygroups(String.Affix, String.Single), - combined('stringescape', 'sqs')), - include('name'), - include('numbers'), - ], - 'keywords': [ - (words(( - 'assert', 'break', 'continue', 'del', 'elif', 'else', 'except', - 'exec', 'finally', 'for', 'global', 'if', 
'lambda', 'pass', - 'print', 'raise', 'return', 'try', 'while', 'yield', - 'yield from', 'as', 'with'), suffix=r'\b'), - Keyword), - ], - 'builtins': [ - (words(( - '__import__', 'abs', 'all', 'any', 'apply', 'basestring', 'bin', - 'bool', 'buffer', 'bytearray', 'bytes', 'callable', 'chr', 'classmethod', - 'cmp', 'coerce', 'compile', 'complex', 'delattr', 'dict', 'dir', 'divmod', - 'enumerate', 'eval', 'execfile', 'exit', 'file', 'filter', 'float', - 'frozenset', 'getattr', 'globals', 'hasattr', 'hash', 'hex', 'id', - 'input', 'int', 'intern', 'isinstance', 'issubclass', 'iter', 'len', - 'list', 'locals', 'long', 'map', 'max', 'min', 'next', 'object', - 'oct', 'open', 'ord', 'pow', 'property', 'range', 'raw_input', 'reduce', - 'reload', 'repr', 'reversed', 'round', 'set', 'setattr', 'slice', - 'sorted', 'staticmethod', 'str', 'sum', 'super', 'tuple', 'type', - 'unichr', 'unicode', 'vars', 'xrange', 'zip'), - prefix=r'(?>> a = 'foo' - >>> print a - foo - >>> 1 / 0 - Traceback (most recent call last): - File "", line 1, in - ZeroDivisionError: integer division or modulo by zero - - Additional options: - - `python3` - Use Python 3 lexer for code. Default is ``True``. - - .. versionadded:: 1.0 - .. versionchanged:: 2.5 - Now defaults to ``True``. - """ - name = 'Python console session' - aliases = ['pycon'] - mimetypes = ['text/x-python-doctest'] - - def __init__(self, **options): - self.python3 = get_bool_opt(options, 'python3', True) - Lexer.__init__(self, **options) - - def get_tokens_unprocessed(self, text): - if self.python3: - pylexer = PythonLexer(**self.options) - tblexer = PythonTracebackLexer(**self.options) - else: - pylexer = Python2Lexer(**self.options) - tblexer = Python2TracebackLexer(**self.options) - - curcode = '' - insertions = [] - curtb = '' - tbindex = 0 - tb = 0 - for match in line_re.finditer(text): - line = match.group() - if line.startswith('>>> ') or line.startswith('... '): - tb = 0 - insertions.append((len(curcode), - [(0, Generic.Prompt, line[:4])])) - curcode += line[4:] - elif line.rstrip() == '...' and not tb: - # only a new >>> prompt can end an exception block - # otherwise an ellipsis in place of the traceback frames - # will be mishandled - insertions.append((len(curcode), - [(0, Generic.Prompt, '...')])) - curcode += line[3:] - else: - if curcode: - yield from do_insertions( - insertions, pylexer.get_tokens_unprocessed(curcode)) - curcode = '' - insertions = [] - if (line.startswith('Traceback (most recent call last):') or - re.match(' File "[^"]+", line \\d+\\n$', line)): - tb = 1 - curtb = line - tbindex = match.start() - elif line == 'KeyboardInterrupt\n': - yield match.start(), Name.Class, line - elif tb: - curtb += line - if not (line.startswith(' ') or line.strip() == '...'): - tb = 0 - for i, t, v in tblexer.get_tokens_unprocessed(curtb): - yield tbindex+i, t, v - curtb = '' - else: - yield match.start(), Generic.Output, line - if curcode: - yield from do_insertions(insertions, - pylexer.get_tokens_unprocessed(curcode)) - if curtb: - for i, t, v in tblexer.get_tokens_unprocessed(curtb): - yield tbindex+i, t, v - - -class PythonTracebackLexer(RegexLexer): - """ - For Python 3.x tracebacks, with support for chained exceptions. - - .. versionadded:: 1.0 - - .. versionchanged:: 2.5 - This is now the default ``PythonTracebackLexer``. It is still available - as the alias ``Python3TracebackLexer``. 
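- 
-     Example (illustrative; ``tb_text`` is an assumed variable holding a
-     traceback string, and the formatter choice is arbitrary):
- 
-         from pip._vendor.pygments import highlight
-         from pip._vendor.pygments.formatters import TerminalFormatter
-         print(highlight(tb_text, PythonTracebackLexer(), TerminalFormatter()))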
- """ - - name = 'Python Traceback' - aliases = ['pytb', 'py3tb'] - filenames = ['*.pytb', '*.py3tb'] - mimetypes = ['text/x-python-traceback', 'text/x-python3-traceback'] - - tokens = { - 'root': [ - (r'\n', Text), - (r'^Traceback \(most recent call last\):\n', Generic.Traceback, 'intb'), - (r'^During handling of the above exception, another ' - r'exception occurred:\n\n', Generic.Traceback), - (r'^The above exception was the direct cause of the ' - r'following exception:\n\n', Generic.Traceback), - (r'^(?= File "[^"]+", line \d+)', Generic.Traceback, 'intb'), - (r'^.*\n', Other), - ], - 'intb': [ - (r'^( File )("[^"]+")(, line )(\d+)(, in )(.+)(\n)', - bygroups(Text, Name.Builtin, Text, Number, Text, Name, Text)), - (r'^( File )("[^"]+")(, line )(\d+)(\n)', - bygroups(Text, Name.Builtin, Text, Number, Text)), - (r'^( )(.+)(\n)', - bygroups(Text, using(PythonLexer), Text), 'markers'), - (r'^([ \t]*)(\.\.\.)(\n)', - bygroups(Text, Comment, Text)), # for doctests... - (r'^([^:]+)(: )(.+)(\n)', - bygroups(Generic.Error, Text, Name, Text), '#pop'), - (r'^([a-zA-Z_][\w.]*)(:?\n)', - bygroups(Generic.Error, Text), '#pop') - ], - 'markers': [ - # Either `PEP 657 ` - # error locations in Python 3.11+, or single-caret markers - # for syntax errors before that. - (r'^( {4,})([~^]+)(\n)', - bygroups(Text, Punctuation.Marker, Text), - '#pop'), - default('#pop'), - ], - } - - -Python3TracebackLexer = PythonTracebackLexer - - -class Python2TracebackLexer(RegexLexer): - """ - For Python tracebacks. - - .. versionadded:: 0.7 - - .. versionchanged:: 2.5 - This class has been renamed from ``PythonTracebackLexer``. - ``PythonTracebackLexer`` now refers to the Python 3 variant. - """ - - name = 'Python 2.x Traceback' - aliases = ['py2tb'] - filenames = ['*.py2tb'] - mimetypes = ['text/x-python2-traceback'] - - tokens = { - 'root': [ - # Cover both (most recent call last) and (innermost last) - # The optional ^C allows us to catch keyboard interrupt signals. - (r'^(\^C)?(Traceback.*\n)', - bygroups(Text, Generic.Traceback), 'intb'), - # SyntaxError starts with this. - (r'^(?= File "[^"]+", line \d+)', Generic.Traceback, 'intb'), - (r'^.*\n', Other), - ], - 'intb': [ - (r'^( File )("[^"]+")(, line )(\d+)(, in )(.+)(\n)', - bygroups(Text, Name.Builtin, Text, Number, Text, Name, Text)), - (r'^( File )("[^"]+")(, line )(\d+)(\n)', - bygroups(Text, Name.Builtin, Text, Number, Text)), - (r'^( )(.+)(\n)', - bygroups(Text, using(Python2Lexer), Text), 'marker'), - (r'^([ \t]*)(\.\.\.)(\n)', - bygroups(Text, Comment, Text)), # for doctests... - (r'^([^:]+)(: )(.+)(\n)', - bygroups(Generic.Error, Text, Name, Text), '#pop'), - (r'^([a-zA-Z_]\w*)(:?\n)', - bygroups(Generic.Error, Text), '#pop') - ], - 'marker': [ - # For syntax errors. - (r'( {4,})(\^)', bygroups(Text, Punctuation.Marker), '#pop'), - default('#pop'), - ], - } - - -class CythonLexer(RegexLexer): - """ - For Pyrex and Cython source code. - - .. 
versionadded:: 1.1 - """ - - name = 'Cython' - url = 'http://cython.org' - aliases = ['cython', 'pyx', 'pyrex'] - filenames = ['*.pyx', '*.pxd', '*.pxi'] - mimetypes = ['text/x-cython', 'application/x-cython'] - - tokens = { - 'root': [ - (r'\n', Text), - (r'^(\s*)("""(?:.|\n)*?""")', bygroups(Text, String.Doc)), - (r"^(\s*)('''(?:.|\n)*?''')", bygroups(Text, String.Doc)), - (r'[^\S\n]+', Text), - (r'#.*$', Comment), - (r'[]{}:(),;[]', Punctuation), - (r'\\\n', Text), - (r'\\', Text), - (r'(in|is|and|or|not)\b', Operator.Word), - (r'(<)([a-zA-Z0-9.?]+)(>)', - bygroups(Punctuation, Keyword.Type, Punctuation)), - (r'!=|==|<<|>>|[-~+/*%=<>&^|.?]', Operator), - (r'(from)(\d+)(<=)(\s+)(<)(\d+)(:)', - bygroups(Keyword, Number.Integer, Operator, Name, Operator, - Name, Punctuation)), - include('keywords'), - (r'(def|property)(\s+)', bygroups(Keyword, Text), 'funcname'), - (r'(cp?def)(\s+)', bygroups(Keyword, Text), 'cdef'), - # (should actually start a block with only cdefs) - (r'(cdef)(:)', bygroups(Keyword, Punctuation)), - (r'(class|struct)(\s+)', bygroups(Keyword, Text), 'classname'), - (r'(from)(\s+)', bygroups(Keyword, Text), 'fromimport'), - (r'(c?import)(\s+)', bygroups(Keyword, Text), 'import'), - include('builtins'), - include('backtick'), - ('(?:[rR]|[uU][rR]|[rR][uU])"""', String, 'tdqs'), - ("(?:[rR]|[uU][rR]|[rR][uU])'''", String, 'tsqs'), - ('(?:[rR]|[uU][rR]|[rR][uU])"', String, 'dqs'), - ("(?:[rR]|[uU][rR]|[rR][uU])'", String, 'sqs'), - ('[uU]?"""', String, combined('stringescape', 'tdqs')), - ("[uU]?'''", String, combined('stringescape', 'tsqs')), - ('[uU]?"', String, combined('stringescape', 'dqs')), - ("[uU]?'", String, combined('stringescape', 'sqs')), - include('name'), - include('numbers'), - ], - 'keywords': [ - (words(( - 'assert', 'async', 'await', 'break', 'by', 'continue', 'ctypedef', 'del', 'elif', - 'else', 'except', 'except?', 'exec', 'finally', 'for', 'fused', 'gil', - 'global', 'if', 'include', 'lambda', 'nogil', 'pass', 'print', - 'raise', 'return', 'try', 'while', 'yield', 'as', 'with'), suffix=r'\b'), - Keyword), - (r'(DEF|IF|ELIF|ELSE)\b', Comment.Preproc), - ], - 'builtins': [ - (words(( - '__import__', 'abs', 'all', 'any', 'apply', 'basestring', 'bin', 'bint', - 'bool', 'buffer', 'bytearray', 'bytes', 'callable', 'chr', - 'classmethod', 'cmp', 'coerce', 'compile', 'complex', 'delattr', - 'dict', 'dir', 'divmod', 'enumerate', 'eval', 'execfile', 'exit', - 'file', 'filter', 'float', 'frozenset', 'getattr', 'globals', - 'hasattr', 'hash', 'hex', 'id', 'input', 'int', 'intern', 'isinstance', - 'issubclass', 'iter', 'len', 'list', 'locals', 'long', 'map', 'max', - 'min', 'next', 'object', 'oct', 'open', 'ord', 'pow', 'property', 'Py_ssize_t', - 'range', 'raw_input', 'reduce', 'reload', 'repr', 'reversed', - 'round', 'set', 'setattr', 'slice', 'sorted', 'staticmethod', - 'str', 'sum', 'super', 'tuple', 'type', 'unichr', 'unicode', 'unsigned', - 'vars', 'xrange', 'zip'), prefix=r'(? 
None: - """After call strategy that does nothing.""" - - -def after_log( - logger: "logging.Logger", - log_level: int, - sec_format: str = "%0.3f", -) -> typing.Callable[["RetryCallState"], None]: - """After call strategy that logs to some logger the finished attempt.""" - - def log_it(retry_state: "RetryCallState") -> None: - logger.log( - log_level, - f"Finished call to '{_utils.get_callback_name(retry_state.fn)}' " - f"after {sec_format % retry_state.seconds_since_start}(s), " - f"this was the {_utils.to_ordinal(retry_state.attempt_number)} time calling it.", - ) - - return log_it diff --git a/spaces/Raspberry-ai/main/.env/lib/python3.11/site-packages/pkg_resources/_vendor/zipp.py b/spaces/Raspberry-ai/main/.env/lib/python3.11/site-packages/pkg_resources/_vendor/zipp.py deleted file mode 100644 index 26b723c1fd3e25740e0268b8c9b50905c58c3d4a..0000000000000000000000000000000000000000 --- a/spaces/Raspberry-ai/main/.env/lib/python3.11/site-packages/pkg_resources/_vendor/zipp.py +++ /dev/null @@ -1,329 +0,0 @@ -import io -import posixpath -import zipfile -import itertools -import contextlib -import sys -import pathlib - -if sys.version_info < (3, 7): - from collections import OrderedDict -else: - OrderedDict = dict - - -__all__ = ['Path'] - - -def _parents(path): - """ - Given a path with elements separated by - posixpath.sep, generate all parents of that path. - - >>> list(_parents('b/d')) - ['b'] - >>> list(_parents('/b/d/')) - ['/b'] - >>> list(_parents('b/d/f/')) - ['b/d', 'b'] - >>> list(_parents('b')) - [] - >>> list(_parents('')) - [] - """ - return itertools.islice(_ancestry(path), 1, None) - - -def _ancestry(path): - """ - Given a path with elements separated by - posixpath.sep, generate all elements of that path - - >>> list(_ancestry('b/d')) - ['b/d', 'b'] - >>> list(_ancestry('/b/d/')) - ['/b/d', '/b'] - >>> list(_ancestry('b/d/f/')) - ['b/d/f', 'b/d', 'b'] - >>> list(_ancestry('b')) - ['b'] - >>> list(_ancestry('')) - [] - """ - path = path.rstrip(posixpath.sep) - while path and path != posixpath.sep: - yield path - path, tail = posixpath.split(path) - - -_dedupe = OrderedDict.fromkeys -"""Deduplicate an iterable in original order""" - - -def _difference(minuend, subtrahend): - """ - Return items in minuend not in subtrahend, retaining order - with O(1) lookup. - """ - return itertools.filterfalse(set(subtrahend).__contains__, minuend) - - -class CompleteDirs(zipfile.ZipFile): - """ - A ZipFile subclass that ensures that implied directories - are always included in the namelist. - """ - - @staticmethod - def _implied_dirs(names): - parents = itertools.chain.from_iterable(map(_parents, names)) - as_dirs = (p + posixpath.sep for p in parents) - return _dedupe(_difference(as_dirs, names)) - - def namelist(self): - names = super(CompleteDirs, self).namelist() - return names + list(self._implied_dirs(names)) - - def _name_set(self): - return set(self.namelist()) - - def resolve_dir(self, name): - """ - If the name represents a directory, return that name - as a directory (with the trailing slash). - """ - names = self._name_set() - dirname = name + '/' - dir_match = name not in names and dirname in names - return dirname if dir_match else name - - @classmethod - def make(cls, source): - """ - Given a source (filename or zipfile), return an - appropriate CompleteDirs subclass. 
- """ - if isinstance(source, CompleteDirs): - return source - - if not isinstance(source, zipfile.ZipFile): - return cls(_pathlib_compat(source)) - - # Only allow for FastLookup when supplied zipfile is read-only - if 'r' not in source.mode: - cls = CompleteDirs - - source.__class__ = cls - return source - - -class FastLookup(CompleteDirs): - """ - ZipFile subclass to ensure implicit - dirs exist and are resolved rapidly. - """ - - def namelist(self): - with contextlib.suppress(AttributeError): - return self.__names - self.__names = super(FastLookup, self).namelist() - return self.__names - - def _name_set(self): - with contextlib.suppress(AttributeError): - return self.__lookup - self.__lookup = super(FastLookup, self)._name_set() - return self.__lookup - - -def _pathlib_compat(path): - """ - For path-like objects, convert to a filename for compatibility - on Python 3.6.1 and earlier. - """ - try: - return path.__fspath__() - except AttributeError: - return str(path) - - -class Path: - """ - A pathlib-compatible interface for zip files. - - Consider a zip file with this structure:: - - . - ├── a.txt - └── b - ├── c.txt - └── d - └── e.txt - - >>> data = io.BytesIO() - >>> zf = zipfile.ZipFile(data, 'w') - >>> zf.writestr('a.txt', 'content of a') - >>> zf.writestr('b/c.txt', 'content of c') - >>> zf.writestr('b/d/e.txt', 'content of e') - >>> zf.filename = 'mem/abcde.zip' - - Path accepts the zipfile object itself or a filename - - >>> root = Path(zf) - - From there, several path operations are available. - - Directory iteration (including the zip file itself): - - >>> a, b = root.iterdir() - >>> a - Path('mem/abcde.zip', 'a.txt') - >>> b - Path('mem/abcde.zip', 'b/') - - name property: - - >>> b.name - 'b' - - join with divide operator: - - >>> c = b / 'c.txt' - >>> c - Path('mem/abcde.zip', 'b/c.txt') - >>> c.name - 'c.txt' - - Read text: - - >>> c.read_text() - 'content of c' - - existence: - - >>> c.exists() - True - >>> (b / 'missing.txt').exists() - False - - Coercion to string: - - >>> import os - >>> str(c).replace(os.sep, posixpath.sep) - 'mem/abcde.zip/b/c.txt' - - At the root, ``name``, ``filename``, and ``parent`` - resolve to the zipfile. Note these attributes are not - valid and will raise a ``ValueError`` if the zipfile - has no filename. - - >>> root.name - 'abcde.zip' - >>> str(root.filename).replace(os.sep, posixpath.sep) - 'mem/abcde.zip' - >>> str(root.parent) - 'mem' - """ - - __repr = "{self.__class__.__name__}({self.root.filename!r}, {self.at!r})" - - def __init__(self, root, at=""): - """ - Construct a Path from a ZipFile or filename. - - Note: When the source is an existing ZipFile object, - its type (__class__) will be mutated to a - specialized type. If the caller wishes to retain the - original type, the caller should either create a - separate ZipFile object or pass a filename. - """ - self.root = FastLookup.make(root) - self.at = at - - def open(self, mode='r', *args, pwd=None, **kwargs): - """ - Open this entry as text or binary following the semantics - of ``pathlib.Path.open()`` by passing arguments through - to io.TextIOWrapper(). 
- """ - if self.is_dir(): - raise IsADirectoryError(self) - zip_mode = mode[0] - if not self.exists() and zip_mode == 'r': - raise FileNotFoundError(self) - stream = self.root.open(self.at, zip_mode, pwd=pwd) - if 'b' in mode: - if args or kwargs: - raise ValueError("encoding args invalid for binary operation") - return stream - return io.TextIOWrapper(stream, *args, **kwargs) - - @property - def name(self): - return pathlib.Path(self.at).name or self.filename.name - - @property - def suffix(self): - return pathlib.Path(self.at).suffix or self.filename.suffix - - @property - def suffixes(self): - return pathlib.Path(self.at).suffixes or self.filename.suffixes - - @property - def stem(self): - return pathlib.Path(self.at).stem or self.filename.stem - - @property - def filename(self): - return pathlib.Path(self.root.filename).joinpath(self.at) - - def read_text(self, *args, **kwargs): - with self.open('r', *args, **kwargs) as strm: - return strm.read() - - def read_bytes(self): - with self.open('rb') as strm: - return strm.read() - - def _is_child(self, path): - return posixpath.dirname(path.at.rstrip("/")) == self.at.rstrip("/") - - def _next(self, at): - return self.__class__(self.root, at) - - def is_dir(self): - return not self.at or self.at.endswith("/") - - def is_file(self): - return self.exists() and not self.is_dir() - - def exists(self): - return self.at in self.root._name_set() - - def iterdir(self): - if not self.is_dir(): - raise ValueError("Can't listdir a file") - subs = map(self._next, self.root.namelist()) - return filter(self._is_child, subs) - - def __str__(self): - return posixpath.join(self.root.filename, self.at) - - def __repr__(self): - return self.__repr.format(self=self) - - def joinpath(self, *other): - next = posixpath.join(self.at, *map(_pathlib_compat, other)) - return self._next(self.root.resolve_dir(next)) - - __truediv__ = joinpath - - @property - def parent(self): - if not self.at: - return self.filename.parent - parent_at = posixpath.dirname(self.at.rstrip('/')) - if parent_at: - parent_at += '/' - return self._next(parent_at) diff --git a/spaces/Robert001/UniControl-Demo/annotator/uniformer/mmcv/cnn/resnet.py b/spaces/Robert001/UniControl-Demo/annotator/uniformer/mmcv/cnn/resnet.py deleted file mode 100644 index 1cb3ac057ee2d52c46fc94685b5d4e698aad8d5f..0000000000000000000000000000000000000000 --- a/spaces/Robert001/UniControl-Demo/annotator/uniformer/mmcv/cnn/resnet.py +++ /dev/null @@ -1,316 +0,0 @@ -# Copyright (c) OpenMMLab. All rights reserved. 
-import logging - -import torch.nn as nn -import torch.utils.checkpoint as cp - -from .utils import constant_init, kaiming_init - - -def conv3x3(in_planes, out_planes, stride=1, dilation=1): - """3x3 convolution with padding.""" - return nn.Conv2d( - in_planes, - out_planes, - kernel_size=3, - stride=stride, - padding=dilation, - dilation=dilation, - bias=False) - - -class BasicBlock(nn.Module): - expansion = 1 - - def __init__(self, - inplanes, - planes, - stride=1, - dilation=1, - downsample=None, - style='pytorch', - with_cp=False): - super(BasicBlock, self).__init__() - assert style in ['pytorch', 'caffe'] - self.conv1 = conv3x3(inplanes, planes, stride, dilation) - self.bn1 = nn.BatchNorm2d(planes) - self.relu = nn.ReLU(inplace=True) - self.conv2 = conv3x3(planes, planes) - self.bn2 = nn.BatchNorm2d(planes) - self.downsample = downsample - self.stride = stride - self.dilation = dilation - assert not with_cp - - def forward(self, x): - residual = x - - out = self.conv1(x) - out = self.bn1(out) - out = self.relu(out) - - out = self.conv2(out) - out = self.bn2(out) - - if self.downsample is not None: - residual = self.downsample(x) - - out += residual - out = self.relu(out) - - return out - - -class Bottleneck(nn.Module): - expansion = 4 - - def __init__(self, - inplanes, - planes, - stride=1, - dilation=1, - downsample=None, - style='pytorch', - with_cp=False): - """Bottleneck block. - - If style is "pytorch", the stride-two layer is the 3x3 conv layer, if - it is "caffe", the stride-two layer is the first 1x1 conv layer. - """ - super(Bottleneck, self).__init__() - assert style in ['pytorch', 'caffe'] - if style == 'pytorch': - conv1_stride = 1 - conv2_stride = stride - else: - conv1_stride = stride - conv2_stride = 1 - self.conv1 = nn.Conv2d( - inplanes, planes, kernel_size=1, stride=conv1_stride, bias=False) - self.conv2 = nn.Conv2d( - planes, - planes, - kernel_size=3, - stride=conv2_stride, - padding=dilation, - dilation=dilation, - bias=False) - - self.bn1 = nn.BatchNorm2d(planes) - self.bn2 = nn.BatchNorm2d(planes) - self.conv3 = nn.Conv2d( - planes, planes * self.expansion, kernel_size=1, bias=False) - self.bn3 = nn.BatchNorm2d(planes * self.expansion) - self.relu = nn.ReLU(inplace=True) - self.downsample = downsample - self.stride = stride - self.dilation = dilation - self.with_cp = with_cp - - def forward(self, x): - - def _inner_forward(x): - residual = x - - out = self.conv1(x) - out = self.bn1(out) - out = self.relu(out) - - out = self.conv2(out) - out = self.bn2(out) - out = self.relu(out) - - out = self.conv3(out) - out = self.bn3(out) - - if self.downsample is not None: - residual = self.downsample(x) - - out += residual - - return out - - if self.with_cp and x.requires_grad: - out = cp.checkpoint(_inner_forward, x) - else: - out = _inner_forward(x) - - out = self.relu(out) - - return out - - -def make_res_layer(block, - inplanes, - planes, - blocks, - stride=1, - dilation=1, - style='pytorch', - with_cp=False): - downsample = None - if stride != 1 or inplanes != planes * block.expansion: - downsample = nn.Sequential( - nn.Conv2d( - inplanes, - planes * block.expansion, - kernel_size=1, - stride=stride, - bias=False), - nn.BatchNorm2d(planes * block.expansion), - ) - - layers = [] - layers.append( - block( - inplanes, - planes, - stride, - dilation, - downsample, - style=style, - with_cp=with_cp)) - inplanes = planes * block.expansion - for _ in range(1, blocks): - layers.append( - block(inplanes, planes, 1, dilation, style=style, with_cp=with_cp)) - - return 
nn.Sequential(*layers) - - -class ResNet(nn.Module): - """ResNet backbone. - - Args: - depth (int): Depth of resnet, from {18, 34, 50, 101, 152}. - num_stages (int): Resnet stages, normally 4. - strides (Sequence[int]): Strides of the first block of each stage. - dilations (Sequence[int]): Dilation of each stage. - out_indices (Sequence[int]): Output from which stages. - style (str): `pytorch` or `caffe`. If set to "pytorch", the stride-two - layer is the 3x3 conv layer, otherwise the stride-two layer is - the first 1x1 conv layer. - frozen_stages (int): Stages to be frozen (all param fixed). -1 means - not freezing any parameters. - bn_eval (bool): Whether to set BN layers as eval mode, namely, freeze - running stats (mean and var). - bn_frozen (bool): Whether to freeze weight and bias of BN layers. - with_cp (bool): Use checkpoint or not. Using checkpoint will save some - memory while slowing down the training speed. - """ - - arch_settings = { - 18: (BasicBlock, (2, 2, 2, 2)), - 34: (BasicBlock, (3, 4, 6, 3)), - 50: (Bottleneck, (3, 4, 6, 3)), - 101: (Bottleneck, (3, 4, 23, 3)), - 152: (Bottleneck, (3, 8, 36, 3)) - } - - def __init__(self, - depth, - num_stages=4, - strides=(1, 2, 2, 2), - dilations=(1, 1, 1, 1), - out_indices=(0, 1, 2, 3), - style='pytorch', - frozen_stages=-1, - bn_eval=True, - bn_frozen=False, - with_cp=False): - super(ResNet, self).__init__() - if depth not in self.arch_settings: - raise KeyError(f'invalid depth {depth} for resnet') - assert num_stages >= 1 and num_stages <= 4 - block, stage_blocks = self.arch_settings[depth] - stage_blocks = stage_blocks[:num_stages] - assert len(strides) == len(dilations) == num_stages - assert max(out_indices) < num_stages - - self.out_indices = out_indices - self.style = style - self.frozen_stages = frozen_stages - self.bn_eval = bn_eval - self.bn_frozen = bn_frozen - self.with_cp = with_cp - - self.inplanes = 64 - self.conv1 = nn.Conv2d( - 3, 64, kernel_size=7, stride=2, padding=3, bias=False) - self.bn1 = nn.BatchNorm2d(64) - self.relu = nn.ReLU(inplace=True) - self.maxpool = nn.MaxPool2d(kernel_size=3, stride=2, padding=1) - - self.res_layers = [] - for i, num_blocks in enumerate(stage_blocks): - stride = strides[i] - dilation = dilations[i] - planes = 64 * 2**i - res_layer = make_res_layer( - block, - self.inplanes, - planes, - num_blocks, - stride=stride, - dilation=dilation, - style=self.style, - with_cp=with_cp) - self.inplanes = planes * block.expansion - layer_name = f'layer{i + 1}' - self.add_module(layer_name, res_layer) - self.res_layers.append(layer_name) - - self.feat_dim = block.expansion * 64 * 2**(len(stage_blocks) - 1) - - def init_weights(self, pretrained=None): - if isinstance(pretrained, str): - logger = logging.getLogger() - from ..runner import load_checkpoint - load_checkpoint(self, pretrained, strict=False, logger=logger) - elif pretrained is None: - for m in self.modules(): - if isinstance(m, nn.Conv2d): - kaiming_init(m) - elif isinstance(m, nn.BatchNorm2d): - constant_init(m, 1) - else: - raise TypeError('pretrained must be a str or None') - - def forward(self, x): - x = self.conv1(x) - x = self.bn1(x) - x = self.relu(x) - x = self.maxpool(x) - outs = [] - for i, layer_name in enumerate(self.res_layers): - res_layer = getattr(self, layer_name) - x = res_layer(x) - if i in self.out_indices: - outs.append(x) - if len(outs) == 1: - return outs[0] - else: - return tuple(outs) - - def train(self, mode=True): - super(ResNet, self).train(mode) - if self.bn_eval: - for m in self.modules(): - if isinstance(m, 
nn.BatchNorm2d): - m.eval() - if self.bn_frozen: - for params in m.parameters(): - params.requires_grad = False - if mode and self.frozen_stages >= 0: - for param in self.conv1.parameters(): - param.requires_grad = False - for param in self.bn1.parameters(): - param.requires_grad = False - self.bn1.eval() - self.bn1.weight.requires_grad = False - self.bn1.bias.requires_grad = False - for i in range(1, self.frozen_stages + 1): - mod = getattr(self, f'layer{i}') - mod.eval() - for param in mod.parameters(): - param.requires_grad = False diff --git a/spaces/Robert001/UniControl-Demo/annotator/uniformer/mmcv/runner/hooks/lr_updater.py b/spaces/Robert001/UniControl-Demo/annotator/uniformer/mmcv/runner/hooks/lr_updater.py deleted file mode 100644 index 6365908ddf6070086de2ffc0afada46ed2f32256..0000000000000000000000000000000000000000 --- a/spaces/Robert001/UniControl-Demo/annotator/uniformer/mmcv/runner/hooks/lr_updater.py +++ /dev/null @@ -1,670 +0,0 @@ -# Copyright (c) OpenMMLab. All rights reserved. -import numbers -from math import cos, pi - -import annotator.uniformer.mmcv as mmcv -from .hook import HOOKS, Hook - - -class LrUpdaterHook(Hook): - """LR Scheduler in MMCV. - - Args: - by_epoch (bool): LR changes epoch by epoch - warmup (string): Type of warmup used. It can be None (use no warmup), - 'constant', 'linear' or 'exp' - warmup_iters (int): The number of iterations or epochs that warmup - lasts - warmup_ratio (float): LR used at the beginning of warmup equals - warmup_ratio * initial_lr - warmup_by_epoch (bool): When warmup_by_epoch == True, warmup_iters - means the number of epochs that warmup lasts, otherwise means the - number of iterations that warmup lasts - """ - - def __init__(self, - by_epoch=True, - warmup=None, - warmup_iters=0, - warmup_ratio=0.1, - warmup_by_epoch=False): - # validate the "warmup" argument - if warmup is not None: - if warmup not in ['constant', 'linear', 'exp']: - raise ValueError( - f'"{warmup}" is not a supported type for warming up, valid' - ' types are "constant", "linear" and "exp"') - if warmup is not None: - assert warmup_iters > 0, \ - '"warmup_iters" must be a positive integer' - assert 0 < warmup_ratio <= 1.0, \ - '"warmup_ratio" must be in range (0,1]' - - self.by_epoch = by_epoch - self.warmup = warmup - self.warmup_iters = warmup_iters - self.warmup_ratio = warmup_ratio - self.warmup_by_epoch = warmup_by_epoch - - if self.warmup_by_epoch: - self.warmup_epochs = self.warmup_iters - self.warmup_iters = None - else: - self.warmup_epochs = None - - self.base_lr = []  # initial lr for all param groups - self.regular_lr = []  # expected lr if no warming up is performed - - def _set_lr(self, runner, lr_groups): - if isinstance(runner.optimizer, dict): - for k, optim in runner.optimizer.items(): - for param_group, lr in zip(optim.param_groups, lr_groups[k]): - param_group['lr'] = lr - else: - for param_group, lr in zip(runner.optimizer.param_groups, - lr_groups): - param_group['lr'] = lr - - def get_lr(self, runner, base_lr): - raise NotImplementedError - - def get_regular_lr(self, runner): - if isinstance(runner.optimizer, dict): - lr_groups = {} - for k in runner.optimizer.keys(): - _lr_group = [ - self.get_lr(runner, _base_lr) - for _base_lr in self.base_lr[k] - ] - lr_groups.update({k: _lr_group}) - - return lr_groups - else: - return [self.get_lr(runner, _base_lr) for _base_lr in self.base_lr] - - def get_warmup_lr(self, cur_iters): - - def _get_warmup_lr(cur_iters, regular_lr): - if self.warmup == 'constant': - warmup_lr = [_lr * self.warmup_ratio for 
_lr in regular_lr] - elif self.warmup == 'linear': - k = (1 - cur_iters / self.warmup_iters) * (1 - - self.warmup_ratio) - warmup_lr = [_lr * (1 - k) for _lr in regular_lr] - elif self.warmup == 'exp': - k = self.warmup_ratio**(1 - cur_iters / self.warmup_iters) - warmup_lr = [_lr * k for _lr in regular_lr] - return warmup_lr - - if isinstance(self.regular_lr, dict): - lr_groups = {} - for key, regular_lr in self.regular_lr.items(): - lr_groups[key] = _get_warmup_lr(cur_iters, regular_lr) - return lr_groups - else: - return _get_warmup_lr(cur_iters, self.regular_lr) - - def before_run(self, runner): - # NOTE: when resuming from a checkpoint, if 'initial_lr' is not saved, - # it will be set according to the optimizer params - if isinstance(runner.optimizer, dict): - self.base_lr = {} - for k, optim in runner.optimizer.items(): - for group in optim.param_groups: - group.setdefault('initial_lr', group['lr']) - _base_lr = [ - group['initial_lr'] for group in optim.param_groups - ] - self.base_lr.update({k: _base_lr}) - else: - for group in runner.optimizer.param_groups: - group.setdefault('initial_lr', group['lr']) - self.base_lr = [ - group['initial_lr'] for group in runner.optimizer.param_groups - ] - - def before_train_epoch(self, runner): - if self.warmup_iters is None: - epoch_len = len(runner.data_loader) - self.warmup_iters = self.warmup_epochs * epoch_len - - if not self.by_epoch: - return - - self.regular_lr = self.get_regular_lr(runner) - self._set_lr(runner, self.regular_lr) - - def before_train_iter(self, runner): - cur_iter = runner.iter - if not self.by_epoch: - self.regular_lr = self.get_regular_lr(runner) - if self.warmup is None or cur_iter >= self.warmup_iters: - self._set_lr(runner, self.regular_lr) - else: - warmup_lr = self.get_warmup_lr(cur_iter) - self._set_lr(runner, warmup_lr) - elif self.by_epoch: - if self.warmup is None or cur_iter > self.warmup_iters: - return - elif cur_iter == self.warmup_iters: - self._set_lr(runner, self.regular_lr) - else: - warmup_lr = self.get_warmup_lr(cur_iter) - self._set_lr(runner, warmup_lr) - - -@HOOKS.register_module() -class FixedLrUpdaterHook(LrUpdaterHook): - - def __init__(self, **kwargs): - super(FixedLrUpdaterHook, self).__init__(**kwargs) - - def get_lr(self, runner, base_lr): - return base_lr - - -@HOOKS.register_module() -class StepLrUpdaterHook(LrUpdaterHook): - """Step LR scheduler with min_lr clipping. - - Args: - step (int | list[int]): Step to decay the LR. If an int value is given, - regard it as the decay interval. If a list is given, decay LR at - these steps. - gamma (float, optional): Decay LR ratio. Default: 0.1. - min_lr (float, optional): Minimum LR value to keep. If LR after decay - is lower than `min_lr`, it will be clipped to this value. If None - is given, we don't perform lr clipping. Default: None. 
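- - Example (illustrative numbers): with ``step=[8, 11]``, ``gamma=0.1`` and - a base lr of 0.01, the lr is 0.01 for epochs [0, 8), 0.001 for epochs - [8, 11) and 0.0001 afterwards; setting ``min_lr=5e-4`` would clip that - last value up to 5e-4.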
- """ - - def __init__(self, step, gamma=0.1, min_lr=None, **kwargs): - if isinstance(step, list): - assert mmcv.is_list_of(step, int) - assert all([s > 0 for s in step]) - elif isinstance(step, int): - assert step > 0 - else: - raise TypeError('"step" must be a list or integer') - self.step = step - self.gamma = gamma - self.min_lr = min_lr - super(StepLrUpdaterHook, self).__init__(**kwargs) - - def get_lr(self, runner, base_lr): - progress = runner.epoch if self.by_epoch else runner.iter - - # calculate exponential term - if isinstance(self.step, int): - exp = progress // self.step - else: - exp = len(self.step) - for i, s in enumerate(self.step): - if progress < s: - exp = i - break - - lr = base_lr * (self.gamma**exp) - if self.min_lr is not None: - # clip to a minimum value - lr = max(lr, self.min_lr) - return lr - - -@HOOKS.register_module() -class ExpLrUpdaterHook(LrUpdaterHook): - - def __init__(self, gamma, **kwargs): - self.gamma = gamma - super(ExpLrUpdaterHook, self).__init__(**kwargs) - - def get_lr(self, runner, base_lr): - progress = runner.epoch if self.by_epoch else runner.iter - return base_lr * self.gamma**progress - - -@HOOKS.register_module() -class PolyLrUpdaterHook(LrUpdaterHook): - - def __init__(self, power=1., min_lr=0., **kwargs): - self.power = power - self.min_lr = min_lr - super(PolyLrUpdaterHook, self).__init__(**kwargs) - - def get_lr(self, runner, base_lr): - if self.by_epoch: - progress = runner.epoch - max_progress = runner.max_epochs - else: - progress = runner.iter - max_progress = runner.max_iters - coeff = (1 - progress / max_progress)**self.power - return (base_lr - self.min_lr) * coeff + self.min_lr - - -@HOOKS.register_module() -class InvLrUpdaterHook(LrUpdaterHook): - - def __init__(self, gamma, power=1., **kwargs): - self.gamma = gamma - self.power = power - super(InvLrUpdaterHook, self).__init__(**kwargs) - - def get_lr(self, runner, base_lr): - progress = runner.epoch if self.by_epoch else runner.iter - return base_lr * (1 + self.gamma * progress)**(-self.power) - - -@HOOKS.register_module() -class CosineAnnealingLrUpdaterHook(LrUpdaterHook): - - def __init__(self, min_lr=None, min_lr_ratio=None, **kwargs): - assert (min_lr is None) ^ (min_lr_ratio is None) - self.min_lr = min_lr - self.min_lr_ratio = min_lr_ratio - super(CosineAnnealingLrUpdaterHook, self).__init__(**kwargs) - - def get_lr(self, runner, base_lr): - if self.by_epoch: - progress = runner.epoch - max_progress = runner.max_epochs - else: - progress = runner.iter - max_progress = runner.max_iters - - if self.min_lr_ratio is not None: - target_lr = base_lr * self.min_lr_ratio - else: - target_lr = self.min_lr - return annealing_cos(base_lr, target_lr, progress / max_progress) - - -@HOOKS.register_module() -class FlatCosineAnnealingLrUpdaterHook(LrUpdaterHook): - """Flat + Cosine lr schedule. - - Modified from https://github.com/fastai/fastai/blob/master/fastai/callback/schedule.py#L128 # noqa: E501 - - Args: - start_percent (float): When to start annealing the learning rate - after the percentage of the total training steps. - The value should be in range [0, 1). - Default: 0.75 - min_lr (float, optional): The minimum lr. Default: None. - min_lr_ratio (float, optional): The ratio of minimum lr to the base lr. - Either `min_lr` or `min_lr_ratio` should be specified. - Default: None. 
- """ - - def __init__(self, - start_percent=0.75, - min_lr=None, - min_lr_ratio=None, - **kwargs): - assert (min_lr is None) ^ (min_lr_ratio is None) - if start_percent < 0 or start_percent > 1 or not isinstance( - start_percent, float): - raise ValueError( - 'expected float between 0 and 1 start_percent, but ' - f'got {start_percent}') - self.start_percent = start_percent - self.min_lr = min_lr - self.min_lr_ratio = min_lr_ratio - super(FlatCosineAnnealingLrUpdaterHook, self).__init__(**kwargs) - - def get_lr(self, runner, base_lr): - if self.by_epoch: - start = round(runner.max_epochs * self.start_percent) - progress = runner.epoch - start - max_progress = runner.max_epochs - start - else: - start = round(runner.max_iters * self.start_percent) - progress = runner.iter - start - max_progress = runner.max_iters - start - - if self.min_lr_ratio is not None: - target_lr = base_lr * self.min_lr_ratio - else: - target_lr = self.min_lr - - if progress < 0: - return base_lr - else: - return annealing_cos(base_lr, target_lr, progress / max_progress) - - -@HOOKS.register_module() -class CosineRestartLrUpdaterHook(LrUpdaterHook): - """Cosine annealing with restarts learning rate scheme. - - Args: - periods (list[int]): Periods for each cosine anneling cycle. - restart_weights (list[float], optional): Restart weights at each - restart iteration. Default: [1]. - min_lr (float, optional): The minimum lr. Default: None. - min_lr_ratio (float, optional): The ratio of minimum lr to the base lr. - Either `min_lr` or `min_lr_ratio` should be specified. - Default: None. - """ - - def __init__(self, - periods, - restart_weights=[1], - min_lr=None, - min_lr_ratio=None, - **kwargs): - assert (min_lr is None) ^ (min_lr_ratio is None) - self.periods = periods - self.min_lr = min_lr - self.min_lr_ratio = min_lr_ratio - self.restart_weights = restart_weights - assert (len(self.periods) == len(self.restart_weights) - ), 'periods and restart_weights should have the same length.' - super(CosineRestartLrUpdaterHook, self).__init__(**kwargs) - - self.cumulative_periods = [ - sum(self.periods[0:i + 1]) for i in range(0, len(self.periods)) - ] - - def get_lr(self, runner, base_lr): - if self.by_epoch: - progress = runner.epoch - else: - progress = runner.iter - - if self.min_lr_ratio is not None: - target_lr = base_lr * self.min_lr_ratio - else: - target_lr = self.min_lr - - idx = get_position_from_periods(progress, self.cumulative_periods) - current_weight = self.restart_weights[idx] - nearest_restart = 0 if idx == 0 else self.cumulative_periods[idx - 1] - current_periods = self.periods[idx] - - alpha = min((progress - nearest_restart) / current_periods, 1) - return annealing_cos(base_lr, target_lr, alpha, current_weight) - - -def get_position_from_periods(iteration, cumulative_periods): - """Get the position from a period list. - - It will return the index of the right-closest number in the period list. - For example, the cumulative_periods = [100, 200, 300, 400], - if iteration == 50, return 0; - if iteration == 210, return 2; - if iteration == 300, return 3. - - Args: - iteration (int): Current iteration. - cumulative_periods (list[int]): Cumulative period list. - - Returns: - int: The position of the right-closest number in the period list. 
- """ - for i, period in enumerate(cumulative_periods): - if iteration < period: - return i - raise ValueError(f'Current iteration {iteration} exceeds ' - f'cumulative_periods {cumulative_periods}') - - -@HOOKS.register_module() -class CyclicLrUpdaterHook(LrUpdaterHook): - """Cyclic LR Scheduler. - - Implement the cyclical learning rate policy (CLR) described in - https://arxiv.org/pdf/1506.01186.pdf - - Different from the original paper, we use cosine annealing rather than - triangular policy inside a cycle. This improves the performance in the - 3D detection area. - - Args: - by_epoch (bool): Whether to update LR by epoch. - target_ratio (tuple[float]): Relative ratio of the highest LR and the - lowest LR to the initial LR. - cyclic_times (int): Number of cycles during training - step_ratio_up (float): The ratio of the increasing process of LR in - the total cycle. - anneal_strategy (str): {'cos', 'linear'} - Specifies the annealing strategy: 'cos' for cosine annealing, - 'linear' for linear annealing. Default: 'cos'. - """ - - def __init__(self, - by_epoch=False, - target_ratio=(10, 1e-4), - cyclic_times=1, - step_ratio_up=0.4, - anneal_strategy='cos', - **kwargs): - if isinstance(target_ratio, float): - target_ratio = (target_ratio, target_ratio / 1e5) - elif isinstance(target_ratio, tuple): - target_ratio = (target_ratio[0], target_ratio[0] / 1e5) \ - if len(target_ratio) == 1 else target_ratio - else: - raise ValueError('target_ratio should be either float ' - f'or tuple, got {type(target_ratio)}') - - assert len(target_ratio) == 2, \ - '"target_ratio" must be list or tuple of two floats' - assert 0 <= step_ratio_up < 1.0, \ - '"step_ratio_up" must be in range [0,1)' - - self.target_ratio = target_ratio - self.cyclic_times = cyclic_times - self.step_ratio_up = step_ratio_up - self.lr_phases = [] # init lr_phases - # validate anneal_strategy - if anneal_strategy not in ['cos', 'linear']: - raise ValueError('anneal_strategy must be one of "cos" or ' - f'"linear", instead got {anneal_strategy}') - elif anneal_strategy == 'cos': - self.anneal_func = annealing_cos - elif anneal_strategy == 'linear': - self.anneal_func = annealing_linear - - assert not by_epoch, \ - 'currently only support "by_epoch" = False' - super(CyclicLrUpdaterHook, self).__init__(by_epoch, **kwargs) - - def before_run(self, runner): - super(CyclicLrUpdaterHook, self).before_run(runner) - # initiate lr_phases - # total lr_phases are separated as up and down - max_iter_per_phase = runner.max_iters // self.cyclic_times - iter_up_phase = int(self.step_ratio_up * max_iter_per_phase) - self.lr_phases.append( - [0, iter_up_phase, max_iter_per_phase, 1, self.target_ratio[0]]) - self.lr_phases.append([ - iter_up_phase, max_iter_per_phase, max_iter_per_phase, - self.target_ratio[0], self.target_ratio[1] - ]) - - def get_lr(self, runner, base_lr): - curr_iter = runner.iter - for (start_iter, end_iter, max_iter_per_phase, start_ratio, - end_ratio) in self.lr_phases: - curr_iter %= max_iter_per_phase - if start_iter <= curr_iter < end_iter: - progress = curr_iter - start_iter - return self.anneal_func(base_lr * start_ratio, - base_lr * end_ratio, - progress / (end_iter - start_iter)) - - -@HOOKS.register_module() -class OneCycleLrUpdaterHook(LrUpdaterHook): - """One Cycle LR Scheduler. - - The 1cycle learning rate policy changes the learning rate after every - batch. 
The one cycle learning rate policy is described in - https://arxiv.org/pdf/1708.07120.pdf - - Args: - max_lr (float or list): Upper learning rate boundaries in the cycle - for each parameter group. - total_steps (int, optional): The total number of steps in the cycle. - Note that if a value is not provided here, it will be the max_iter - of runner. Default: None. - pct_start (float): The percentage of the cycle (in number of steps) - spent increasing the learning rate. - Default: 0.3 - anneal_strategy (str): {'cos', 'linear'} - Specifies the annealing strategy: 'cos' for cosine annealing, - 'linear' for linear annealing. - Default: 'cos' - div_factor (float): Determines the initial learning rate via - initial_lr = max_lr/div_factor - Default: 25 - final_div_factor (float): Determines the minimum learning rate via - min_lr = initial_lr/final_div_factor - Default: 1e4 - three_phase (bool): If three_phase is True, use a third phase of the - schedule to annihilate the learning rate according to - final_div_factor instead of modifying the second phase (the first - two phases will be symmetrical about the step indicated by - pct_start). - Default: False - """ - - def __init__(self, - max_lr, - total_steps=None, - pct_start=0.3, - anneal_strategy='cos', - div_factor=25, - final_div_factor=1e4, - three_phase=False, - **kwargs): - # validate by_epoch, currently only support by_epoch = False - if 'by_epoch' not in kwargs: - kwargs['by_epoch'] = False - else: - assert not kwargs['by_epoch'], \ - 'currently only support "by_epoch" = False' - if not isinstance(max_lr, (numbers.Number, list, dict)): - raise ValueError('the type of max_lr must be the one of list or ' - f'dict, but got {type(max_lr)}') - self._max_lr = max_lr - if total_steps is not None: - if not isinstance(total_steps, int): - raise ValueError('the type of total_steps must be int, but' - f'got {type(total_steps)}') - self.total_steps = total_steps - # validate pct_start - if pct_start < 0 or pct_start > 1 or not isinstance(pct_start, float): - raise ValueError('expected float between 0 and 1 pct_start, but ' - f'got {pct_start}') - self.pct_start = pct_start - # validate anneal_strategy - if anneal_strategy not in ['cos', 'linear']: - raise ValueError('anneal_strategy must be one of "cos" or ' - f'"linear", instead got {anneal_strategy}') - elif anneal_strategy == 'cos': - self.anneal_func = annealing_cos - elif anneal_strategy == 'linear': - self.anneal_func = annealing_linear - self.div_factor = div_factor - self.final_div_factor = final_div_factor - self.three_phase = three_phase - self.lr_phases = [] # init lr_phases - super(OneCycleLrUpdaterHook, self).__init__(**kwargs) - - def before_run(self, runner): - if hasattr(self, 'total_steps'): - total_steps = self.total_steps - else: - total_steps = runner.max_iters - if total_steps < runner.max_iters: - raise ValueError( - 'The total steps must be greater than or equal to max ' - f'iterations {runner.max_iters} of runner, but total steps ' - f'is {total_steps}.') - - if isinstance(runner.optimizer, dict): - self.base_lr = {} - for k, optim in runner.optimizer.items(): - _max_lr = format_param(k, optim, self._max_lr) - self.base_lr[k] = [lr / self.div_factor for lr in _max_lr] - for group, lr in zip(optim.param_groups, self.base_lr[k]): - group.setdefault('initial_lr', lr) - else: - k = type(runner.optimizer).__name__ - _max_lr = format_param(k, runner.optimizer, self._max_lr) - self.base_lr = [lr / self.div_factor for lr in _max_lr] - for group, lr in zip(runner.optimizer.param_groups, 
self.base_lr): - group.setdefault('initial_lr', lr) - - if self.three_phase: - self.lr_phases.append( - [float(self.pct_start * total_steps) - 1, 1, self.div_factor]) - self.lr_phases.append([ - float(2 * self.pct_start * total_steps) - 2, self.div_factor, 1 - ]) - self.lr_phases.append( - [total_steps - 1, 1, 1 / self.final_div_factor]) - else: - self.lr_phases.append( - [float(self.pct_start * total_steps) - 1, 1, self.div_factor]) - self.lr_phases.append( - [total_steps - 1, self.div_factor, 1 / self.final_div_factor]) - - def get_lr(self, runner, base_lr): - curr_iter = runner.iter - start_iter = 0 - for i, (end_iter, start_lr, end_lr) in enumerate(self.lr_phases): - if curr_iter <= end_iter: - pct = (curr_iter - start_iter) / (end_iter - start_iter) - lr = self.anneal_func(base_lr * start_lr, base_lr * end_lr, - pct) - break - start_iter = end_iter - return lr - - -def annealing_cos(start, end, factor, weight=1): - """Calculate annealing cos learning rate. - - Cosine anneal from `weight * start + (1 - weight) * end` to `end` as - percentage goes from 0.0 to 1.0. - - Args: - start (float): The starting learning rate of the cosine annealing. - end (float): The ending learning rate of the cosine annealing. - factor (float): The coefficient of `pi` when calculating the current - percentage. Range from 0.0 to 1.0. - weight (float, optional): The combination factor of `start` and `end` - when calculating the actual starting learning rate. Defaults to 1. - """ - cos_out = cos(pi * factor) + 1 - return end + 0.5 * weight * (start - end) * cos_out - - -def annealing_linear(start, end, factor): - """Calculate annealing linear learning rate. - - Linear anneal from `start` to `end` as percentage goes from 0.0 to 1.0. - - Args: - start (float): The starting learning rate of the linear annealing. - end (float): The ending learning rate of the linear annealing. - factor (float): The interpolation factor between `start` and `end`. - Range from 0.0 to 1.0. - """ - return start + (end - start) * factor - - -def format_param(name, optim, param): - if isinstance(param, numbers.Number): - return [param] * len(optim.param_groups) - elif isinstance(param, (list, tuple)):  # multi param groups - if len(param) != len(optim.param_groups): - raise ValueError(f'expected {len(optim.param_groups)} ' - f'values for {name}, got {len(param)}') - return param - else:  # multi optimizers - if name not in param: - raise KeyError(f'{name} is not found in {param.keys()}') - return param[name] diff --git a/spaces/Robert001/UniControl-Demo/annotator/uniformer/mmdet/datasets/dataset_wrappers.py b/spaces/Robert001/UniControl-Demo/annotator/uniformer/mmdet/datasets/dataset_wrappers.py deleted file mode 100644 index 55ad5cb60e581a96bdbd1fbbeebc2f46f8c4e899..0000000000000000000000000000000000000000 --- a/spaces/Robert001/UniControl-Demo/annotator/uniformer/mmdet/datasets/dataset_wrappers.py +++ /dev/null @@ -1,282 +0,0 @@ -import bisect -import math -from collections import defaultdict - -import numpy as np -from mmcv.utils import print_log -from torch.utils.data.dataset import ConcatDataset as _ConcatDataset - -from .builder import DATASETS -from .coco import CocoDataset - - -@DATASETS.register_module() -class ConcatDataset(_ConcatDataset): - """A wrapper of concatenated dataset. - - Same as :obj:`torch.utils.data.dataset.ConcatDataset`, but - concat the group flag for image aspect ratio. - - Args: - datasets (list[:obj:`Dataset`]): A list of datasets. 
- separate_eval (bool): Whether to evaluate the results - separately if it is used as validation dataset. - Defaults to True. - """ - - def __init__(self, datasets, separate_eval=True): - super(ConcatDataset, self).__init__(datasets) - self.CLASSES = datasets[0].CLASSES - self.separate_eval = separate_eval - if not separate_eval: - if any([isinstance(ds, CocoDataset) for ds in datasets]): - raise NotImplementedError( - 'Evaluating concatenated CocoDataset as a whole is not' - ' supported! Please set "separate_eval=True"') - elif len(set([type(ds) for ds in datasets])) != 1: - raise NotImplementedError( - 'All the datasets should have same types') - - if hasattr(datasets[0], 'flag'): - flags = [] - for i in range(0, len(datasets)): - flags.append(datasets[i].flag) - self.flag = np.concatenate(flags) - - def get_cat_ids(self, idx): - """Get category ids of concatenated dataset by index. - - Args: - idx (int): Index of data. - - Returns: - list[int]: All categories in the image of specified index. - """ - - if idx < 0: - if -idx > len(self): - raise ValueError( - 'absolute value of index should not exceed dataset length') - idx = len(self) + idx - dataset_idx = bisect.bisect_right(self.cumulative_sizes, idx) - if dataset_idx == 0: - sample_idx = idx - else: - sample_idx = idx - self.cumulative_sizes[dataset_idx - 1] - return self.datasets[dataset_idx].get_cat_ids(sample_idx) - - def evaluate(self, results, logger=None, **kwargs): - """Evaluate the results. - - Args: - results (list[list | tuple]): Testing results of the dataset. - logger (logging.Logger | str | None): Logger used for printing - related information during evaluation. Default: None. - - Returns: - dict[str: float]: AP results of the total dataset or each separate - dataset if `self.separate_eval=True`. - """ - assert len(results) == self.cumulative_sizes[-1], \ - ('Dataset and results have different sizes: ' - f'{self.cumulative_sizes[-1]} v.s. {len(results)}') - - # Check whether all the datasets support evaluation - for dataset in self.datasets: - assert hasattr(dataset, 'evaluate'), \ - f'{type(dataset)} does not implement evaluate function' - - if self.separate_eval: - dataset_idx = -1 - total_eval_results = dict() - for size, dataset in zip(self.cumulative_sizes, self.datasets): - start_idx = 0 if dataset_idx == -1 else \ - self.cumulative_sizes[dataset_idx] - end_idx = self.cumulative_sizes[dataset_idx + 1] - - results_per_dataset = results[start_idx:end_idx] - print_log( - f'\nEvaluating {dataset.ann_file} with ' - f'{len(results_per_dataset)} images now', - logger=logger) - - eval_results_per_dataset = dataset.evaluate( - results_per_dataset, logger=logger, **kwargs) - dataset_idx += 1 - for k, v in eval_results_per_dataset.items(): - total_eval_results.update({f'{dataset_idx}_{k}': v}) - - return total_eval_results - elif any([isinstance(ds, CocoDataset) for ds in self.datasets]): - raise NotImplementedError( - 'Evaluating concatenated CocoDataset as a whole is not' - ' supported! 
Please set "separate_eval=True"') - elif len(set([type(ds) for ds in self.datasets])) != 1: - raise NotImplementedError( - 'All the datasets should have same types') - else: - original_data_infos = self.datasets[0].data_infos - self.datasets[0].data_infos = sum( - [dataset.data_infos for dataset in self.datasets], []) - eval_results = self.datasets[0].evaluate( - results, logger=logger, **kwargs) - self.datasets[0].data_infos = original_data_infos - return eval_results - - -@DATASETS.register_module() -class RepeatDataset(object): - """A wrapper of repeated dataset. - - The length of repeated dataset will be `times` larger than the original - dataset. This is useful when the data loading time is long but the dataset - is small. Using RepeatDataset can reduce the data loading time between - epochs. - - Args: - dataset (:obj:`Dataset`): The dataset to be repeated. - times (int): Repeat times. - """ - - def __init__(self, dataset, times): - self.dataset = dataset - self.times = times - self.CLASSES = dataset.CLASSES - if hasattr(self.dataset, 'flag'): - self.flag = np.tile(self.dataset.flag, times) - - self._ori_len = len(self.dataset) - - def __getitem__(self, idx): - return self.dataset[idx % self._ori_len] - - def get_cat_ids(self, idx): - """Get category ids of repeat dataset by index. - - Args: - idx (int): Index of data. - - Returns: - list[int]: All categories in the image of specified index. - """ - - return self.dataset.get_cat_ids(idx % self._ori_len) - - def __len__(self): - """Length after repetition.""" - return self.times * self._ori_len - - -# Modified from https://github.com/facebookresearch/detectron2/blob/41d475b75a230221e21d9cac5d69655e3415e3a4/detectron2/data/samplers/distributed_sampler.py#L57 # noqa -@DATASETS.register_module() -class ClassBalancedDataset(object): - """A wrapper of repeated dataset with repeat factor. - - Suitable for training on class imbalanced datasets like LVIS. Following - the sampling strategy in the `paper `_, - in each epoch, an image may appear multiple times based on its - "repeat factor". - The repeat factor for an image is a function of the frequency the rarest - category labeled in that image. The "frequency of category c" in [0, 1] - is defined by the fraction of images in the training set (without repeats) - in which category c appears. - The dataset needs to instantiate :func:`self.get_cat_ids` to support - ClassBalancedDataset. - - The repeat factor is computed as followed. - - 1. For each category c, compute the fraction # of images - that contain it: :math:`f(c)` - 2. For each category c, compute the category-level repeat factor: - :math:`r(c) = max(1, sqrt(t/f(c)))` - 3. For each image I, compute the image-level repeat factor: - :math:`r(I) = max_{c in I} r(c)` - - Args: - dataset (:obj:`CustomDataset`): The dataset to be repeated. - oversample_thr (float): frequency threshold below which data is - repeated. For categories with ``f_c >= oversample_thr``, there is - no oversampling. For categories with ``f_c < oversample_thr``, the - degree of oversampling following the square-root inverse frequency - heuristic above. - filter_empty_gt (bool, optional): If set true, images without bounding - boxes will not be oversampled. Otherwise, they will be categorized - as the pure background class and involved into the oversampling. - Default: True. 
- """ - - def __init__(self, dataset, oversample_thr, filter_empty_gt=True): - self.dataset = dataset - self.oversample_thr = oversample_thr - self.filter_empty_gt = filter_empty_gt - self.CLASSES = dataset.CLASSES - - repeat_factors = self._get_repeat_factors(dataset, oversample_thr) - repeat_indices = [] - for dataset_idx, repeat_factor in enumerate(repeat_factors): - repeat_indices.extend([dataset_idx] * math.ceil(repeat_factor)) - self.repeat_indices = repeat_indices - - flags = [] - if hasattr(self.dataset, 'flag'): - for flag, repeat_factor in zip(self.dataset.flag, repeat_factors): - flags.extend([flag] * int(math.ceil(repeat_factor))) - assert len(flags) == len(repeat_indices) - self.flag = np.asarray(flags, dtype=np.uint8) - - def _get_repeat_factors(self, dataset, repeat_thr): - """Get repeat factor for each images in the dataset. - - Args: - dataset (:obj:`CustomDataset`): The dataset - repeat_thr (float): The threshold of frequency. If an image - contains the categories whose frequency below the threshold, - it would be repeated. - - Returns: - list[float]: The repeat factors for each images in the dataset. - """ - - # 1. For each category c, compute the fraction # of images - # that contain it: f(c) - category_freq = defaultdict(int) - num_images = len(dataset) - for idx in range(num_images): - cat_ids = set(self.dataset.get_cat_ids(idx)) - if len(cat_ids) == 0 and not self.filter_empty_gt: - cat_ids = set([len(self.CLASSES)]) - for cat_id in cat_ids: - category_freq[cat_id] += 1 - for k, v in category_freq.items(): - category_freq[k] = v / num_images - - # 2. For each category c, compute the category-level repeat factor: - # r(c) = max(1, sqrt(t/f(c))) - category_repeat = { - cat_id: max(1.0, math.sqrt(repeat_thr / cat_freq)) - for cat_id, cat_freq in category_freq.items() - } - - # 3. 
For each image I, compute the image-level repeat factor: - # r(I) = max_{c in I} r(c) - repeat_factors = [] - for idx in range(num_images): - cat_ids = set(self.dataset.get_cat_ids(idx)) - if len(cat_ids) == 0 and not self.filter_empty_gt: - cat_ids = set([len(self.CLASSES)]) - repeat_factor = 1 - if len(cat_ids) > 0: - repeat_factor = max( - {category_repeat[cat_id] - for cat_id in cat_ids}) - repeat_factors.append(repeat_factor) - - return repeat_factors - - def __getitem__(self, idx): - ori_index = self.repeat_indices[idx] - return self.dataset[ori_index] - - def __len__(self): - """Length after repetition.""" - return len(self.repeat_indices) diff --git a/spaces/Robert001/UniControl-Demo/annotator/uniformer/mmdet/models/backbones/swin_transformer.py b/spaces/Robert001/UniControl-Demo/annotator/uniformer/mmdet/models/backbones/swin_transformer.py deleted file mode 100644 index bb41850d8480a08a6a7698bf6129ffd1ab239681..0000000000000000000000000000000000000000 --- a/spaces/Robert001/UniControl-Demo/annotator/uniformer/mmdet/models/backbones/swin_transformer.py +++ /dev/null @@ -1,630 +0,0 @@ -# -------------------------------------------------------- -# Swin Transformer -# Copyright (c) 2021 Microsoft -# Licensed under The MIT License [see LICENSE for details] -# Written by Ze Liu, Yutong Lin, Yixuan Wei -# -------------------------------------------------------- - -import torch -import torch.nn as nn -import torch.nn.functional as F -import torch.utils.checkpoint as checkpoint -import numpy as np -from timm.models.layers import DropPath, to_2tuple, trunc_normal_ - -from mmcv_custom import load_checkpoint -from mmdet.utils import get_root_logger -from ..builder import BACKBONES - - -class Mlp(nn.Module): - """ Multilayer perceptron.""" - - def __init__(self, in_features, hidden_features=None, out_features=None, act_layer=nn.GELU, drop=0.): - super().__init__() - out_features = out_features or in_features - hidden_features = hidden_features or in_features - self.fc1 = nn.Linear(in_features, hidden_features) - self.act = act_layer() - self.fc2 = nn.Linear(hidden_features, out_features) - self.drop = nn.Dropout(drop) - - def forward(self, x): - x = self.fc1(x) - x = self.act(x) - x = self.drop(x) - x = self.fc2(x) - x = self.drop(x) - return x - - -def window_partition(x, window_size): - """ - Args: - x: (B, H, W, C) - window_size (int): window size - - Returns: - windows: (num_windows*B, window_size, window_size, C) - """ - B, H, W, C = x.shape - x = x.view(B, H // window_size, window_size, W // window_size, window_size, C) - windows = x.permute(0, 1, 3, 2, 4, 5).contiguous().view(-1, window_size, window_size, C) - return windows - - -def window_reverse(windows, window_size, H, W): - """ - Args: - windows: (num_windows*B, window_size, window_size, C) - window_size (int): Window size - H (int): Height of image - W (int): Width of image - - Returns: - x: (B, H, W, C) - """ - B = int(windows.shape[0] / (H * W / window_size / window_size)) - x = windows.view(B, H // window_size, W // window_size, window_size, window_size, -1) - x = x.permute(0, 1, 3, 2, 4, 5).contiguous().view(B, H, W, -1) - return x - - -class WindowAttention(nn.Module): - """ Window based multi-head self attention (W-MSA) module with relative position bias. - It supports both of shifted and non-shifted window. - - Args: - dim (int): Number of input channels. - window_size (tuple[int]): The height and width of the window. - num_heads (int): Number of attention heads. 
- qkv_bias (bool, optional): If True, add a learnable bias to query, key, value. Default: True - qk_scale (float | None, optional): Override default qk scale of head_dim ** -0.5 if set - attn_drop (float, optional): Dropout ratio of attention weight. Default: 0.0 - proj_drop (float, optional): Dropout ratio of output. Default: 0.0 - """ - - def __init__(self, dim, window_size, num_heads, qkv_bias=True, qk_scale=None, attn_drop=0., proj_drop=0.): - - super().__init__() - self.dim = dim - self.window_size = window_size # Wh, Ww - self.num_heads = num_heads - head_dim = dim // num_heads - self.scale = qk_scale or head_dim ** -0.5 - - # define a parameter table of relative position bias - self.relative_position_bias_table = nn.Parameter( - torch.zeros((2 * window_size[0] - 1) * (2 * window_size[1] - 1), num_heads)) # 2*Wh-1 * 2*Ww-1, nH - - # get pair-wise relative position index for each token inside the window - coords_h = torch.arange(self.window_size[0]) - coords_w = torch.arange(self.window_size[1]) - coords = torch.stack(torch.meshgrid([coords_h, coords_w])) # 2, Wh, Ww - coords_flatten = torch.flatten(coords, 1) # 2, Wh*Ww - relative_coords = coords_flatten[:, :, None] - coords_flatten[:, None, :] # 2, Wh*Ww, Wh*Ww - relative_coords = relative_coords.permute(1, 2, 0).contiguous() # Wh*Ww, Wh*Ww, 2 - relative_coords[:, :, 0] += self.window_size[0] - 1 # shift to start from 0 - relative_coords[:, :, 1] += self.window_size[1] - 1 - relative_coords[:, :, 0] *= 2 * self.window_size[1] - 1 - relative_position_index = relative_coords.sum(-1) # Wh*Ww, Wh*Ww - self.register_buffer("relative_position_index", relative_position_index) - - self.qkv = nn.Linear(dim, dim * 3, bias=qkv_bias) - self.attn_drop = nn.Dropout(attn_drop) - self.proj = nn.Linear(dim, dim) - self.proj_drop = nn.Dropout(proj_drop) - - trunc_normal_(self.relative_position_bias_table, std=.02) - self.softmax = nn.Softmax(dim=-1) - - def forward(self, x, mask=None): - """ Forward function. - - Args: - x: input features with shape of (num_windows*B, N, C) - mask: (0/-inf) mask with shape of (num_windows, Wh*Ww, Wh*Ww) or None - """ - B_, N, C = x.shape - qkv = self.qkv(x).reshape(B_, N, 3, self.num_heads, C // self.num_heads).permute(2, 0, 3, 1, 4) - q, k, v = qkv[0], qkv[1], qkv[2] # make torchscript happy (cannot use tensor as tuple) - - q = q * self.scale - attn = (q @ k.transpose(-2, -1)) - - relative_position_bias = self.relative_position_bias_table[self.relative_position_index.view(-1)].view( - self.window_size[0] * self.window_size[1], self.window_size[0] * self.window_size[1], -1) # Wh*Ww,Wh*Ww,nH - relative_position_bias = relative_position_bias.permute(2, 0, 1).contiguous() # nH, Wh*Ww, Wh*Ww - attn = attn + relative_position_bias.unsqueeze(0) - - if mask is not None: - nW = mask.shape[0] - attn = attn.view(B_ // nW, nW, self.num_heads, N, N) + mask.unsqueeze(1).unsqueeze(0) - attn = attn.view(-1, self.num_heads, N, N) - attn = self.softmax(attn) - else: - attn = self.softmax(attn) - - attn = self.attn_drop(attn) - - x = (attn @ v).transpose(1, 2).reshape(B_, N, C) - x = self.proj(x) - x = self.proj_drop(x) - return x - - -class SwinTransformerBlock(nn.Module): - """ Swin Transformer Block. - - Args: - dim (int): Number of input channels. - num_heads (int): Number of attention heads. - window_size (int): Window size. - shift_size (int): Shift size for SW-MSA. - mlp_ratio (float): Ratio of mlp hidden dim to embedding dim. - qkv_bias (bool, optional): If True, add a learnable bias to query, key, value. 
Default: True
-        qk_scale (float | None, optional): Override default qk scale of head_dim ** -0.5 if set.
-        drop (float, optional): Dropout rate. Default: 0.0
-        attn_drop (float, optional): Attention dropout rate. Default: 0.0
-        drop_path (float, optional): Stochastic depth rate. Default: 0.0
-        act_layer (nn.Module, optional): Activation layer. Default: nn.GELU
-        norm_layer (nn.Module, optional): Normalization layer. Default: nn.LayerNorm
-    """
-
-    def __init__(self, dim, num_heads, window_size=7, shift_size=0,
-                 mlp_ratio=4., qkv_bias=True, qk_scale=None, drop=0., attn_drop=0., drop_path=0.,
-                 act_layer=nn.GELU, norm_layer=nn.LayerNorm):
-        super().__init__()
-        self.dim = dim
-        self.num_heads = num_heads
-        self.window_size = window_size
-        self.shift_size = shift_size
-        self.mlp_ratio = mlp_ratio
-        assert 0 <= self.shift_size < self.window_size, "shift_size must be in 0-window_size"
-
-        self.norm1 = norm_layer(dim)
-        self.attn = WindowAttention(
-            dim, window_size=to_2tuple(self.window_size), num_heads=num_heads,
-            qkv_bias=qkv_bias, qk_scale=qk_scale, attn_drop=attn_drop, proj_drop=drop)
-
-        self.drop_path = DropPath(drop_path) if drop_path > 0. else nn.Identity()
-        self.norm2 = norm_layer(dim)
-        mlp_hidden_dim = int(dim * mlp_ratio)
-        self.mlp = Mlp(in_features=dim, hidden_features=mlp_hidden_dim, act_layer=act_layer, drop=drop)
-
-        self.H = None
-        self.W = None
-
-    def forward(self, x, mask_matrix):
-        """ Forward function.
-
-        Args:
-            x: Input feature, tensor size (B, H*W, C).
-            H, W: Spatial resolution of the input feature.
-            mask_matrix: Attention mask for cyclic shift.
-        """
-        B, L, C = x.shape
-        H, W = self.H, self.W
-        assert L == H * W, "input feature has wrong size"
-
-        shortcut = x
-        x = self.norm1(x)
-        x = x.view(B, H, W, C)
-
-        # pad feature maps to multiples of window size
-        pad_l = pad_t = 0
-        pad_r = (self.window_size - W % self.window_size) % self.window_size
-        pad_b = (self.window_size - H % self.window_size) % self.window_size
-        x = F.pad(x, (0, 0, pad_l, pad_r, pad_t, pad_b))
-        _, Hp, Wp, _ = x.shape
-
-        # cyclic shift
-        if self.shift_size > 0:
-            shifted_x = torch.roll(x, shifts=(-self.shift_size, -self.shift_size), dims=(1, 2))
-            attn_mask = mask_matrix
-        else:
-            shifted_x = x
-            attn_mask = None
-
-        # partition windows
-        x_windows = window_partition(shifted_x, self.window_size)  # nW*B, window_size, window_size, C
-        x_windows = x_windows.view(-1, self.window_size * self.window_size, C)  # nW*B, window_size*window_size, C
-
-        # W-MSA/SW-MSA
-        attn_windows = self.attn(x_windows, mask=attn_mask)  # nW*B, window_size*window_size, C
-
-        # merge windows
-        attn_windows = attn_windows.view(-1, self.window_size, self.window_size, C)
-        shifted_x = window_reverse(attn_windows, self.window_size, Hp, Wp)  # B H' W' C
-
-        # reverse cyclic shift
-        if self.shift_size > 0:
-            x = torch.roll(shifted_x, shifts=(self.shift_size, self.shift_size), dims=(1, 2))
-        else:
-            x = shifted_x
-
-        if pad_r > 0 or pad_b > 0:
-            x = x[:, :H, :W, :].contiguous()
-
-        x = x.view(B, H * W, C)
-
-        # FFN
-        x = shortcut + self.drop_path(x)
-        x = x + self.drop_path(self.mlp(self.norm2(x)))
-
-        return x
-
-
-class PatchMerging(nn.Module):
-    """ Patch Merging Layer
-
-    Args:
-        dim (int): Number of input channels.
-        norm_layer (nn.Module, optional): Normalization layer.
Default: nn.LayerNorm - """ - def __init__(self, dim, norm_layer=nn.LayerNorm): - super().__init__() - self.dim = dim - self.reduction = nn.Linear(4 * dim, 2 * dim, bias=False) - self.norm = norm_layer(4 * dim) - - def forward(self, x, H, W): - """ Forward function. - - Args: - x: Input feature, tensor size (B, H*W, C). - H, W: Spatial resolution of the input feature. - """ - B, L, C = x.shape - assert L == H * W, "input feature has wrong size" - - x = x.view(B, H, W, C) - - # padding - pad_input = (H % 2 == 1) or (W % 2 == 1) - if pad_input: - x = F.pad(x, (0, 0, 0, W % 2, 0, H % 2)) - - x0 = x[:, 0::2, 0::2, :] # B H/2 W/2 C - x1 = x[:, 1::2, 0::2, :] # B H/2 W/2 C - x2 = x[:, 0::2, 1::2, :] # B H/2 W/2 C - x3 = x[:, 1::2, 1::2, :] # B H/2 W/2 C - x = torch.cat([x0, x1, x2, x3], -1) # B H/2 W/2 4*C - x = x.view(B, -1, 4 * C) # B H/2*W/2 4*C - - x = self.norm(x) - x = self.reduction(x) - - return x - - -class BasicLayer(nn.Module): - """ A basic Swin Transformer layer for one stage. - - Args: - dim (int): Number of feature channels - depth (int): Depths of this stage. - num_heads (int): Number of attention head. - window_size (int): Local window size. Default: 7. - mlp_ratio (float): Ratio of mlp hidden dim to embedding dim. Default: 4. - qkv_bias (bool, optional): If True, add a learnable bias to query, key, value. Default: True - qk_scale (float | None, optional): Override default qk scale of head_dim ** -0.5 if set. - drop (float, optional): Dropout rate. Default: 0.0 - attn_drop (float, optional): Attention dropout rate. Default: 0.0 - drop_path (float | tuple[float], optional): Stochastic depth rate. Default: 0.0 - norm_layer (nn.Module, optional): Normalization layer. Default: nn.LayerNorm - downsample (nn.Module | None, optional): Downsample layer at the end of the layer. Default: None - use_checkpoint (bool): Whether to use checkpointing to save memory. Default: False. - """ - - def __init__(self, - dim, - depth, - num_heads, - window_size=7, - mlp_ratio=4., - qkv_bias=True, - qk_scale=None, - drop=0., - attn_drop=0., - drop_path=0., - norm_layer=nn.LayerNorm, - downsample=None, - use_checkpoint=False): - super().__init__() - self.window_size = window_size - self.shift_size = window_size // 2 - self.depth = depth - self.use_checkpoint = use_checkpoint - - # build blocks - self.blocks = nn.ModuleList([ - SwinTransformerBlock( - dim=dim, - num_heads=num_heads, - window_size=window_size, - shift_size=0 if (i % 2 == 0) else window_size // 2, - mlp_ratio=mlp_ratio, - qkv_bias=qkv_bias, - qk_scale=qk_scale, - drop=drop, - attn_drop=attn_drop, - drop_path=drop_path[i] if isinstance(drop_path, list) else drop_path, - norm_layer=norm_layer) - for i in range(depth)]) - - # patch merging layer - if downsample is not None: - self.downsample = downsample(dim=dim, norm_layer=norm_layer) - else: - self.downsample = None - - def forward(self, x, H, W): - """ Forward function. - - Args: - x: Input feature, tensor size (B, H*W, C). - H, W: Spatial resolution of the input feature. 
-        """
-
-        # calculate attention mask for SW-MSA
-        Hp = int(np.ceil(H / self.window_size)) * self.window_size
-        Wp = int(np.ceil(W / self.window_size)) * self.window_size
-        img_mask = torch.zeros((1, Hp, Wp, 1), device=x.device)  # 1 Hp Wp 1
-        h_slices = (slice(0, -self.window_size),
-                    slice(-self.window_size, -self.shift_size),
-                    slice(-self.shift_size, None))
-        w_slices = (slice(0, -self.window_size),
-                    slice(-self.window_size, -self.shift_size),
-                    slice(-self.shift_size, None))
-        cnt = 0
-        for h in h_slices:
-            for w in w_slices:
-                img_mask[:, h, w, :] = cnt
-                cnt += 1
-
-        mask_windows = window_partition(img_mask, self.window_size)  # nW, window_size, window_size, 1
-        mask_windows = mask_windows.view(-1, self.window_size * self.window_size)
-        attn_mask = mask_windows.unsqueeze(1) - mask_windows.unsqueeze(2)
-        attn_mask = attn_mask.masked_fill(attn_mask != 0, float(-100.0)).masked_fill(attn_mask == 0, float(0.0))
-
-        for blk in self.blocks:
-            blk.H, blk.W = H, W
-            if self.use_checkpoint:
-                x = checkpoint.checkpoint(blk, x, attn_mask)
-            else:
-                x = blk(x, attn_mask)
-        if self.downsample is not None:
-            x_down = self.downsample(x, H, W)
-            Wh, Ww = (H + 1) // 2, (W + 1) // 2
-            return x, H, W, x_down, Wh, Ww
-        else:
-            return x, H, W, x, H, W
-
-
-class PatchEmbed(nn.Module):
-    """ Image to Patch Embedding
-
-    Args:
-        patch_size (int): Patch token size. Default: 4.
-        in_chans (int): Number of input image channels. Default: 3.
-        embed_dim (int): Number of linear projection output channels. Default: 96.
-        norm_layer (nn.Module, optional): Normalization layer. Default: None
-    """
-
-    def __init__(self, patch_size=4, in_chans=3, embed_dim=96, norm_layer=None):
-        super().__init__()
-        patch_size = to_2tuple(patch_size)
-        self.patch_size = patch_size
-
-        self.in_chans = in_chans
-        self.embed_dim = embed_dim
-
-        self.proj = nn.Conv2d(in_chans, embed_dim, kernel_size=patch_size, stride=patch_size)
-        if norm_layer is not None:
-            self.norm = norm_layer(embed_dim)
-        else:
-            self.norm = None
-
-    def forward(self, x):
-        """Forward function."""
-        # padding
-        _, _, H, W = x.size()
-        if W % self.patch_size[1] != 0:
-            x = F.pad(x, (0, self.patch_size[1] - W % self.patch_size[1]))
-        if H % self.patch_size[0] != 0:
-            x = F.pad(x, (0, 0, 0, self.patch_size[0] - H % self.patch_size[0]))
-
-        x = self.proj(x)  # B C Wh Ww
-        if self.norm is not None:
-            Wh, Ww = x.size(2), x.size(3)
-            x = x.flatten(2).transpose(1, 2)
-            x = self.norm(x)
-            x = x.transpose(1, 2).view(-1, self.embed_dim, Wh, Ww)
-
-        return x
-
-
-@BACKBONES.register_module()
-class SwinTransformer(nn.Module):
-    """ Swin Transformer backbone.
-        A PyTorch impl of: `Swin Transformer: Hierarchical Vision Transformer using Shifted Windows`  -
-          https://arxiv.org/pdf/2103.14030
-
-    Args:
-        pretrain_img_size (int): Input image size for training the pretrained model,
-            used in absolute position embedding. Default 224.
-        patch_size (int | tuple(int)): Patch size. Default: 4.
-        in_chans (int): Number of input image channels. Default: 3.
-        embed_dim (int): Number of linear projection output channels. Default: 96.
-        depths (tuple[int]): Depths of each Swin Transformer stage.
-        num_heads (tuple[int]): Number of attention heads of each stage.
-        window_size (int): Window size. Default: 7.
-        mlp_ratio (float): Ratio of mlp hidden dim to embedding dim. Default: 4.
-        qkv_bias (bool): If True, add a learnable bias to query, key, value. Default: True
-        qk_scale (float): Override default qk scale of head_dim ** -0.5 if set.
-        drop_rate (float): Dropout rate.
- attn_drop_rate (float): Attention dropout rate. Default: 0. - drop_path_rate (float): Stochastic depth rate. Default: 0.2. - norm_layer (nn.Module): Normalization layer. Default: nn.LayerNorm. - ape (bool): If True, add absolute position embedding to the patch embedding. Default: False. - patch_norm (bool): If True, add normalization after patch embedding. Default: True. - out_indices (Sequence[int]): Output from which stages. - frozen_stages (int): Stages to be frozen (stop grad and set eval mode). - -1 means not freezing any parameters. - use_checkpoint (bool): Whether to use checkpointing to save memory. Default: False. - """ - - def __init__(self, - pretrain_img_size=224, - patch_size=4, - in_chans=3, - embed_dim=96, - depths=[2, 2, 6, 2], - num_heads=[3, 6, 12, 24], - window_size=7, - mlp_ratio=4., - qkv_bias=True, - qk_scale=None, - drop_rate=0., - attn_drop_rate=0., - drop_path_rate=0.2, - norm_layer=nn.LayerNorm, - ape=False, - patch_norm=True, - out_indices=(0, 1, 2, 3), - frozen_stages=-1, - use_checkpoint=False): - super().__init__() - - self.pretrain_img_size = pretrain_img_size - self.num_layers = len(depths) - self.embed_dim = embed_dim - self.ape = ape - self.patch_norm = patch_norm - self.out_indices = out_indices - self.frozen_stages = frozen_stages - - # split image into non-overlapping patches - self.patch_embed = PatchEmbed( - patch_size=patch_size, in_chans=in_chans, embed_dim=embed_dim, - norm_layer=norm_layer if self.patch_norm else None) - - # absolute position embedding - if self.ape: - pretrain_img_size = to_2tuple(pretrain_img_size) - patch_size = to_2tuple(patch_size) - patches_resolution = [pretrain_img_size[0] // patch_size[0], pretrain_img_size[1] // patch_size[1]] - - self.absolute_pos_embed = nn.Parameter(torch.zeros(1, embed_dim, patches_resolution[0], patches_resolution[1])) - trunc_normal_(self.absolute_pos_embed, std=.02) - - self.pos_drop = nn.Dropout(p=drop_rate) - - # stochastic depth - dpr = [x.item() for x in torch.linspace(0, drop_path_rate, sum(depths))] # stochastic depth decay rule - - # build layers - self.layers = nn.ModuleList() - for i_layer in range(self.num_layers): - layer = BasicLayer( - dim=int(embed_dim * 2 ** i_layer), - depth=depths[i_layer], - num_heads=num_heads[i_layer], - window_size=window_size, - mlp_ratio=mlp_ratio, - qkv_bias=qkv_bias, - qk_scale=qk_scale, - drop=drop_rate, - attn_drop=attn_drop_rate, - drop_path=dpr[sum(depths[:i_layer]):sum(depths[:i_layer + 1])], - norm_layer=norm_layer, - downsample=PatchMerging if (i_layer < self.num_layers - 1) else None, - use_checkpoint=use_checkpoint) - self.layers.append(layer) - - num_features = [int(embed_dim * 2 ** i) for i in range(self.num_layers)] - self.num_features = num_features - - # add a norm layer for each output - for i_layer in out_indices: - layer = norm_layer(num_features[i_layer]) - layer_name = f'norm{i_layer}' - self.add_module(layer_name, layer) - - self._freeze_stages() - - def _freeze_stages(self): - if self.frozen_stages >= 0: - self.patch_embed.eval() - for param in self.patch_embed.parameters(): - param.requires_grad = False - - if self.frozen_stages >= 1 and self.ape: - self.absolute_pos_embed.requires_grad = False - - if self.frozen_stages >= 2: - self.pos_drop.eval() - for i in range(0, self.frozen_stages - 1): - m = self.layers[i] - m.eval() - for param in m.parameters(): - param.requires_grad = False - - def init_weights(self, pretrained=None): - """Initialize the weights in backbone. 
-
-        Args:
-            pretrained (str, optional): Path to pre-trained weights.
-                Defaults to None.
-        """
-
-        def _init_weights(m):
-            if isinstance(m, nn.Linear):
-                trunc_normal_(m.weight, std=.02)
-                if isinstance(m, nn.Linear) and m.bias is not None:
-                    nn.init.constant_(m.bias, 0)
-            elif isinstance(m, nn.LayerNorm):
-                nn.init.constant_(m.bias, 0)
-                nn.init.constant_(m.weight, 1.0)
-
-        if isinstance(pretrained, str):
-            self.apply(_init_weights)
-            logger = get_root_logger()
-            load_checkpoint(self, pretrained, strict=False, logger=logger)
-        elif pretrained is None:
-            self.apply(_init_weights)
-        else:
-            raise TypeError('pretrained must be a str or None')
-
-    def forward(self, x):
-        """Forward function."""
-        x = self.patch_embed(x)
-
-        Wh, Ww = x.size(2), x.size(3)
-        if self.ape:
-            # interpolate the position embedding to the corresponding size
-            absolute_pos_embed = F.interpolate(self.absolute_pos_embed, size=(Wh, Ww), mode='bicubic')
-            x = (x + absolute_pos_embed).flatten(2).transpose(1, 2)  # B Wh*Ww C
-        else:
-            x = x.flatten(2).transpose(1, 2)
-        x = self.pos_drop(x)
-
-        outs = []
-        for i in range(self.num_layers):
-            layer = self.layers[i]
-            x_out, H, W, x, Wh, Ww = layer(x, Wh, Ww)
-
-            if i in self.out_indices:
-                norm_layer = getattr(self, f'norm{i}')
-                x_out = norm_layer(x_out)
-
-                out = x_out.view(-1, H, W, self.num_features[i]).permute(0, 3, 1, 2).contiguous()
-                outs.append(out)
-
-        return tuple(outs)
-
-    def train(self, mode=True):
-        """Convert the model into training mode while keeping layers frozen."""
-        super(SwinTransformer, self).train(mode)
-        self._freeze_stages()
diff --git a/spaces/Robert001/UniControl-Demo/annotator/uniformer/mmdet_null/models/dense_heads/embedding_rpn_head.py b/spaces/Robert001/UniControl-Demo/annotator/uniformer/mmdet_null/models/dense_heads/embedding_rpn_head.py
deleted file mode 100644
index 200ce8d20c5503f98c5c21f30bb9d00437e25f34..0000000000000000000000000000000000000000
--- a/spaces/Robert001/UniControl-Demo/annotator/uniformer/mmdet_null/models/dense_heads/embedding_rpn_head.py
+++ /dev/null
@@ -1,100 +0,0 @@
-import torch
-import torch.nn as nn
-
-from mmdet.models.builder import HEADS
-from ...core import bbox_cxcywh_to_xyxy
-
-
-@HEADS.register_module()
-class EmbeddingRPNHead(nn.Module):
-    """RPNHead in the `Sparse R-CNN `_ .
-
-    Unlike a traditional RPNHead, this module does not need FPN input; it just
-    decodes `init_proposal_bboxes` and expands the first dimension of
-    `init_proposal_bboxes` and `init_proposal_features` to the batch_size.
-
-    Args:
-        num_proposals (int): Number of init_proposals. Default 100.
-        proposal_feature_channel (int): Channel number of
-            init_proposal_feature. Defaults to 256.
-    """
-
-    def __init__(self,
-                 num_proposals=100,
-                 proposal_feature_channel=256,
-                 **kwargs):
-        super(EmbeddingRPNHead, self).__init__()
-        self.num_proposals = num_proposals
-        self.proposal_feature_channel = proposal_feature_channel
-        self._init_layers()
-
-    def _init_layers(self):
-        """Initialize a sparse set of proposal boxes and proposal features."""
-        self.init_proposal_bboxes = nn.Embedding(self.num_proposals, 4)
-        self.init_proposal_features = nn.Embedding(
-            self.num_proposals, self.proposal_feature_channel)
-
-    def init_weights(self):
-        """Initialize the init_proposal_bboxes as normalized.
-
-        [c_x, c_y, w, h], and we initialize it to the size of the entire
-        image.
- """ - nn.init.constant_(self.init_proposal_bboxes.weight[:, :2], 0.5) - nn.init.constant_(self.init_proposal_bboxes.weight[:, 2:], 1) - - def _decode_init_proposals(self, imgs, img_metas): - """Decode init_proposal_bboxes according to the size of images and - expand dimension of init_proposal_features to batch_size. - - Args: - imgs (list[Tensor]): List of FPN features. - img_metas (list[dict]): List of meta-information of - images. Need the img_shape to decode the init_proposals. - - Returns: - Tuple(Tensor): - - - proposals (Tensor): Decoded proposal bboxes, - has shape (batch_size, num_proposals, 4). - - init_proposal_features (Tensor): Expanded proposal - features, has shape - (batch_size, num_proposals, proposal_feature_channel). - - imgs_whwh (Tensor): Tensor with shape - (batch_size, 4), the dimension means - [img_width, img_height, img_width, img_height]. - """ - proposals = self.init_proposal_bboxes.weight.clone() - proposals = bbox_cxcywh_to_xyxy(proposals) - num_imgs = len(imgs[0]) - imgs_whwh = [] - for meta in img_metas: - h, w, _ = meta['img_shape'] - imgs_whwh.append(imgs[0].new_tensor([[w, h, w, h]])) - imgs_whwh = torch.cat(imgs_whwh, dim=0) - imgs_whwh = imgs_whwh[:, None, :] - - # imgs_whwh has shape (batch_size, 1, 4) - # The shape of proposals change from (num_proposals, 4) - # to (batch_size ,num_proposals, 4) - proposals = proposals * imgs_whwh - - init_proposal_features = self.init_proposal_features.weight.clone() - init_proposal_features = init_proposal_features[None].expand( - num_imgs, *init_proposal_features.size()) - return proposals, init_proposal_features, imgs_whwh - - def forward_dummy(self, img, img_metas): - """Dummy forward function. - - Used in flops calculation. - """ - return self._decode_init_proposals(img, img_metas) - - def forward_train(self, img, img_metas): - """Forward function in training stage.""" - return self._decode_init_proposals(img, img_metas) - - def simple_test_rpn(self, img, img_metas): - """Forward function in testing stage.""" - return self._decode_init_proposals(img, img_metas) diff --git a/spaces/Robert001/UniControl-Demo/annotator/uniformer/mmdet_null/models/roi_heads/__init__.py b/spaces/Robert001/UniControl-Demo/annotator/uniformer/mmdet_null/models/roi_heads/__init__.py deleted file mode 100644 index ca0a38ec42cd41fbd97e07589a13d1af46f47f2f..0000000000000000000000000000000000000000 --- a/spaces/Robert001/UniControl-Demo/annotator/uniformer/mmdet_null/models/roi_heads/__init__.py +++ /dev/null @@ -1,34 +0,0 @@ -from .base_roi_head import BaseRoIHead -from .bbox_heads import (BBoxHead, ConvFCBBoxHead, DoubleConvFCBBoxHead, - SCNetBBoxHead, Shared2FCBBoxHead, - Shared4Conv1FCBBoxHead) -from .cascade_roi_head import CascadeRoIHead -from .double_roi_head import DoubleHeadRoIHead -from .dynamic_roi_head import DynamicRoIHead -from .grid_roi_head import GridRoIHead -from .htc_roi_head import HybridTaskCascadeRoIHead -from .mask_heads import (CoarseMaskHead, FCNMaskHead, FeatureRelayHead, - FusedSemanticHead, GlobalContextHead, GridHead, - HTCMaskHead, MaskIoUHead, MaskPointHead, - SCNetMaskHead, SCNetSemanticHead) -from .mask_scoring_roi_head import MaskScoringRoIHead -from .pisa_roi_head import PISARoIHead -from .point_rend_roi_head import PointRendRoIHead -from .roi_extractors import SingleRoIExtractor -from .scnet_roi_head import SCNetRoIHead -from .shared_heads import ResLayer -from .sparse_roi_head import SparseRoIHead -from .standard_roi_head import StandardRoIHead -from .trident_roi_head import TridentRoIHead - -__all__ = [ 
- 'BaseRoIHead', 'CascadeRoIHead', 'DoubleHeadRoIHead', 'MaskScoringRoIHead', - 'HybridTaskCascadeRoIHead', 'GridRoIHead', 'ResLayer', 'BBoxHead', - 'ConvFCBBoxHead', 'Shared2FCBBoxHead', 'StandardRoIHead', - 'Shared4Conv1FCBBoxHead', 'DoubleConvFCBBoxHead', 'FCNMaskHead', - 'HTCMaskHead', 'FusedSemanticHead', 'GridHead', 'MaskIoUHead', - 'SingleRoIExtractor', 'PISARoIHead', 'PointRendRoIHead', 'MaskPointHead', - 'CoarseMaskHead', 'DynamicRoIHead', 'SparseRoIHead', 'TridentRoIHead', - 'SCNetRoIHead', 'SCNetMaskHead', 'SCNetSemanticHead', 'SCNetBBoxHead', - 'FeatureRelayHead', 'GlobalContextHead' -] diff --git a/spaces/Sandiago21/text-to-speech-german/app.py b/spaces/Sandiago21/text-to-speech-german/app.py deleted file mode 100644 index d59abbad6c1f31d9f026f8151e32987e913ec18f..0000000000000000000000000000000000000000 --- a/spaces/Sandiago21/text-to-speech-german/app.py +++ /dev/null @@ -1,107 +0,0 @@ -import gradio as gr -import torch -from datasets import load_dataset -from transformers import pipeline, SpeechT5Processor, SpeechT5HifiGan, SpeechT5ForTextToSpeech - -model_id = "Sandiago21/speecht5_finetuned_mozilla_foundation_common_voice_13_german" # update with your model id -model = SpeechT5ForTextToSpeech.from_pretrained(model_id) -vocoder = SpeechT5HifiGan.from_pretrained("microsoft/speecht5_hifigan") -embeddings_dataset = load_dataset("Matthijs/cmu-arctic-xvectors", split="validation") -speaker_embeddings = torch.tensor(embeddings_dataset[7440]["xvector"]).unsqueeze(0) - -processor = SpeechT5Processor.from_pretrained(model_id) - -replacements = [ - ("Ä", "E"), - ("Æ", "E"), - ("Ç", "C"), - ("É", "E"), - ("Í", "I"), - ("Ó", "O"), - ("Ö", "E"), - ("Ü", "Y"), - ("ß", "S"), - ("à", "a"), - ("á", "a"), - ("ã", "a"), - ("ä", "e"), - ("å", "a"), - ("ë", "e"), - ("í", "i"), - ("ï", "i"), - ("ð", "o"), - ("ñ", "n"), - ("ò", "o"), - ("ó", "o"), - ("ô", "o"), - ("ö", "u"), - ("ú", "u"), - ("ü", "y"), - ("ý", "y"), - ("Ā", "A"), - ("ā", "a"), - ("ă", "a"), - ("ą", "a"), - ("ć", "c"), - ("Č", "C"), - ("č", "c"), - ("ď", "d"), - ("Đ", "D"), - ("ę", "e"), - ("ě", "e"), - ("ğ", "g"), - ("İ", "I"), - ("О", "O"), - ("Ł", "L"), - ("ń", "n"), - ("ň", "n"), - ("Ō", "O"), - ("ō", "o"), - ("ő", "o"), - ("ř", "r"), - ("Ś", "S"), - ("ś", "s"), - ("Ş", "S"), - ("ş", "s"), - ("Š", "S"), - ("š", "s"), - ("ū", "u"), - ("ź", "z"), - ("Ż", "Z"), - ("Ž", "Z"), - ("ǐ", "i"), - ("ǐ", "i"), - ("ș", "s"), - ("ț", "t"), -] - - -title = "Text-to-Speech" -description = """ -Demo for text-to-speech translation in German. 
Demo uses [Sandiago21/speecht5_finetuned_mozilla_foundation_common_voice_13_german](https://huggingface.co/Sandiago21/speecht5_finetuned_mozilla_foundation_common_voice_13_german) checkpoint, which is based on Microsoft's -[SpeechT5 TTS](https://huggingface.co/microsoft/speecht5_tts) model and is fine-tuned in German Audio dataset -![Text-to-Speech (TTS)"](https://geekflare.com/wp-content/uploads/2021/07/texttospeech-1200x385.png "Diagram of Text-to-Speech (TTS)") -""" - - -def cleanup_text(text): - for src, dst in replacements: - text = text.replace(src, dst) - return text - -def synthesize_speech(text): - text = cleanup_text(text) - inputs = processor(text=text, return_tensors="pt") - - speech = model.generate_speech(inputs["input_ids"], speaker_embeddings, vocoder=vocoder) - - return gr.Audio.update(value=(16000, speech.cpu().numpy())) - -syntesize_speech_gradio = gr.Interface( - synthesize_speech, - inputs = gr.Textbox(label="Text", placeholder="Type something here..."), - outputs=gr.Audio(), - examples=["Daher wird die Reform der Europäischen Sozialfondsverordnung, die wir morgen beschließen, auch umgehend in Kraft treten."], - title=title, - description=description, -).launch() diff --git a/spaces/Sapiensia/diffuse-the-rest/svelte.config.js b/spaces/Sapiensia/diffuse-the-rest/svelte.config.js deleted file mode 100644 index 39e5f7c03b9e9e26cf8c88ff11a15a3bb45b1534..0000000000000000000000000000000000000000 --- a/spaces/Sapiensia/diffuse-the-rest/svelte.config.js +++ /dev/null @@ -1,22 +0,0 @@ -import { mdsvex } from 'mdsvex'; -import mdsvexConfig from './mdsvex.config.js'; -import adapter from '@sveltejs/adapter-static'; -import preprocess from 'svelte-preprocess'; - -/** @type {import('@sveltejs/kit').Config} */ -const config = { - extensions: ['.svelte', ...mdsvexConfig.extensions], - - // Consult https://github.com/sveltejs/svelte-preprocess - // for more information about preprocessors - preprocess: [preprocess(), mdsvex(mdsvexConfig)], - - kit: { - adapter: adapter(), - prerender: { - default: true - } - } -}; - -export default config; diff --git a/spaces/Sarath2002/Form_Understanding_using_LayoutLMV3/support.py b/spaces/Sarath2002/Form_Understanding_using_LayoutLMV3/support.py deleted file mode 100644 index d3d1dad6224f605ed96190a4f8b498d772b7eb35..0000000000000000000000000000000000000000 --- a/spaces/Sarath2002/Form_Understanding_using_LayoutLMV3/support.py +++ /dev/null @@ -1,87 +0,0 @@ -from datasets import load_dataset -import numpy as np -from transformers import LayoutLMv3Processor, LayoutLMv3ForTokenClassification -from datasets import load_dataset -from PIL import Image, ImageDraw, ImageFont -import torch - - - -tokenizer = LayoutLMv3Processor.from_pretrained("microsoft/layoutlmv3-base") -model = LayoutLMv3ForTokenClassification.from_pretrained(r"models") -"""device = torch.device("cuda") -model.cuda() -""" -labels = ['O', 'B-HEADER', 'I-HEADER', 'B-QUESTION', 'I-QUESTION', 'B-ANSWER', 'I-ANSWER'] -id2label = {v: k for v, k in enumerate(labels)} -label2color = { - "question": "blue", - "answer": "green", - "header": "orange", - "other": "violet", -} - - -def unnormalize_box(bbox, width, height): - return [ - width * (bbox[0] / 1000), - height * (bbox[1] / 1000), - width * (bbox[2] / 1000), - height * (bbox[3] / 1000), - ] - - -def iob_to_label(label): - label = label[2:] - if not label: - return "other" - return label - - -def processor(image): - image = image.convert("RGB") - width, height = image.size - - - # encode - encoding = tokenizer( - image, truncation=True, 
return_offsets_mapping=True, return_tensors="pt"
-    )
-    offset_mapping = encoding.pop("offset_mapping")
-
-    # keep the inputs on the same device as the model; the original hard-coded
-    # .to('cuda') breaks when the model is left on the CPU
-    encoding = encoding.to(model.device)
-
-    # forward pass
-    outputs = model(**encoding)
-
-    # get predictions
-    predictions = outputs.logits.argmax(-1).squeeze().tolist()
-    token_boxes = encoding.bbox.squeeze().tolist()
-
-
-    # only keep non-subword predictions
-    is_subword = np.array(offset_mapping.squeeze().tolist())[:, 0] != 0
-    true_predictions = [
-        id2label[pred] for idx, pred in enumerate(predictions) if not is_subword[idx]
-    ]
-    true_boxes = [
-        unnormalize_box(box, width, height)
-        for idx, box in enumerate(token_boxes)
-        if not is_subword[idx]
-    ]
-
-
-
-    draw = ImageDraw.Draw(image)
-    font = ImageFont.load_default()
-    for prediction, box in zip(true_predictions, true_boxes):
-        predicted_label = iob_to_label(prediction).lower()
-        draw.rectangle(box, outline=label2color[predicted_label])
-        draw.text(
-            (box[0] + 10, box[1] - 10),
-            text=predicted_label,
-            fill=label2color[predicted_label],
-            font=font,
-        )
-
-    return image
\ No newline at end of file
diff --git a/spaces/Sphila/Sphila-Diffusion/README.md b/spaces/Sphila/Sphila-Diffusion/README.md
deleted file mode 100644
index 01f316015f941676c1a03e25496e55f327c43103..0000000000000000000000000000000000000000
--- a/spaces/Sphila/Sphila-Diffusion/README.md
+++ /dev/null
@@ -1,11 +0,0 @@
----
-title: Sphila-Diffusion
-emoji: 🔥
-colorFrom: purple
-colorTo: gray
-sdk: gradio
-sdk_version: 3.13.2
-app_file: app.py
-pinned: false
-license: openrail
----
diff --git a/spaces/SriniJalasuthram/SJ-06-SL-AI-Image-Music-Video-UI-UX-URL/Article.md b/spaces/SriniJalasuthram/SJ-06-SL-AI-Image-Music-Video-UI-UX-URL/Article.md
deleted file mode 100644
index c7f042e4c9c0f401731f009842a325e2d1386bf5..0000000000000000000000000000000000000000
--- a/spaces/SriniJalasuthram/SJ-06-SL-AI-Image-Music-Video-UI-UX-URL/Article.md
+++ /dev/null
@@ -1,51 +0,0 @@
-
-# Image Generation for Art, Marketing, Ideation, Design, and Use in Business
-
-A number of AI pipeline strategies have evolved on the open market that let you generate images from a combination of image prompts and word prompts. This brief analysis gives an idea of the prompting capabilities, as well as the image rendering techniques, that these strategies use to turn human descriptions of a scene, in images and text, into art.
-
-First, a top-five list of state-of-the-art generators, both free and paid, is worth consideration.
-
-1) Midjourney - a Discord server based chatbot AI that accepts /imagine prompts and can generate multiple images at a time. It is best for parallel creation and for high-accuracy, even photorealistic, results.
-2) Artbreeder - A multi-capability tool that now features a Collager to assist in starting an image composition. By far the most innovative approach, and it does a great job of combining the right partial elements in a scene.
-3) Dreamstudio - A beta art program from Stability AI that uses Stable Diffusion to create highly accurate art and images.
-4) Nightcafe - A credit-based AI creation app that can generate video dives into an AI art piece, producing some of the best experiences in video.
-5) RunwayML - a quintessential tool for processing and morphing audio and video tracks that rivals most high-end video editing tools.
-
-These five tools are among the best cloud-based AI pipeline programs, and they let anyone begin building a portfolio of art with ease.
-
-The prompting capabilities often involve having a set of text-based prompts to get started. Most tools also accept a starter image as an example of what you would like to create.
-
-URL Links:
-1) Collager: https://www.artbreeder.com/beta/collage
-2) NightCafe: https://creator.nightcafe.studio/explore
-3) Midjourney: https://www.midjourney.com/app/users/779773261440614430/
-4) Dreamstudio: https://beta.dreamstudio.ai/dream
-5) RunwayML: https://app.runwayml.com/
-
-## Getting Started and Organizing Your AI Pipeline and Process
-
-Any great strategy has a number of steps that combine all the capabilities at your disposal. It is useful to note how you can easily fit these together into a process that works for you.
-
-The techniques worth noting are listed below. Considering how you will use them will make your pipeline easier and more automated, letting you spend the majority of your time curating what you have made and ideating on what you want to create next.
-
-1) Source materials: Since prompting requires text, and text examples can quickly help you compose good input, it is worth documenting some effective prompts. Nightcafe, with its integration into email, sends you a copy of your creation plus the prompting text, so one option is to use your email account to keep a record of which prompts work for which outputs.
-2) Source materials: Discord, since it is a public chat format, lets you easily see what others are using for prompts in bulk. There are a number of chat channels designed for people new to the platform, and you can often copy and paste when you see very effective prompts for the kind of material you are looking for.
-3) Source materials: Collager is unique in its ability to add parts additively and then dial in the percentage of AI you would like applied. This allows you to add a few image elements that help start your generation.
-4) Source materials: Since images and prompts are going to be your mainstay inputs, it is worth adopting an open standard for storing and retrieving them from anywhere. Github is a good place, since markdown supports text in table or list format and can reference uploaded images. It is also a good form for portability, since you can later fork and download your repository with a few clicks from anywhere.
-5) Source materials: Google drive is integrated into the Artbreeder Collager workflow, which lets you easily expand your work and even compose albums of the pieces you like into Google photo albums. The portfolios you save on different sites offer different degrees of ease when aggregating your collections. Collager, for instance, allows right-click save for instant saving of your creation. Dreamstudio features a history. Midjourney features a profile site where you can store and review creations and even trigger Upscales, which are important for getting the highest-resolution output of your creations.
-
-## Social Media integration
-
-Depending on whether your exports need to be "safe for work," it is important to know which social media outlets you can integrate. Cloud-based interactions are the key to building an audience if you want to scale and share your process with others.
-
-The key social media outlets supported by these tools are listed below, sorted with public, open platforms first:
-
-1) Github - Github is open at most companies and allows creation of a free space to share your content.
-2) LinkedIn - LinkedIn is acceptable to use at nearly every company.
-3) Twitter - Twitter is supported as a social media outlet at most companies, though it may be subject to security restrictions that limit posting while still allowing read access.
-4) Facebook - Meta's Facebook is a good outlet since it allows creation of large portfolios of your images along with stories. This venue, however, is locked down at many organizations.
-5) Instagram - Instagram is supported as an output channel by many tools, yet it has decreased in popularity due to the high frequency of ads and pay-for-likes models. While it can still be one of the best places for domain-specific arrangements of images, it is likely locked down in most secure organizations.
-6) Youtube - For video uploads with automated captioning and long-term storage of short- and long-form video, this is essential for any creation you compose as video. It is also useful to review and compose playlists there that speed up your learning; spend some time at "Youtube university" and keep a record of keyword searches along with your playlists to accelerate learning.
-7) Gmail - With the ability to move email in and out, it is useful to create and wrap up details within email. Most email policies come with a content limitation (for example, no files larger than 25 MB). For this reason, get used to creating project wrap-up archives with WinZip or other compression software. With the convenience of keyword searching, you can usually use this as a base.
-8) Last, Huggingface deserves a mention. Like Github, as you become more sophisticated in your public open-source capabilities, HuggingFace lets you wrap up your work using one of three software development kits, namely Gradio, Streamlit, and HTML5, each with unique AI and UI integration components and features; a minimal Gradio sketch follows below. If you want to create your own AI pipelines, this one also has the open-source code and models ready to go to help you on your journey.
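-
-As a rough illustration of option 8, the sketch below shows the skeleton of a Space built with the Gradio SDK. The `generate` function, its body, and the title are hypothetical placeholders for whatever model you wire in; only the Gradio API usage itself is standard.
-
-```python
-import gradio as gr
-
-def generate(prompt):
-    # Hypothetical placeholder: call your image model here and return a PIL image.
-    raise NotImplementedError("wire in your own model call")
-
-demo = gr.Interface(
-    generate,
-    inputs=gr.Textbox(label="Prompt", placeholder="Describe the scene..."),
-    outputs=gr.Image(label="Result"),
-    title="My AI Art Space",  # hypothetical title
-)
-demo.launch()
-```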
- diff --git a/spaces/SungBeom/chatwine-korean/.venv/Lib/site-packages/IPython/core/tests/test_extension.py b/spaces/SungBeom/chatwine-korean/.venv/Lib/site-packages/IPython/core/tests/test_extension.py deleted file mode 100644 index 24ecf7e97e3e56ea51327cc4704ff1fa749c15aa..0000000000000000000000000000000000000000 --- a/spaces/SungBeom/chatwine-korean/.venv/Lib/site-packages/IPython/core/tests/test_extension.py +++ /dev/null @@ -1,95 +0,0 @@ -import os.path - -from tempfile import TemporaryDirectory - -import IPython.testing.tools as tt -from IPython.utils.syspathcontext import prepended_to_syspath - -ext1_content = """ -def load_ipython_extension(ip): - print("Running ext1 load") - -def unload_ipython_extension(ip): - print("Running ext1 unload") -""" - -ext2_content = """ -def load_ipython_extension(ip): - print("Running ext2 load") -""" - -ext3_content = """ -def load_ipython_extension(ip): - ip2 = get_ipython() - print(ip is ip2) -""" - -def test_extension_loading(): - em = get_ipython().extension_manager - with TemporaryDirectory() as td: - ext1 = os.path.join(td, "ext1.py") - with open(ext1, "w", encoding="utf-8") as f: - f.write(ext1_content) - - ext2 = os.path.join(td, "ext2.py") - with open(ext2, "w", encoding="utf-8") as f: - f.write(ext2_content) - - with prepended_to_syspath(td): - assert 'ext1' not in em.loaded - assert 'ext2' not in em.loaded - - # Load extension - with tt.AssertPrints("Running ext1 load"): - assert em.load_extension('ext1') is None - assert 'ext1' in em.loaded - - # Should refuse to load it again - with tt.AssertNotPrints("Running ext1 load"): - assert em.load_extension('ext1') == 'already loaded' - - # Reload - with tt.AssertPrints("Running ext1 unload"): - with tt.AssertPrints("Running ext1 load", suppress=False): - em.reload_extension('ext1') - - # Unload - with tt.AssertPrints("Running ext1 unload"): - assert em.unload_extension('ext1') is None - - # Can't unload again - with tt.AssertNotPrints("Running ext1 unload"): - assert em.unload_extension('ext1') == 'not loaded' - assert em.unload_extension('ext2') == 'not loaded' - - # Load extension 2 - with tt.AssertPrints("Running ext2 load"): - assert em.load_extension('ext2') is None - - # Can't unload this - assert em.unload_extension('ext2') == 'no unload function' - - # But can reload it - with tt.AssertPrints("Running ext2 load"): - em.reload_extension('ext2') - - -def test_extension_builtins(): - em = get_ipython().extension_manager - with TemporaryDirectory() as td: - ext3 = os.path.join(td, "ext3.py") - with open(ext3, "w", encoding="utf-8") as f: - f.write(ext3_content) - - assert 'ext3' not in em.loaded - - with prepended_to_syspath(td): - # Load extension - with tt.AssertPrints("True"): - assert em.load_extension('ext3') is None - assert 'ext3' in em.loaded - - -def test_non_extension(): - em = get_ipython().extension_manager - assert em.load_extension("sys") == "no load function" diff --git a/spaces/SungBeom/chatwine-korean/.venv/Lib/site-packages/IPython/lib/deepreload.py b/spaces/SungBeom/chatwine-korean/.venv/Lib/site-packages/IPython/lib/deepreload.py deleted file mode 100644 index aaedab24255eed6b0213970be6e786d38e1cf900..0000000000000000000000000000000000000000 --- a/spaces/SungBeom/chatwine-korean/.venv/Lib/site-packages/IPython/lib/deepreload.py +++ /dev/null @@ -1,310 +0,0 @@ -# -*- coding: utf-8 -*- -""" -Provides a reload() function that acts recursively. - -Python's normal :func:`python:reload` function only reloads the module that it's -passed. 
The :func:`reload` function in this module also reloads everything -imported from that module, which is useful when you're changing files deep -inside a package. - -To use this as your default reload function, type this:: - - import builtins - from IPython.lib import deepreload - builtins.reload = deepreload.reload - -A reference to the original :func:`python:reload` is stored in this module as -:data:`original_reload`, so you can restore it later. - -This code is almost entirely based on knee.py, which is a Python -re-implementation of hierarchical module import. -""" -#***************************************************************************** -# Copyright (C) 2001 Nathaniel Gray -# -# Distributed under the terms of the BSD License. The full license is in -# the file COPYING, distributed as part of this software. -#***************************************************************************** - -import builtins as builtin_mod -from contextlib import contextmanager -import importlib -import sys - -from types import ModuleType -from warnings import warn -import types - -original_import = builtin_mod.__import__ - -@contextmanager -def replace_import_hook(new_import): - saved_import = builtin_mod.__import__ - builtin_mod.__import__ = new_import - try: - yield - finally: - builtin_mod.__import__ = saved_import - -def get_parent(globals, level): - """ - parent, name = get_parent(globals, level) - - Return the package that an import is being performed in. If globals comes - from the module foo.bar.bat (not itself a package), this returns the - sys.modules entry for foo.bar. If globals is from a package's __init__.py, - the package's entry in sys.modules is returned. - - If globals doesn't come from a package or a module in a package, or a - corresponding entry is not found in sys.modules, None is returned. - """ - orig_level = level - - if not level or not isinstance(globals, dict): - return None, '' - - pkgname = globals.get('__package__', None) - - if pkgname is not None: - # __package__ is set, so use it - if not hasattr(pkgname, 'rindex'): - raise ValueError('__package__ set to non-string') - if len(pkgname) == 0: - if level > 0: - raise ValueError('Attempted relative import in non-package') - return None, '' - name = pkgname - else: - # __package__ not set, so figure it out and set it - if '__name__' not in globals: - return None, '' - modname = globals['__name__'] - - if '__path__' in globals: - # __path__ is set, so modname is already the package name - globals['__package__'] = name = modname - else: - # Normal module, so work out the package name if any - lastdot = modname.rfind('.') - if lastdot < 0 < level: - raise ValueError("Attempted relative import in non-package") - if lastdot < 0: - globals['__package__'] = None - return None, '' - globals['__package__'] = name = modname[:lastdot] - - dot = len(name) - for x in range(level, 1, -1): - try: - dot = name.rindex('.', 0, dot) - except ValueError as e: - raise ValueError("attempted relative import beyond top-level " - "package") from e - name = name[:dot] - - try: - parent = sys.modules[name] - except BaseException as e: - if orig_level < 1: - warn("Parent module '%.200s' not found while handling absolute " - "import" % name) - parent = None - else: - raise SystemError("Parent module '%.200s' not loaded, cannot " - "perform relative import" % name) from e - - # We expect, but can't guarantee, if parent != None, that: - # - parent.__name__ == name - # - parent.__dict__ is globals - # If this is violated... Who cares? 
- return parent, name - -def load_next(mod, altmod, name, buf): - """ - mod, name, buf = load_next(mod, altmod, name, buf) - - altmod is either None or same as mod - """ - - if len(name) == 0: - # completely empty module name should only happen in - # 'from . import' (or '__import__("")') - return mod, None, buf - - dot = name.find('.') - if dot == 0: - raise ValueError('Empty module name') - - if dot < 0: - subname = name - next = None - else: - subname = name[:dot] - next = name[dot+1:] - - if buf != '': - buf += '.' - buf += subname - - result = import_submodule(mod, subname, buf) - if result is None and mod != altmod: - result = import_submodule(altmod, subname, subname) - if result is not None: - buf = subname - - if result is None: - raise ImportError("No module named %.200s" % name) - - return result, next, buf - - -# Need to keep track of what we've already reloaded to prevent cyclic evil -found_now = {} - -def import_submodule(mod, subname, fullname): - """m = import_submodule(mod, subname, fullname)""" - # Require: - # if mod == None: subname == fullname - # else: mod.__name__ + "." + subname == fullname - - global found_now - if fullname in found_now and fullname in sys.modules: - m = sys.modules[fullname] - else: - print('Reloading', fullname) - found_now[fullname] = 1 - oldm = sys.modules.get(fullname, None) - try: - if oldm is not None: - m = importlib.reload(oldm) - else: - m = importlib.import_module(subname, mod) - except: - # load_module probably removed name from modules because of - # the error. Put back the original module object. - if oldm: - sys.modules[fullname] = oldm - raise - - add_submodule(mod, m, fullname, subname) - - return m - -def add_submodule(mod, submod, fullname, subname): - """mod.{subname} = submod""" - if mod is None: - return #Nothing to do here. - - if submod is None: - submod = sys.modules[fullname] - - setattr(mod, subname, submod) - - return - -def ensure_fromlist(mod, fromlist, buf, recursive): - """Handle 'from module import a, b, c' imports.""" - if not hasattr(mod, '__path__'): - return - for item in fromlist: - if not hasattr(item, 'rindex'): - raise TypeError("Item in ``from list'' not a string") - if item == '*': - if recursive: - continue # avoid endless recursion - try: - all = mod.__all__ - except AttributeError: - pass - else: - ret = ensure_fromlist(mod, all, buf, 1) - if not ret: - return 0 - elif not hasattr(mod, item): - import_submodule(mod, item, buf + '.' + item) - -def deep_import_hook(name, globals=None, locals=None, fromlist=None, level=-1): - """Replacement for __import__()""" - parent, buf = get_parent(globals, level) - - head, name, buf = load_next(parent, None if level < 0 else parent, name, buf) - - tail = head - while name: - tail, name, buf = load_next(tail, tail, name, buf) - - # If tail is None, both get_parent and load_next found - # an empty module name: someone called __import__("") or - # doctored faulty bytecode - if tail is None: - raise ValueError('Empty module name') - - if not fromlist: - return head - - ensure_fromlist(tail, fromlist, buf, 0) - return tail - -modules_reloading = {} - -def deep_reload_hook(m): - """Replacement for reload().""" - # Hardcode this one as it would raise a NotImplementedError from the - # bowels of Python and screw up the import machinery after. - # unlike other imports the `exclude` list already in place is not enough. 
- - if m is types: - return m - if not isinstance(m, ModuleType): - raise TypeError("reload() argument must be module") - - name = m.__name__ - - if name not in sys.modules: - raise ImportError("reload(): module %.200s not in sys.modules" % name) - - global modules_reloading - try: - return modules_reloading[name] - except: - modules_reloading[name] = m - - try: - newm = importlib.reload(m) - except: - sys.modules[name] = m - raise - finally: - modules_reloading.clear() - return newm - -# Save the original hooks -original_reload = importlib.reload - -# Replacement for reload() -def reload( - module, - exclude=( - *sys.builtin_module_names, - "sys", - "os.path", - "builtins", - "__main__", - "numpy", - "numpy._globals", - ), -): - """Recursively reload all modules used in the given module. Optionally - takes a list of modules to exclude from reloading. The default exclude - list contains modules listed in sys.builtin_module_names with additional - sys, os.path, builtins and __main__, to prevent, e.g., resetting - display, exception, and io hooks. - """ - global found_now - for i in exclude: - found_now[i] = 1 - try: - with replace_import_hook(deep_import_hook): - return deep_reload_hook(module) - finally: - found_now = {} diff --git a/spaces/SungBeom/chatwine-korean/.venv/Lib/site-packages/IPython/sphinxext/__init__.py b/spaces/SungBeom/chatwine-korean/.venv/Lib/site-packages/IPython/sphinxext/__init__.py deleted file mode 100644 index e69de29bb2d1d6434b8b29ae775ad8c2e48c5391..0000000000000000000000000000000000000000 diff --git a/spaces/SungBeom/chatwine-korean/.venv/Lib/site-packages/docarray/base_doc/io/__init__.py b/spaces/SungBeom/chatwine-korean/.venv/Lib/site-packages/docarray/base_doc/io/__init__.py deleted file mode 100644 index e69de29bb2d1d6434b8b29ae775ad8c2e48c5391..0000000000000000000000000000000000000000 diff --git a/spaces/Supedsa/rvc-models/lib/infer_pack/modules.py b/spaces/Supedsa/rvc-models/lib/infer_pack/modules.py deleted file mode 100644 index c83289df7c79a4810dacd15c050148544ba0b6a9..0000000000000000000000000000000000000000 --- a/spaces/Supedsa/rvc-models/lib/infer_pack/modules.py +++ /dev/null @@ -1,522 +0,0 @@ -import copy -import math -import numpy as np -import scipy -import torch -from torch import nn -from torch.nn import functional as F - -from torch.nn import Conv1d, ConvTranspose1d, AvgPool1d, Conv2d -from torch.nn.utils import weight_norm, remove_weight_norm - -from lib.infer_pack import commons -from lib.infer_pack.commons import init_weights, get_padding -from lib.infer_pack.transforms import piecewise_rational_quadratic_transform - - -LRELU_SLOPE = 0.1 - - -class LayerNorm(nn.Module): - def __init__(self, channels, eps=1e-5): - super().__init__() - self.channels = channels - self.eps = eps - - self.gamma = nn.Parameter(torch.ones(channels)) - self.beta = nn.Parameter(torch.zeros(channels)) - - def forward(self, x): - x = x.transpose(1, -1) - x = F.layer_norm(x, (self.channels,), self.gamma, self.beta, self.eps) - return x.transpose(1, -1) - - -class ConvReluNorm(nn.Module): - def __init__( - self, - in_channels, - hidden_channels, - out_channels, - kernel_size, - n_layers, - p_dropout, - ): - super().__init__() - self.in_channels = in_channels - self.hidden_channels = hidden_channels - self.out_channels = out_channels - self.kernel_size = kernel_size - self.n_layers = n_layers - self.p_dropout = p_dropout - assert n_layers > 1, "Number of layers should be larger than 0." 
-
-        self.conv_layers = nn.ModuleList()
-        self.norm_layers = nn.ModuleList()
-        self.conv_layers.append(
-            nn.Conv1d(
-                in_channels, hidden_channels, kernel_size, padding=kernel_size // 2
-            )
-        )
-        self.norm_layers.append(LayerNorm(hidden_channels))
-        self.relu_drop = nn.Sequential(nn.ReLU(), nn.Dropout(p_dropout))
-        for _ in range(n_layers - 1):
-            self.conv_layers.append(
-                nn.Conv1d(
-                    hidden_channels,
-                    hidden_channels,
-                    kernel_size,
-                    padding=kernel_size // 2,
-                )
-            )
-            self.norm_layers.append(LayerNorm(hidden_channels))
-        self.proj = nn.Conv1d(hidden_channels, out_channels, 1)
-        self.proj.weight.data.zero_()
-        self.proj.bias.data.zero_()
-
-    def forward(self, x, x_mask):
-        x_org = x
-        for i in range(self.n_layers):
-            x = self.conv_layers[i](x * x_mask)
-            x = self.norm_layers[i](x)
-            x = self.relu_drop(x)
-        x = x_org + self.proj(x)
-        return x * x_mask
-
-
-class DDSConv(nn.Module):
-    """
-    Dilated and Depth-Separable Convolution
-    """
-
-    def __init__(self, channels, kernel_size, n_layers, p_dropout=0.0):
-        super().__init__()
-        self.channels = channels
-        self.kernel_size = kernel_size
-        self.n_layers = n_layers
-        self.p_dropout = p_dropout
-
-        self.drop = nn.Dropout(p_dropout)
-        self.convs_sep = nn.ModuleList()
-        self.convs_1x1 = nn.ModuleList()
-        self.norms_1 = nn.ModuleList()
-        self.norms_2 = nn.ModuleList()
-        for i in range(n_layers):
-            dilation = kernel_size**i
-            padding = (kernel_size * dilation - dilation) // 2
-            self.convs_sep.append(
-                nn.Conv1d(
-                    channels,
-                    channels,
-                    kernel_size,
-                    groups=channels,
-                    dilation=dilation,
-                    padding=padding,
-                )
-            )
-            self.convs_1x1.append(nn.Conv1d(channels, channels, 1))
-            self.norms_1.append(LayerNorm(channels))
-            self.norms_2.append(LayerNorm(channels))
-
-    def forward(self, x, x_mask, g=None):
-        if g is not None:
-            x = x + g
-        for i in range(self.n_layers):
-            y = self.convs_sep[i](x * x_mask)
-            y = self.norms_1[i](y)
-            y = F.gelu(y)
-            y = self.convs_1x1[i](y)
-            y = self.norms_2[i](y)
-            y = F.gelu(y)
-            y = self.drop(y)
-            x = x + y
-        return x * x_mask
-
-
-class WN(torch.nn.Module):
-    def __init__(
-        self,
-        hidden_channels,
-        kernel_size,
-        dilation_rate,
-        n_layers,
-        gin_channels=0,
-        p_dropout=0,
-    ):
-        super(WN, self).__init__()
-        assert kernel_size % 2 == 1
-        self.hidden_channels = hidden_channels
-        self.kernel_size = (kernel_size,)
-        self.dilation_rate = dilation_rate
-        self.n_layers = n_layers
-        self.gin_channels = gin_channels
-        self.p_dropout = p_dropout
-
-        self.in_layers = torch.nn.ModuleList()
-        self.res_skip_layers = torch.nn.ModuleList()
-        self.drop = nn.Dropout(p_dropout)
-
-        if gin_channels != 0:
-            cond_layer = torch.nn.Conv1d(
-                gin_channels, 2 * hidden_channels * n_layers, 1
-            )
-            self.cond_layer = torch.nn.utils.weight_norm(cond_layer, name="weight")
-
-        for i in range(n_layers):
-            dilation = dilation_rate**i
-            padding = int((kernel_size * dilation - dilation) / 2)
-            in_layer = torch.nn.Conv1d(
-                hidden_channels,
-                2 * hidden_channels,
-                kernel_size,
-                dilation=dilation,
-                padding=padding,
-            )
-            in_layer = torch.nn.utils.weight_norm(in_layer, name="weight")
-            self.in_layers.append(in_layer)
-
-            # last one is not necessary
-            if i < n_layers - 1:
-                res_skip_channels = 2 * hidden_channels
-            else:
-                res_skip_channels = hidden_channels
-
-            res_skip_layer = torch.nn.Conv1d(hidden_channels, res_skip_channels, 1)
-            res_skip_layer = torch.nn.utils.weight_norm(res_skip_layer, name="weight")
-            self.res_skip_layers.append(res_skip_layer)
-
-    def forward(self, x, x_mask, g=None, **kwargs):
-        output = torch.zeros_like(x)
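-        # `output` accumulates the per-layer skip contributions of the WaveNet-style gated residual stack below
-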
n_channels_tensor = torch.IntTensor([self.hidden_channels]) - - if g is not None: - g = self.cond_layer(g) - - for i in range(self.n_layers): - x_in = self.in_layers[i](x) - if g is not None: - cond_offset = i * 2 * self.hidden_channels - g_l = g[:, cond_offset : cond_offset + 2 * self.hidden_channels, :] - else: - g_l = torch.zeros_like(x_in) - - acts = commons.fused_add_tanh_sigmoid_multiply(x_in, g_l, n_channels_tensor) - acts = self.drop(acts) - - res_skip_acts = self.res_skip_layers[i](acts) - if i < self.n_layers - 1: - res_acts = res_skip_acts[:, : self.hidden_channels, :] - x = (x + res_acts) * x_mask - output = output + res_skip_acts[:, self.hidden_channels :, :] - else: - output = output + res_skip_acts - return output * x_mask - - def remove_weight_norm(self): - if self.gin_channels != 0: - torch.nn.utils.remove_weight_norm(self.cond_layer) - for l in self.in_layers: - torch.nn.utils.remove_weight_norm(l) - for l in self.res_skip_layers: - torch.nn.utils.remove_weight_norm(l) - - -class ResBlock1(torch.nn.Module): - def __init__(self, channels, kernel_size=3, dilation=(1, 3, 5)): - super(ResBlock1, self).__init__() - self.convs1 = nn.ModuleList( - [ - weight_norm( - Conv1d( - channels, - channels, - kernel_size, - 1, - dilation=dilation[0], - padding=get_padding(kernel_size, dilation[0]), - ) - ), - weight_norm( - Conv1d( - channels, - channels, - kernel_size, - 1, - dilation=dilation[1], - padding=get_padding(kernel_size, dilation[1]), - ) - ), - weight_norm( - Conv1d( - channels, - channels, - kernel_size, - 1, - dilation=dilation[2], - padding=get_padding(kernel_size, dilation[2]), - ) - ), - ] - ) - self.convs1.apply(init_weights) - - self.convs2 = nn.ModuleList( - [ - weight_norm( - Conv1d( - channels, - channels, - kernel_size, - 1, - dilation=1, - padding=get_padding(kernel_size, 1), - ) - ), - weight_norm( - Conv1d( - channels, - channels, - kernel_size, - 1, - dilation=1, - padding=get_padding(kernel_size, 1), - ) - ), - weight_norm( - Conv1d( - channels, - channels, - kernel_size, - 1, - dilation=1, - padding=get_padding(kernel_size, 1), - ) - ), - ] - ) - self.convs2.apply(init_weights) - - def forward(self, x, x_mask=None): - for c1, c2 in zip(self.convs1, self.convs2): - xt = F.leaky_relu(x, LRELU_SLOPE) - if x_mask is not None: - xt = xt * x_mask - xt = c1(xt) - xt = F.leaky_relu(xt, LRELU_SLOPE) - if x_mask is not None: - xt = xt * x_mask - xt = c2(xt) - x = xt + x - if x_mask is not None: - x = x * x_mask - return x - - def remove_weight_norm(self): - for l in self.convs1: - remove_weight_norm(l) - for l in self.convs2: - remove_weight_norm(l) - - -class ResBlock2(torch.nn.Module): - def __init__(self, channels, kernel_size=3, dilation=(1, 3)): - super(ResBlock2, self).__init__() - self.convs = nn.ModuleList( - [ - weight_norm( - Conv1d( - channels, - channels, - kernel_size, - 1, - dilation=dilation[0], - padding=get_padding(kernel_size, dilation[0]), - ) - ), - weight_norm( - Conv1d( - channels, - channels, - kernel_size, - 1, - dilation=dilation[1], - padding=get_padding(kernel_size, dilation[1]), - ) - ), - ] - ) - self.convs.apply(init_weights) - - def forward(self, x, x_mask=None): - for c in self.convs: - xt = F.leaky_relu(x, LRELU_SLOPE) - if x_mask is not None: - xt = xt * x_mask - xt = c(xt) - x = xt + x - if x_mask is not None: - x = x * x_mask - return x - - def remove_weight_norm(self): - for l in self.convs: - remove_weight_norm(l) - - -class Log(nn.Module): - def forward(self, x, x_mask, reverse=False, **kwargs): - if not reverse: - y = 
torch.log(torch.clamp_min(x, 1e-5)) * x_mask - logdet = torch.sum(-y, [1, 2]) - return y, logdet - else: - x = torch.exp(x) * x_mask - return x - - -class Flip(nn.Module): - def forward(self, x, *args, reverse=False, **kwargs): - x = torch.flip(x, [1]) - if not reverse: - logdet = torch.zeros(x.size(0)).to(dtype=x.dtype, device=x.device) - return x, logdet - else: - return x - - -class ElementwiseAffine(nn.Module): - def __init__(self, channels): - super().__init__() - self.channels = channels - self.m = nn.Parameter(torch.zeros(channels, 1)) - self.logs = nn.Parameter(torch.zeros(channels, 1)) - - def forward(self, x, x_mask, reverse=False, **kwargs): - if not reverse: - y = self.m + torch.exp(self.logs) * x - y = y * x_mask - logdet = torch.sum(self.logs * x_mask, [1, 2]) - return y, logdet - else: - x = (x - self.m) * torch.exp(-self.logs) * x_mask - return x - - -class ResidualCouplingLayer(nn.Module): - def __init__( - self, - channels, - hidden_channels, - kernel_size, - dilation_rate, - n_layers, - p_dropout=0, - gin_channels=0, - mean_only=False, - ): - assert channels % 2 == 0, "channels should be divisible by 2" - super().__init__() - self.channels = channels - self.hidden_channels = hidden_channels - self.kernel_size = kernel_size - self.dilation_rate = dilation_rate - self.n_layers = n_layers - self.half_channels = channels // 2 - self.mean_only = mean_only - - self.pre = nn.Conv1d(self.half_channels, hidden_channels, 1) - self.enc = WN( - hidden_channels, - kernel_size, - dilation_rate, - n_layers, - p_dropout=p_dropout, - gin_channels=gin_channels, - ) - self.post = nn.Conv1d(hidden_channels, self.half_channels * (2 - mean_only), 1) - self.post.weight.data.zero_() - self.post.bias.data.zero_() - - def forward(self, x, x_mask, g=None, reverse=False): - x0, x1 = torch.split(x, [self.half_channels] * 2, 1) - h = self.pre(x0) * x_mask - h = self.enc(h, x_mask, g=g) - stats = self.post(h) * x_mask - if not self.mean_only: - m, logs = torch.split(stats, [self.half_channels] * 2, 1) - else: - m = stats - logs = torch.zeros_like(m) - - if not reverse: - x1 = m + x1 * torch.exp(logs) * x_mask - x = torch.cat([x0, x1], 1) - logdet = torch.sum(logs, [1, 2]) - return x, logdet - else: - x1 = (x1 - m) * torch.exp(-logs) * x_mask - x = torch.cat([x0, x1], 1) - return x - - def remove_weight_norm(self): - self.enc.remove_weight_norm() - - -class ConvFlow(nn.Module): - def __init__( - self, - in_channels, - filter_channels, - kernel_size, - n_layers, - num_bins=10, - tail_bound=5.0, - ): - super().__init__() - self.in_channels = in_channels - self.filter_channels = filter_channels - self.kernel_size = kernel_size - self.n_layers = n_layers - self.num_bins = num_bins - self.tail_bound = tail_bound - self.half_channels = in_channels // 2 - - self.pre = nn.Conv1d(self.half_channels, filter_channels, 1) - self.convs = DDSConv(filter_channels, kernel_size, n_layers, p_dropout=0.0) - self.proj = nn.Conv1d( - filter_channels, self.half_channels * (num_bins * 3 - 1), 1 - ) - self.proj.weight.data.zero_() - self.proj.bias.data.zero_() - - def forward(self, x, x_mask, g=None, reverse=False): - x0, x1 = torch.split(x, [self.half_channels] * 2, 1) - h = self.pre(x0) - h = self.convs(h, x_mask, g=g) - h = self.proj(h) * x_mask - - b, c, t = x0.shape - h = h.reshape(b, c, -1, t).permute(0, 1, 3, 2) # [b, cx?, t] -> [b, c, t, ?] 
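-        # split the projected features into spline parameters: num_bins widths,
-        # num_bins heights, and (num_bins - 1) knot derivatives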
- - unnormalized_widths = h[..., : self.num_bins] / math.sqrt(self.filter_channels) - unnormalized_heights = h[..., self.num_bins : 2 * self.num_bins] / math.sqrt( - self.filter_channels - ) - unnormalized_derivatives = h[..., 2 * self.num_bins :] - - x1, logabsdet = piecewise_rational_quadratic_transform( - x1, - unnormalized_widths, - unnormalized_heights, - unnormalized_derivatives, - inverse=reverse, - tails="linear", - tail_bound=self.tail_bound, - ) - - x = torch.cat([x0, x1], 1) * x_mask - logdet = torch.sum(logabsdet * x_mask, [1, 2]) - if not reverse: - return x, logdet - else: - return x diff --git a/spaces/Supedsa/rvc-models/lib/infer_pack/onnx_inference.py b/spaces/Supedsa/rvc-models/lib/infer_pack/onnx_inference.py deleted file mode 100644 index 6517853be49e61c427cf7cd9b5ed203f6d5f367e..0000000000000000000000000000000000000000 --- a/spaces/Supedsa/rvc-models/lib/infer_pack/onnx_inference.py +++ /dev/null @@ -1,145 +0,0 @@ -import onnxruntime -import librosa -import numpy as np -import soundfile - - -class ContentVec: - def __init__(self, vec_path="pretrained/vec-768-layer-12.onnx", device=None): - print("load model(s) from {}".format(vec_path)) - if device == "cpu" or device is None: - providers = ["CPUExecutionProvider"] - elif device == "cuda": - providers = ["CUDAExecutionProvider", "CPUExecutionProvider"] - elif device == "dml": - providers = ["DmlExecutionProvider"] - else: - raise RuntimeError("Unsportted Device") - self.model = onnxruntime.InferenceSession(vec_path, providers=providers) - - def __call__(self, wav): - return self.forward(wav) - - def forward(self, wav): - feats = wav - if feats.ndim == 2: # double channels - feats = feats.mean(-1) - assert feats.ndim == 1, feats.ndim - feats = np.expand_dims(np.expand_dims(feats, 0), 0) - onnx_input = {self.model.get_inputs()[0].name: feats} - logits = self.model.run(None, onnx_input)[0] - return logits.transpose(0, 2, 1) - - -def get_f0_predictor(f0_predictor, hop_length, sampling_rate, **kargs): - if f0_predictor == "pm": - from lib.infer_pack.modules.F0Predictor.PMF0Predictor import PMF0Predictor - - f0_predictor_object = PMF0Predictor( - hop_length=hop_length, sampling_rate=sampling_rate - ) - elif f0_predictor == "harvest": - from lib.infer_pack.modules.F0Predictor.HarvestF0Predictor import ( - HarvestF0Predictor, - ) - - f0_predictor_object = HarvestF0Predictor( - hop_length=hop_length, sampling_rate=sampling_rate - ) - elif f0_predictor == "dio": - from lib.infer_pack.modules.F0Predictor.DioF0Predictor import DioF0Predictor - - f0_predictor_object = DioF0Predictor( - hop_length=hop_length, sampling_rate=sampling_rate - ) - else: - raise Exception("Unknown f0 predictor") - return f0_predictor_object - - -class OnnxRVC: - def __init__( - self, - model_path, - sr=40000, - hop_size=512, - vec_path="vec-768-layer-12", - device="cpu", - ): - vec_path = f"pretrained/{vec_path}.onnx" - self.vec_model = ContentVec(vec_path, device) - if device == "cpu" or device is None: - providers = ["CPUExecutionProvider"] - elif device == "cuda": - providers = ["CUDAExecutionProvider", "CPUExecutionProvider"] - elif device == "dml": - providers = ["DmlExecutionProvider"] - else: - raise RuntimeError("Unsportted Device") - self.model = onnxruntime.InferenceSession(model_path, providers=providers) - self.sampling_rate = sr - self.hop_size = hop_size - - def forward(self, hubert, hubert_length, pitch, pitchf, ds, rnd): - onnx_input = { - self.model.get_inputs()[0].name: hubert, - self.model.get_inputs()[1].name: hubert_length, - 
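            # Inputs 2-5, in the order the graph was exported with: quantized
            # pitch (int64 mel bins), continuous f0 in Hz (`pitchf`), the
            # speaker id (`ds`), and the random noise tensor (`rnd`).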
self.model.get_inputs()[2].name: pitch, - self.model.get_inputs()[3].name: pitchf, - self.model.get_inputs()[4].name: ds, - self.model.get_inputs()[5].name: rnd, - } - return (self.model.run(None, onnx_input)[0] * 32767).astype(np.int16) - - def inference( - self, - raw_path, - sid, - f0_method="dio", - f0_up_key=0, - pad_time=0.5, - cr_threshold=0.02, - ): - f0_min = 50 - f0_max = 1100 - f0_mel_min = 1127 * np.log(1 + f0_min / 700) - f0_mel_max = 1127 * np.log(1 + f0_max / 700) - f0_predictor = get_f0_predictor( - f0_method, - hop_length=self.hop_size, - sampling_rate=self.sampling_rate, - threshold=cr_threshold, - ) - wav, sr = librosa.load(raw_path, sr=self.sampling_rate) - org_length = len(wav) - if org_length / sr > 50.0: - raise RuntimeError("Reached Max Length") - - wav16k = librosa.resample(wav, orig_sr=self.sampling_rate, target_sr=16000) - wav16k = wav16k - - hubert = self.vec_model(wav16k) - hubert = np.repeat(hubert, 2, axis=2).transpose(0, 2, 1).astype(np.float32) - hubert_length = hubert.shape[1] - - pitchf = f0_predictor.compute_f0(wav, hubert_length) - pitchf = pitchf * 2 ** (f0_up_key / 12) - pitch = pitchf.copy() - f0_mel = 1127 * np.log(1 + pitch / 700) - f0_mel[f0_mel > 0] = (f0_mel[f0_mel > 0] - f0_mel_min) * 254 / ( - f0_mel_max - f0_mel_min - ) + 1 - f0_mel[f0_mel <= 1] = 1 - f0_mel[f0_mel > 255] = 255 - pitch = np.rint(f0_mel).astype(np.int64) - - pitchf = pitchf.reshape(1, len(pitchf)).astype(np.float32) - pitch = pitch.reshape(1, len(pitch)) - ds = np.array([sid]).astype(np.int64) - - rnd = np.random.randn(1, 192, hubert_length).astype(np.float32) - hubert_length = np.array([hubert_length]).astype(np.int64) - - out_wav = self.forward(hubert, hubert_length, pitch, pitchf, ds, rnd).squeeze() - out_wav = np.pad(out_wav, (0, 2 * self.hop_size), "constant") - return out_wav[0:org_length] diff --git a/spaces/Superlang/ImageProcessor/annotator/oneformer/detectron2/config/__init__.py b/spaces/Superlang/ImageProcessor/annotator/oneformer/detectron2/config/__init__.py deleted file mode 100644 index a78ed118685fcfd869f7a72caf6b94621530196a..0000000000000000000000000000000000000000 --- a/spaces/Superlang/ImageProcessor/annotator/oneformer/detectron2/config/__init__.py +++ /dev/null @@ -1,24 +0,0 @@ -# Copyright (c) Facebook, Inc. and its affiliates. -from .compat import downgrade_config, upgrade_config -from .config import CfgNode, get_cfg, global_cfg, set_global_cfg, configurable -from .instantiate import instantiate -from .lazy import LazyCall, LazyConfig - -__all__ = [ - "CfgNode", - "get_cfg", - "global_cfg", - "set_global_cfg", - "downgrade_config", - "upgrade_config", - "configurable", - "instantiate", - "LazyCall", - "LazyConfig", -] - - -from annotator.oneformer.detectron2.utils.env import fixup_module_metadata - -fixup_module_metadata(__name__, globals(), __all__) -del fixup_module_metadata diff --git a/spaces/Superlang/ImageProcessor/annotator/oneformer/detectron2/projects/deeplab/__init__.py b/spaces/Superlang/ImageProcessor/annotator/oneformer/detectron2/projects/deeplab/__init__.py deleted file mode 100644 index dcd88ff0c09d630577e3ac9f8afb5324a80a7be4..0000000000000000000000000000000000000000 --- a/spaces/Superlang/ImageProcessor/annotator/oneformer/detectron2/projects/deeplab/__init__.py +++ /dev/null @@ -1,5 +0,0 @@ -# Copyright (c) Facebook, Inc. and its affiliates. 
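# This package re-exports the DeepLab project's public entry points: the
# LR-scheduler builder, the config hook that adds DeepLab options to a
# detectron2 CfgNode, the DeepLab variant of the ResNet backbone, and the
# DeepLabV3 / DeepLabV3+ semantic-segmentation heads. A minimal usage sketch
# (assuming an `optimizer` constructed elsewhere):
#
#   from annotator.oneformer.detectron2.config import get_cfg
#   cfg = get_cfg()
#   add_deeplab_config(cfg)
#   scheduler = build_lr_scheduler(cfg, optimizer)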
-from .build_solver import build_lr_scheduler -from .config import add_deeplab_config -from .resnet import build_resnet_deeplab_backbone -from .semantic_seg import DeepLabV3Head, DeepLabV3PlusHead diff --git a/spaces/Superlang/ImageProcessor/annotator/oneformer/detectron2/utils/tracing.py b/spaces/Superlang/ImageProcessor/annotator/oneformer/detectron2/utils/tracing.py deleted file mode 100644 index 75661131505cee2eecd0b1c9dabcd4d7bd5453b2..0000000000000000000000000000000000000000 --- a/spaces/Superlang/ImageProcessor/annotator/oneformer/detectron2/utils/tracing.py +++ /dev/null @@ -1,71 +0,0 @@ -import inspect -import torch - -from annotator.oneformer.detectron2.utils.env import TORCH_VERSION - -try: - from torch.fx._symbolic_trace import is_fx_tracing as is_fx_tracing_current - - tracing_current_exists = True -except ImportError: - tracing_current_exists = False - -try: - from torch.fx._symbolic_trace import _orig_module_call - - tracing_legacy_exists = True -except ImportError: - tracing_legacy_exists = False - - -@torch.jit.ignore -def is_fx_tracing_legacy() -> bool: - """ - Returns a bool indicating whether torch.fx is currently symbolically tracing a module. - Can be useful for gating module logic that is incompatible with symbolic tracing. - """ - return torch.nn.Module.__call__ is not _orig_module_call - - -@torch.jit.ignore -def is_fx_tracing() -> bool: - """Returns whether execution is currently in - Torch FX tracing mode""" - if TORCH_VERSION >= (1, 10) and tracing_current_exists: - return is_fx_tracing_current() - elif tracing_legacy_exists: - return is_fx_tracing_legacy() - else: - # Can't find either current or legacy tracing indication code. - # Enabling this assert_fx_safe() call regardless of tracing status. - return False - - -@torch.jit.ignore -def assert_fx_safe(condition: bool, message: str) -> torch.Tensor: - """An FX-tracing safe version of assert. - Avoids erroneous type assertion triggering when types are masked inside - an fx.proxy.Proxy object during tracing. - Args: condition - either a boolean expression or a string representing - the condition to test. If this assert triggers an exception when tracing - due to dynamic control flow, try encasing the expression in quotation - marks and supplying it as a string.""" - # Must return a concrete tensor for compatibility with PyTorch <=1.8. - # If <=1.8 compatibility is not needed, return type can be converted to None - if not is_fx_tracing(): - try: - if isinstance(condition, str): - caller_frame = inspect.currentframe().f_back - torch._assert( - eval(condition, caller_frame.f_globals, caller_frame.f_locals), message - ) - return torch.ones(1) - else: - torch._assert(condition, message) - return torch.ones(1) - except torch.fx.proxy.TraceError as e: - print( - "Found a non-FX compatible assertion. Skipping the check. 
Failure is shown below" - + str(e) - ) - return torch.zeros(1) diff --git a/spaces/TNR-5/chatorO/README.md b/spaces/TNR-5/chatorO/README.md deleted file mode 100644 index fe30d12eee133ebb5031967d895aa10751fe2265..0000000000000000000000000000000000000000 --- a/spaces/TNR-5/chatorO/README.md +++ /dev/null @@ -1,11 +0,0 @@ ---- -title: gpt4f -emoji: ♾️💬 -colorFrom: indigo -colorTo: yellow -sdk: docker -pinned: false -duplicated_from: rishi1985/gpt4f-4 ---- - -Check out the configuration reference at https://huggingface.co/docs/hub/spaces-config-reference \ No newline at end of file diff --git a/spaces/TandCAcceptMe/face-swap-docker/mynewshinyroop/Lib/site-packages/setuptools/_vendor/more_itertools/recipes.py b/spaces/TandCAcceptMe/face-swap-docker/mynewshinyroop/Lib/site-packages/setuptools/_vendor/more_itertools/recipes.py deleted file mode 100644 index 521abd7c2ca633f90a5ba13a8060c5c3d0c32205..0000000000000000000000000000000000000000 --- a/spaces/TandCAcceptMe/face-swap-docker/mynewshinyroop/Lib/site-packages/setuptools/_vendor/more_itertools/recipes.py +++ /dev/null @@ -1,620 +0,0 @@ -"""Imported from the recipes section of the itertools documentation. - -All functions taken from the recipes section of the itertools library docs -[1]_. -Some backward-compatible usability improvements have been made. - -.. [1] http://docs.python.org/library/itertools.html#recipes - -""" -import warnings -from collections import deque -from itertools import ( - chain, - combinations, - count, - cycle, - groupby, - islice, - repeat, - starmap, - tee, - zip_longest, -) -import operator -from random import randrange, sample, choice - -__all__ = [ - 'all_equal', - 'consume', - 'convolve', - 'dotproduct', - 'first_true', - 'flatten', - 'grouper', - 'iter_except', - 'ncycles', - 'nth', - 'nth_combination', - 'padnone', - 'pad_none', - 'pairwise', - 'partition', - 'powerset', - 'prepend', - 'quantify', - 'random_combination_with_replacement', - 'random_combination', - 'random_permutation', - 'random_product', - 'repeatfunc', - 'roundrobin', - 'tabulate', - 'tail', - 'take', - 'unique_everseen', - 'unique_justseen', -] - - -def take(n, iterable): - """Return first *n* items of the iterable as a list. - - >>> take(3, range(10)) - [0, 1, 2] - - If there are fewer than *n* items in the iterable, all of them are - returned. - - >>> take(10, range(3)) - [0, 1, 2] - - """ - return list(islice(iterable, n)) - - -def tabulate(function, start=0): - """Return an iterator over the results of ``func(start)``, - ``func(start + 1)``, ``func(start + 2)``... - - *func* should be a function that accepts one integer argument. - - If *start* is not specified it defaults to 0. It will be incremented each - time the iterator is advanced. - - >>> square = lambda x: x ** 2 - >>> iterator = tabulate(square, -3) - >>> take(4, iterator) - [9, 4, 1, 0] - - """ - return map(function, count(start)) - - -def tail(n, iterable): - """Return an iterator over the last *n* items of *iterable*. - - >>> t = tail(3, 'ABCDEFG') - >>> list(t) - ['E', 'F', 'G'] - - """ - return iter(deque(iterable, maxlen=n)) - - -def consume(iterator, n=None): - """Advance *iterable* by *n* steps. If *n* is ``None``, consume it - entirely. - - Efficiently exhausts an iterator without returning values. Defaults to - consuming the whole iterator, but an optional second argument may be - provided to limit consumption. 
- - >>> i = (x for x in range(10)) - >>> next(i) - 0 - >>> consume(i, 3) - >>> next(i) - 4 - >>> consume(i) - >>> next(i) - Traceback (most recent call last): - File "", line 1, in - StopIteration - - If the iterator has fewer items remaining than the provided limit, the - whole iterator will be consumed. - - >>> i = (x for x in range(3)) - >>> consume(i, 5) - >>> next(i) - Traceback (most recent call last): - File "", line 1, in - StopIteration - - """ - # Use functions that consume iterators at C speed. - if n is None: - # feed the entire iterator into a zero-length deque - deque(iterator, maxlen=0) - else: - # advance to the empty slice starting at position n - next(islice(iterator, n, n), None) - - -def nth(iterable, n, default=None): - """Returns the nth item or a default value. - - >>> l = range(10) - >>> nth(l, 3) - 3 - >>> nth(l, 20, "zebra") - 'zebra' - - """ - return next(islice(iterable, n, None), default) - - -def all_equal(iterable): - """ - Returns ``True`` if all the elements are equal to each other. - - >>> all_equal('aaaa') - True - >>> all_equal('aaab') - False - - """ - g = groupby(iterable) - return next(g, True) and not next(g, False) - - -def quantify(iterable, pred=bool): - """Return the how many times the predicate is true. - - >>> quantify([True, False, True]) - 2 - - """ - return sum(map(pred, iterable)) - - -def pad_none(iterable): - """Returns the sequence of elements and then returns ``None`` indefinitely. - - >>> take(5, pad_none(range(3))) - [0, 1, 2, None, None] - - Useful for emulating the behavior of the built-in :func:`map` function. - - See also :func:`padded`. - - """ - return chain(iterable, repeat(None)) - - -padnone = pad_none - - -def ncycles(iterable, n): - """Returns the sequence elements *n* times - - >>> list(ncycles(["a", "b"], 3)) - ['a', 'b', 'a', 'b', 'a', 'b'] - - """ - return chain.from_iterable(repeat(tuple(iterable), n)) - - -def dotproduct(vec1, vec2): - """Returns the dot product of the two iterables. - - >>> dotproduct([10, 10], [20, 20]) - 400 - - """ - return sum(map(operator.mul, vec1, vec2)) - - -def flatten(listOfLists): - """Return an iterator flattening one level of nesting in a list of lists. - - >>> list(flatten([[0, 1], [2, 3]])) - [0, 1, 2, 3] - - See also :func:`collapse`, which can flatten multiple levels of nesting. - - """ - return chain.from_iterable(listOfLists) - - -def repeatfunc(func, times=None, *args): - """Call *func* with *args* repeatedly, returning an iterable over the - results. - - If *times* is specified, the iterable will terminate after that many - repetitions: - - >>> from operator import add - >>> times = 4 - >>> args = 3, 5 - >>> list(repeatfunc(add, times, *args)) - [8, 8, 8, 8] - - If *times* is ``None`` the iterable will not terminate: - - >>> from random import randrange - >>> times = None - >>> args = 1, 11 - >>> take(6, repeatfunc(randrange, times, *args)) # doctest:+SKIP - [2, 4, 8, 1, 8, 4] - - """ - if times is None: - return starmap(func, repeat(args)) - return starmap(func, repeat(args, times)) - - -def _pairwise(iterable): - """Returns an iterator of paired items, overlapping, from the original - - >>> take(4, pairwise(count())) - [(0, 1), (1, 2), (2, 3), (3, 4)] - - On Python 3.10 and above, this is an alias for :func:`itertools.pairwise`. 
- - """ - a, b = tee(iterable) - next(b, None) - yield from zip(a, b) - - -try: - from itertools import pairwise as itertools_pairwise -except ImportError: - pairwise = _pairwise -else: - - def pairwise(iterable): - yield from itertools_pairwise(iterable) - - pairwise.__doc__ = _pairwise.__doc__ - - -def grouper(iterable, n, fillvalue=None): - """Collect data into fixed-length chunks or blocks. - - >>> list(grouper('ABCDEFG', 3, 'x')) - [('A', 'B', 'C'), ('D', 'E', 'F'), ('G', 'x', 'x')] - - """ - if isinstance(iterable, int): - warnings.warn( - "grouper expects iterable as first parameter", DeprecationWarning - ) - n, iterable = iterable, n - args = [iter(iterable)] * n - return zip_longest(fillvalue=fillvalue, *args) - - -def roundrobin(*iterables): - """Yields an item from each iterable, alternating between them. - - >>> list(roundrobin('ABC', 'D', 'EF')) - ['A', 'D', 'E', 'B', 'F', 'C'] - - This function produces the same output as :func:`interleave_longest`, but - may perform better for some inputs (in particular when the number of - iterables is small). - - """ - # Recipe credited to George Sakkis - pending = len(iterables) - nexts = cycle(iter(it).__next__ for it in iterables) - while pending: - try: - for next in nexts: - yield next() - except StopIteration: - pending -= 1 - nexts = cycle(islice(nexts, pending)) - - -def partition(pred, iterable): - """ - Returns a 2-tuple of iterables derived from the input iterable. - The first yields the items that have ``pred(item) == False``. - The second yields the items that have ``pred(item) == True``. - - >>> is_odd = lambda x: x % 2 != 0 - >>> iterable = range(10) - >>> even_items, odd_items = partition(is_odd, iterable) - >>> list(even_items), list(odd_items) - ([0, 2, 4, 6, 8], [1, 3, 5, 7, 9]) - - If *pred* is None, :func:`bool` is used. - - >>> iterable = [0, 1, False, True, '', ' '] - >>> false_items, true_items = partition(None, iterable) - >>> list(false_items), list(true_items) - ([0, False, ''], [1, True, ' ']) - - """ - if pred is None: - pred = bool - - evaluations = ((pred(x), x) for x in iterable) - t1, t2 = tee(evaluations) - return ( - (x for (cond, x) in t1 if not cond), - (x for (cond, x) in t2 if cond), - ) - - -def powerset(iterable): - """Yields all possible subsets of the iterable. - - >>> list(powerset([1, 2, 3])) - [(), (1,), (2,), (3,), (1, 2), (1, 3), (2, 3), (1, 2, 3)] - - :func:`powerset` will operate on iterables that aren't :class:`set` - instances, so repeated elements in the input will produce repeated elements - in the output. Use :func:`unique_everseen` on the input to avoid generating - duplicates: - - >>> seq = [1, 1, 0] - >>> list(powerset(seq)) - [(), (1,), (1,), (0,), (1, 1), (1, 0), (1, 0), (1, 1, 0)] - >>> from more_itertools import unique_everseen - >>> list(powerset(unique_everseen(seq))) - [(), (1,), (0,), (1, 0)] - - """ - s = list(iterable) - return chain.from_iterable(combinations(s, r) for r in range(len(s) + 1)) - - -def unique_everseen(iterable, key=None): - """ - Yield unique elements, preserving order. - - >>> list(unique_everseen('AAAABBBCCDAABBB')) - ['A', 'B', 'C', 'D'] - >>> list(unique_everseen('ABBCcAD', str.lower)) - ['A', 'B', 'C', 'D'] - - Sequences with a mix of hashable and unhashable items can be used. - The function will be slower (i.e., `O(n^2)`) for unhashable items. - - Remember that ``list`` objects are unhashable - you can use the *key* - parameter to transform the list to a tuple (which is hashable) to - avoid a slowdown. 
- - >>> iterable = ([1, 2], [2, 3], [1, 2]) - >>> list(unique_everseen(iterable)) # Slow - [[1, 2], [2, 3]] - >>> list(unique_everseen(iterable, key=tuple)) # Faster - [[1, 2], [2, 3]] - - Similary, you may want to convert unhashable ``set`` objects with - ``key=frozenset``. For ``dict`` objects, - ``key=lambda x: frozenset(x.items())`` can be used. - - """ - seenset = set() - seenset_add = seenset.add - seenlist = [] - seenlist_add = seenlist.append - use_key = key is not None - - for element in iterable: - k = key(element) if use_key else element - try: - if k not in seenset: - seenset_add(k) - yield element - except TypeError: - if k not in seenlist: - seenlist_add(k) - yield element - - -def unique_justseen(iterable, key=None): - """Yields elements in order, ignoring serial duplicates - - >>> list(unique_justseen('AAAABBBCCDAABBB')) - ['A', 'B', 'C', 'D', 'A', 'B'] - >>> list(unique_justseen('ABBCcAD', str.lower)) - ['A', 'B', 'C', 'A', 'D'] - - """ - return map(next, map(operator.itemgetter(1), groupby(iterable, key))) - - -def iter_except(func, exception, first=None): - """Yields results from a function repeatedly until an exception is raised. - - Converts a call-until-exception interface to an iterator interface. - Like ``iter(func, sentinel)``, but uses an exception instead of a sentinel - to end the loop. - - >>> l = [0, 1, 2] - >>> list(iter_except(l.pop, IndexError)) - [2, 1, 0] - - """ - try: - if first is not None: - yield first() - while 1: - yield func() - except exception: - pass - - -def first_true(iterable, default=None, pred=None): - """ - Returns the first true value in the iterable. - - If no true value is found, returns *default* - - If *pred* is not None, returns the first item for which - ``pred(item) == True`` . - - >>> first_true(range(10)) - 1 - >>> first_true(range(10), pred=lambda x: x > 5) - 6 - >>> first_true(range(10), default='missing', pred=lambda x: x > 9) - 'missing' - - """ - return next(filter(pred, iterable), default) - - -def random_product(*args, repeat=1): - """Draw an item at random from each of the input iterables. - - >>> random_product('abc', range(4), 'XYZ') # doctest:+SKIP - ('c', 3, 'Z') - - If *repeat* is provided as a keyword argument, that many items will be - drawn from each iterable. - - >>> random_product('abcd', range(4), repeat=2) # doctest:+SKIP - ('a', 2, 'd', 3) - - This equivalent to taking a random selection from - ``itertools.product(*args, **kwarg)``. - - """ - pools = [tuple(pool) for pool in args] * repeat - return tuple(choice(pool) for pool in pools) - - -def random_permutation(iterable, r=None): - """Return a random *r* length permutation of the elements in *iterable*. - - If *r* is not specified or is ``None``, then *r* defaults to the length of - *iterable*. - - >>> random_permutation(range(5)) # doctest:+SKIP - (3, 4, 0, 1, 2) - - This equivalent to taking a random selection from - ``itertools.permutations(iterable, r)``. - - """ - pool = tuple(iterable) - r = len(pool) if r is None else r - return tuple(sample(pool, r)) - - -def random_combination(iterable, r): - """Return a random *r* length subsequence of the elements in *iterable*. - - >>> random_combination(range(5), 3) # doctest:+SKIP - (2, 3, 4) - - This equivalent to taking a random selection from - ``itertools.combinations(iterable, r)``. 
- - """ - pool = tuple(iterable) - n = len(pool) - indices = sorted(sample(range(n), r)) - return tuple(pool[i] for i in indices) - - -def random_combination_with_replacement(iterable, r): - """Return a random *r* length subsequence of elements in *iterable*, - allowing individual elements to be repeated. - - >>> random_combination_with_replacement(range(3), 5) # doctest:+SKIP - (0, 0, 1, 2, 2) - - This equivalent to taking a random selection from - ``itertools.combinations_with_replacement(iterable, r)``. - - """ - pool = tuple(iterable) - n = len(pool) - indices = sorted(randrange(n) for i in range(r)) - return tuple(pool[i] for i in indices) - - -def nth_combination(iterable, r, index): - """Equivalent to ``list(combinations(iterable, r))[index]``. - - The subsequences of *iterable* that are of length *r* can be ordered - lexicographically. :func:`nth_combination` computes the subsequence at - sort position *index* directly, without computing the previous - subsequences. - - >>> nth_combination(range(5), 3, 5) - (0, 3, 4) - - ``ValueError`` will be raised If *r* is negative or greater than the length - of *iterable*. - ``IndexError`` will be raised if the given *index* is invalid. - """ - pool = tuple(iterable) - n = len(pool) - if (r < 0) or (r > n): - raise ValueError - - c = 1 - k = min(r, n - r) - for i in range(1, k + 1): - c = c * (n - k + i) // i - - if index < 0: - index += c - - if (index < 0) or (index >= c): - raise IndexError - - result = [] - while r: - c, n, r = c * r // n, n - 1, r - 1 - while index >= c: - index -= c - c, n = c * (n - r) // n, n - 1 - result.append(pool[-1 - n]) - - return tuple(result) - - -def prepend(value, iterator): - """Yield *value*, followed by the elements in *iterator*. - - >>> value = '0' - >>> iterator = ['1', '2', '3'] - >>> list(prepend(value, iterator)) - ['0', '1', '2', '3'] - - To prepend multiple values, see :func:`itertools.chain` - or :func:`value_chain`. - - """ - return chain([value], iterator) - - -def convolve(signal, kernel): - """Convolve the iterable *signal* with the iterable *kernel*. - - >>> signal = (1, 2, 3, 4, 5) - >>> kernel = [3, 2, 1] - >>> list(convolve(signal, kernel)) - [3, 8, 14, 20, 26, 14, 5] - - Note: the input arguments are not interchangeable, as the *kernel* - is immediately consumed and stored. 
- - """ - kernel = tuple(kernel)[::-1] - n = len(kernel) - window = deque([0], maxlen=n) * n - for x in chain(signal, repeat(0, n - 1)): - window.append(x) - yield sum(map(operator.mul, kernel, window)) diff --git a/spaces/TencentARC/T2I-Adapter-SDXL/assets/README.md b/spaces/TencentARC/T2I-Adapter-SDXL/assets/README.md deleted file mode 100644 index 7400bb40f776b376192c18c43b6f910d29751648..0000000000000000000000000000000000000000 --- a/spaces/TencentARC/T2I-Adapter-SDXL/assets/README.md +++ /dev/null @@ -1,8 +0,0 @@ -These images were from the following URL: - -- https://huggingface.co/Adapter/t2iadapter/resolve/main/figs_SDXLV1.0/org_canny.jpg -- https://huggingface.co/Adapter/t2iadapter/resolve/main/figs_SDXLV1.0/org_sketch.png -- https://huggingface.co/Adapter/t2iadapter/resolve/main/figs_SDXLV1.0/org_lin.jpg -- https://huggingface.co/Adapter/t2iadapter/resolve/main/figs_SDXLV1.0/org_mid.jpg -- https://huggingface.co/Adapter/t2iadapter/resolve/main/figs_SDXLV1.0/org_zeo.jpg -- https://huggingface.co/Adapter/t2iadapter/resolve/main/people.jpg diff --git a/spaces/TencentARC/VLog/models/grit_src/third_party/CenterNet2/detectron2/utils/serialize.py b/spaces/TencentARC/VLog/models/grit_src/third_party/CenterNet2/detectron2/utils/serialize.py deleted file mode 100644 index 0b38862804b70cf1159a9bc93acdef73c184d883..0000000000000000000000000000000000000000 --- a/spaces/TencentARC/VLog/models/grit_src/third_party/CenterNet2/detectron2/utils/serialize.py +++ /dev/null @@ -1,32 +0,0 @@ -# Copyright (c) Facebook, Inc. and its affiliates. -import cloudpickle - - -class PicklableWrapper(object): - """ - Wrap an object to make it more picklable, note that it uses - heavy weight serialization libraries that are slower than pickle. - It's best to use it only on closures (which are usually not picklable). - - This is a simplified version of - https://github.com/joblib/joblib/blob/master/joblib/externals/loky/cloudpickle_wrapper.py - """ - - def __init__(self, obj): - while isinstance(obj, PicklableWrapper): - # Wrapping an object twice is no-op - obj = obj._obj - self._obj = obj - - def __reduce__(self): - s = cloudpickle.dumps(self._obj) - return cloudpickle.loads, (s,) - - def __call__(self, *args, **kwargs): - return self._obj(*args, **kwargs) - - def __getattr__(self, attr): - # Ensure that the wrapped object can be used seamlessly as the previous object. 
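        # `__getattr__` is only invoked when normal attribute lookup fails;
        # skipping the delegation for "_obj" avoids infinite recursion while
        # the wrapper is being unpickled and `self._obj` does not exist yet.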
- if attr not in ["_obj"]: - return getattr(self._obj, attr) - return getattr(self, attr) diff --git a/spaces/TencentARC/VLog/models/grit_src/third_party/CenterNet2/projects/CenterNet2/centernet/modeling/roi_heads/fed_loss.py b/spaces/TencentARC/VLog/models/grit_src/third_party/CenterNet2/projects/CenterNet2/centernet/modeling/roi_heads/fed_loss.py deleted file mode 100644 index 290f0f07204e78ef2c4ff918aa500b04330279e6..0000000000000000000000000000000000000000 --- a/spaces/TencentARC/VLog/models/grit_src/third_party/CenterNet2/projects/CenterNet2/centernet/modeling/roi_heads/fed_loss.py +++ /dev/null @@ -1,31 +0,0 @@ -import torch -import json -import numpy as np -from torch.nn import functional as F - -def load_class_freq( - path='datasets/lvis/lvis_v1_train_cat_info.json', - freq_weight=0.5): - cat_info = json.load(open(path, 'r')) - cat_info = torch.tensor( - [c['image_count'] for c in sorted(cat_info, key=lambda x: x['id'])]) - freq_weight = cat_info.float() ** freq_weight - return freq_weight - -def get_fed_loss_inds( - gt_classes, num_sample_cats=50, C=1203, \ - weight=None, fed_cls_inds=-1): - appeared = torch.unique(gt_classes) # C' - prob = appeared.new_ones(C + 1).float() - prob[-1] = 0 - if len(appeared) < num_sample_cats: - if weight is not None: - prob[:C] = weight.float().clone() - prob[appeared] = 0 - if fed_cls_inds > 0: - prob[fed_cls_inds:] = 0 - more_appeared = torch.multinomial( - prob, num_sample_cats - len(appeared), - replacement=False) - appeared = torch.cat([appeared, more_appeared]) - return appeared \ No newline at end of file diff --git a/spaces/Thaweewat/ControlNet-Architecture/app.py b/spaces/Thaweewat/ControlNet-Architecture/app.py deleted file mode 100644 index b6206efea8fcd1556b7ee907e13cd1b14694d6d8..0000000000000000000000000000000000000000 --- a/spaces/Thaweewat/ControlNet-Architecture/app.py +++ /dev/null @@ -1,119 +0,0 @@ -import cv2 -import einops -import gradio as gr -import numpy as np -import torch - -from pytorch_lightning import seed_everything -from util import resize_image, HWC3, apply_canny -from ldm.models.diffusion.ddim import DDIMSampler -from annotator.openpose import apply_openpose -from cldm.model import create_model, load_state_dict -from huggingface_hub import hf_hub_url, cached_download - - -REPO_ID = "lllyasviel/ControlNet" -scribble_checkpoint = "models/control_sd15_scribble.pth" -scribble_model = create_model('./models/cldm_v15.yaml').cpu() -scribble_model.load_state_dict(load_state_dict(cached_download( - hf_hub_url(REPO_ID, scribble_checkpoint) -), location='cpu')) -scribble_model = scribble_model.cuda() -ddim_sampler_scribble = DDIMSampler(scribble_model) -save_memory = False - -def process(input_image, prompt, input_control, num_samples, image_resolution, ddim_steps, scale, seed, eta, low_threshold, high_threshold): - # TODO: Clean Function for single Task - - if input_control == "Scribble": - return process_scribble(input_image, prompt, num_samples, image_resolution, ddim_steps, scale, seed, eta) - -def process_scribble(input_image, prompt, num_samples, image_resolution, ddim_steps, scale, seed, eta): - - with torch.no_grad(): - img = resize_image(HWC3(input_image), image_resolution) - H, W, C = img.shape - - detected_map = np.zeros_like(img, dtype=np.uint8) - detected_map[np.min(img, axis=2) < 127] = 255 - - control = torch.from_numpy(detected_map.copy()).float().cuda() / 255.0 - control = torch.stack([control for _ in range(num_samples)], dim=0) - control = einops.rearrange(control, 'b h w c -> b c h w').clone() - - 
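        # At this point `control` is a float tensor in [0, 1] with shape
        # (num_samples, 3, H, W): the binarized scribble map, replicated once
        # per requested sample and rearranged from HWC to CHW, ready to be
        # passed to the model as the `c_concat` conditioning below.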
seed_everything(seed) - - if save_memory: - scribble_model.low_vram_shift(is_diffusing=False) - - cond = {"c_concat": [control], "c_crossattn": [scribble_model.get_learned_conditioning([prompt + ', ' + a_prompt] * num_samples)]} - un_cond = {"c_concat": [control], "c_crossattn": [scribble_model.get_learned_conditioning([n_prompt] * num_samples)]} - shape = (4, H // 8, W // 8) - - if save_memory: - scribble_model.low_vram_shift(is_diffusing=False) - - samples, intermediates = ddim_sampler_scribble.sample(ddim_steps, num_samples, - shape, cond, verbose=False, eta=eta, - unconditional_guidance_scale=scale, - unconditional_conditioning=un_cond) - - if save_memory: - scribble_model.low_vram_shift(is_diffusing=False) - - x_samples = scribble_model.decode_first_stage(samples) - x_samples = (einops.rearrange(x_samples, 'b c h w -> b h w c') * 127.5 + 127.5).cpu().numpy().clip(0, 255).astype(np.uint8) - - results = [x_samples[i] for i in range(num_samples)] - return [255 - detected_map] + results - - -def create_canvas(w, h): - new_control_options = ["Interactive Scribble"] - return np.zeros(shape=(h, w, 3), dtype=np.uint8) + 255 - - -block = gr.Blocks().queue() -control_task_list = [ - "Scribble" -] - -a_prompt = 'best quality, extremely detailed, architecture render, photorealistic, hyper realistic, surreal, dali, 3d rendering, render, 8k, 16k, extremely detailed, unreal engine, octane, maya' -n_prompt = 'longbody, lowres, bad anatomy, bad hands, missing fingers, pubic hair,extra digit, number, text, watermark, fewer digits, cropped, worst quality, low quality' - -with block: - gr.Markdown("## ControlNet - Architectural Sketch to Render Image") - gr.HTML(''' -

      - Demo for ControlNet, optimized for architectural sketches, based on lllyasviel's ControlNet implementation. - 

      - ''') - gr.HTML(''' -

      - HF Space created by Thaweewat Rugsujarit. If you have any suggestions or feedback, please feel free to contact me via LinkedIn. - 

      - ''') - with gr.Row(): - with gr.Column(): - input_image = gr.Image(source='upload', type="numpy") - input_control = gr.Dropdown(control_task_list, value="Scribble", label="Task") - prompt = gr.Textbox(label="Architectural Style") - run_button = gr.Button(label="Run") - - with gr.Accordion("Advanced options", open=False): - num_samples = gr.Slider(label="Images", minimum=1, maximum=12, value=1, step=1) - image_resolution = gr.Slider(label="Image Resolution", minimum=256, maximum=768, value=512, step=256) - low_threshold = gr.Slider(label="Canny low threshold", minimum=1, maximum=255, value=100, step=1) - high_threshold = gr.Slider(label="Canny high threshold", minimum=1, maximum=255, value=200, step=1) - ddim_steps = gr.Slider(label="Steps", minimum=1, maximum=100, value=20, step=1) - scale = gr.Slider(label="Guidance Scale", minimum=0.1, maximum=30.0, value=9.0, step=0.1) - seed = gr.Slider(label="Seed", minimum=0, maximum=2147483647, step=1, randomize=True) - eta = gr.Slider(label="eta (DDIM)", minimum=0.0,maximum =1.0, value=0.0, step=0.1) - - with gr.Column(): - result_gallery = gr.Gallery(label='Output', show_label=False, elem_id="gallery").style(grid=2, height='auto') - ips = [input_image, prompt, input_control, num_samples, image_resolution, ddim_steps, scale, seed, eta, low_threshold, high_threshold] - run_button.click(fn=process, inputs=ips, outputs=[result_gallery]) - gr.Markdown("![visitor badge](https://visitor-badge.glitch.me/badge?page_id=Thaweewat.ControlNet-Architecture)") - -block.launch(debug = True) \ No newline at end of file diff --git a/spaces/VIOD/Real-CUGAN/upcunet_v3.py b/spaces/VIOD/Real-CUGAN/upcunet_v3.py deleted file mode 100644 index f7919a6cc9efe3b8af73a73e30825a4c7d7d76da..0000000000000000000000000000000000000000 --- a/spaces/VIOD/Real-CUGAN/upcunet_v3.py +++ /dev/null @@ -1,714 +0,0 @@ -import torch -from torch import nn as nn -from torch.nn import functional as F -import os, sys -import numpy as np - -root_path = os.path.abspath('.') -sys.path.append(root_path) - - -class SEBlock(nn.Module): - def __init__(self, in_channels, reduction=8, bias=False): - super(SEBlock, self).__init__() - self.conv1 = nn.Conv2d(in_channels, in_channels // reduction, 1, 1, 0, bias=bias) - self.conv2 = nn.Conv2d(in_channels // reduction, in_channels, 1, 1, 0, bias=bias) - - def forward(self, x): - if ("Half" in x.type()): # torch.HalfTensor/torch.cuda.HalfTensor - x0 = torch.mean(x.float(), dim=(2, 3), keepdim=True).half() - else: - x0 = torch.mean(x, dim=(2, 3), keepdim=True) - x0 = self.conv1(x0) - x0 = F.relu(x0, inplace=True) - x0 = self.conv2(x0) - x0 = torch.sigmoid(x0) - x = torch.mul(x, x0) - return x - - def forward_mean(self, x, x0): - x0 = self.conv1(x0) - x0 = F.relu(x0, inplace=True) - x0 = self.conv2(x0) - x0 = torch.sigmoid(x0) - x = torch.mul(x, x0) - return x - - -class UNetConv(nn.Module): - def __init__(self, in_channels, mid_channels, out_channels, se): - super(UNetConv, self).__init__() - self.conv = nn.Sequential( - nn.Conv2d(in_channels, mid_channels, 3, 1, 0), - nn.LeakyReLU(0.1, inplace=True), - nn.Conv2d(mid_channels, out_channels, 3, 1, 0), - nn.LeakyReLU(0.1, inplace=True), - ) - if se: - self.seblock = SEBlock(out_channels, reduction=8, bias=True) - else: - self.seblock = None - - def forward(self, x): - z = self.conv(x) - if self.seblock is not None: - z = self.seblock(z) - return z - - -class UNet1(nn.Module): - def __init__(self, in_channels, out_channels, deconv): - super(UNet1, self).__init__() - self.conv1 = UNetConv(in_channels, 32, 
64, se=False) - self.conv1_down = nn.Conv2d(64, 64, 2, 2, 0) - self.conv2 = UNetConv(64, 128, 64, se=True) - self.conv2_up = nn.ConvTranspose2d(64, 64, 2, 2, 0) - self.conv3 = nn.Conv2d(64, 64, 3, 1, 0) - - if deconv: - self.conv_bottom = nn.ConvTranspose2d(64, out_channels, 4, 2, 3) - else: - self.conv_bottom = nn.Conv2d(64, out_channels, 3, 1, 0) - - for m in self.modules(): - if isinstance(m, (nn.Conv2d, nn.ConvTranspose2d)): - nn.init.kaiming_normal_(m.weight, mode='fan_out', nonlinearity='relu') - elif isinstance(m, nn.Linear): - nn.init.normal_(m.weight, 0, 0.01) - if m.bias is not None: - nn.init.constant_(m.bias, 0) - - def forward(self, x): - x1 = self.conv1(x) - x2 = self.conv1_down(x1) - x2 = F.leaky_relu(x2, 0.1, inplace=True) - x2 = self.conv2(x2) - x2 = self.conv2_up(x2) - x2 = F.leaky_relu(x2, 0.1, inplace=True) - - x1 = F.pad(x1, (-4, -4, -4, -4)) - x3 = self.conv3(x1 + x2) - x3 = F.leaky_relu(x3, 0.1, inplace=True) - z = self.conv_bottom(x3) - return z - - def forward_a(self, x): - x1 = self.conv1(x) - x2 = self.conv1_down(x1) - x2 = F.leaky_relu(x2, 0.1, inplace=True) - x2 = self.conv2.conv(x2) - return x1, x2 - - def forward_b(self, x1, x2): - x2 = self.conv2_up(x2) - x2 = F.leaky_relu(x2, 0.1, inplace=True) - - x1 = F.pad(x1, (-4, -4, -4, -4)) - x3 = self.conv3(x1 + x2) - x3 = F.leaky_relu(x3, 0.1, inplace=True) - z = self.conv_bottom(x3) - return z - - -class UNet1x3(nn.Module): - def __init__(self, in_channels, out_channels, deconv): - super(UNet1x3, self).__init__() - self.conv1 = UNetConv(in_channels, 32, 64, se=False) - self.conv1_down = nn.Conv2d(64, 64, 2, 2, 0) - self.conv2 = UNetConv(64, 128, 64, se=True) - self.conv2_up = nn.ConvTranspose2d(64, 64, 2, 2, 0) - self.conv3 = nn.Conv2d(64, 64, 3, 1, 0) - - if deconv: - self.conv_bottom = nn.ConvTranspose2d(64, out_channels, 5, 3, 2) - else: - self.conv_bottom = nn.Conv2d(64, out_channels, 3, 1, 0) - - for m in self.modules(): - if isinstance(m, (nn.Conv2d, nn.ConvTranspose2d)): - nn.init.kaiming_normal_(m.weight, mode='fan_out', nonlinearity='relu') - elif isinstance(m, nn.Linear): - nn.init.normal_(m.weight, 0, 0.01) - if m.bias is not None: - nn.init.constant_(m.bias, 0) - - def forward(self, x): - x1 = self.conv1(x) - x2 = self.conv1_down(x1) - x2 = F.leaky_relu(x2, 0.1, inplace=True) - x2 = self.conv2(x2) - x2 = self.conv2_up(x2) - x2 = F.leaky_relu(x2, 0.1, inplace=True) - - x1 = F.pad(x1, (-4, -4, -4, -4)) - x3 = self.conv3(x1 + x2) - x3 = F.leaky_relu(x3, 0.1, inplace=True) - z = self.conv_bottom(x3) - return z - - def forward_a(self, x): - x1 = self.conv1(x) - x2 = self.conv1_down(x1) - x2 = F.leaky_relu(x2, 0.1, inplace=True) - x2 = self.conv2.conv(x2) - return x1, x2 - - def forward_b(self, x1, x2): - x2 = self.conv2_up(x2) - x2 = F.leaky_relu(x2, 0.1, inplace=True) - - x1 = F.pad(x1, (-4, -4, -4, -4)) - x3 = self.conv3(x1 + x2) - x3 = F.leaky_relu(x3, 0.1, inplace=True) - z = self.conv_bottom(x3) - return z - - -class UNet2(nn.Module): - def __init__(self, in_channels, out_channels, deconv): - super(UNet2, self).__init__() - - self.conv1 = UNetConv(in_channels, 32, 64, se=False) - self.conv1_down = nn.Conv2d(64, 64, 2, 2, 0) - self.conv2 = UNetConv(64, 64, 128, se=True) - self.conv2_down = nn.Conv2d(128, 128, 2, 2, 0) - self.conv3 = UNetConv(128, 256, 128, se=True) - self.conv3_up = nn.ConvTranspose2d(128, 128, 2, 2, 0) - self.conv4 = UNetConv(128, 64, 64, se=True) - self.conv4_up = nn.ConvTranspose2d(64, 64, 2, 2, 0) - self.conv5 = nn.Conv2d(64, 64, 3, 1, 0) - - if deconv: - self.conv_bottom = 
nn.ConvTranspose2d(64, out_channels, 4, 2, 3) - else: - self.conv_bottom = nn.Conv2d(64, out_channels, 3, 1, 0) - - for m in self.modules(): - if isinstance(m, (nn.Conv2d, nn.ConvTranspose2d)): - nn.init.kaiming_normal_(m.weight, mode='fan_out', nonlinearity='relu') - elif isinstance(m, nn.Linear): - nn.init.normal_(m.weight, 0, 0.01) - if m.bias is not None: - nn.init.constant_(m.bias, 0) - - def forward(self, x): - x1 = self.conv1(x) - x2 = self.conv1_down(x1) - x2 = F.leaky_relu(x2, 0.1, inplace=True) - x2 = self.conv2(x2) - - x3 = self.conv2_down(x2) - x3 = F.leaky_relu(x3, 0.1, inplace=True) - x3 = self.conv3(x3) - x3 = self.conv3_up(x3) - x3 = F.leaky_relu(x3, 0.1, inplace=True) - - x2 = F.pad(x2, (-4, -4, -4, -4)) - x4 = self.conv4(x2 + x3) - x4 = self.conv4_up(x4) - x4 = F.leaky_relu(x4, 0.1, inplace=True) - - x1 = F.pad(x1, (-16, -16, -16, -16)) - x5 = self.conv5(x1 + x4) - x5 = F.leaky_relu(x5, 0.1, inplace=True) - - z = self.conv_bottom(x5) - return z - - def forward_a(self, x): # conv234结尾有se - x1 = self.conv1(x) - x2 = self.conv1_down(x1) - x2 = F.leaky_relu(x2, 0.1, inplace=True) - x2 = self.conv2.conv(x2) - return x1, x2 - - def forward_b(self, x2): # conv234结尾有se - x3 = self.conv2_down(x2) - x3 = F.leaky_relu(x3, 0.1, inplace=True) - x3 = self.conv3.conv(x3) - return x3 - - def forward_c(self, x2, x3): # conv234结尾有se - x3 = self.conv3_up(x3) - x3 = F.leaky_relu(x3, 0.1, inplace=True) - - x2 = F.pad(x2, (-4, -4, -4, -4)) - x4 = self.conv4.conv(x2 + x3) - return x4 - - def forward_d(self, x1, x4): # conv234结尾有se - x4 = self.conv4_up(x4) - x4 = F.leaky_relu(x4, 0.1, inplace=True) - - x1 = F.pad(x1, (-16, -16, -16, -16)) - x5 = self.conv5(x1 + x4) - x5 = F.leaky_relu(x5, 0.1, inplace=True) - - z = self.conv_bottom(x5) - return z - - -class UpCunet2x(nn.Module): # 完美tile,全程无损 - def __init__(self, in_channels=3, out_channels=3): - super(UpCunet2x, self).__init__() - self.unet1 = UNet1(in_channels, out_channels, deconv=True) - self.unet2 = UNet2(in_channels, out_channels, deconv=False) - - def forward(self, x, tile_mode): # 1.7G - n, c, h0, w0 = x.shape - if (tile_mode == 0): # 不tile - ph = ((h0 - 1) // 2 + 1) * 2 - pw = ((w0 - 1) // 2 + 1) * 2 - x = F.pad(x, (18, 18 + pw - w0, 18, 18 + ph - h0), 'reflect') # 需要保证被2整除 - x = self.unet1.forward(x) - x0 = self.unet2.forward(x) - x1 = F.pad(x, (-20, -20, -20, -20)) - x = torch.add(x0, x1) - if (w0 != pw or h0 != ph): x = x[:, :, :h0 * 2, :w0 * 2] - return x - elif (tile_mode == 1): # 对长边减半 - if (w0 >= h0): - crop_size_w = ((w0 - 1) // 4 * 4 + 4) // 2 # 减半后能被2整除,所以要先被4整除 - crop_size_h = (h0 - 1) // 2 * 2 + 2 # 能被2整除 - else: - crop_size_h = ((h0 - 1) // 4 * 4 + 4) // 2 # 减半后能被2整除,所以要先被4整除 - crop_size_w = (w0 - 1) // 2 * 2 + 2 # 能被2整除 - crop_size = (crop_size_h, crop_size_w) # 6.6G - elif (tile_mode == 2): # hw都减半 - crop_size = (((h0 - 1) // 4 * 4 + 4) // 2, ((w0 - 1) // 4 * 4 + 4) // 2) # 5.6G - elif (tile_mode == 3): # hw都三分之一 - crop_size = (((h0 - 1) // 6 * 6 + 6) // 3, ((w0 - 1) // 6 * 6 + 6) // 3) # 4.2G - elif (tile_mode == 4): # hw都四分之一 - crop_size = (((h0 - 1) // 8 * 8 + 8) // 4, ((w0 - 1) // 8 * 8 + 8) // 4) # 3.7G - ph = ((h0 - 1) // crop_size[0] + 1) * crop_size[0] - pw = ((w0 - 1) // crop_size[1] + 1) * crop_size[1] - x = F.pad(x, (18, 18 + pw - w0, 18, 18 + ph - h0), 'reflect') - n, c, h, w = x.shape - se_mean0 = torch.zeros((n, 64, 1, 1)).to(x.device) - if ("Half" in x.type()): - se_mean0 = se_mean0.half() - n_patch = 0 - tmp_dict = {} - opt_res_dict = {} - for i in range(0, h - 36, crop_size[0]): - tmp_dict[i] = {} - for j 
in range(0, w - 36, crop_size[1]): - x_crop = x[:, :, i:i + crop_size[0] + 36, j:j + crop_size[1] + 36] - n, c1, h1, w1 = x_crop.shape - tmp0, x_crop = self.unet1.forward_a(x_crop) - if ("Half" in x.type()): # torch.HalfTensor/torch.cuda.HalfTensor - tmp_se_mean = torch.mean(x_crop.float(), dim=(2, 3), keepdim=True).half() - else: - tmp_se_mean = torch.mean(x_crop, dim=(2, 3), keepdim=True) - se_mean0 += tmp_se_mean - n_patch += 1 - tmp_dict[i][j] = (tmp0, x_crop) - se_mean0 /= n_patch - se_mean1 = torch.zeros((n, 128, 1, 1)).to(x.device) # 64#128#128#64 - if ("Half" in x.type()): - se_mean1 = se_mean1.half() - for i in range(0, h - 36, crop_size[0]): - for j in range(0, w - 36, crop_size[1]): - tmp0, x_crop = tmp_dict[i][j] - x_crop = self.unet1.conv2.seblock.forward_mean(x_crop, se_mean0) - opt_unet1 = self.unet1.forward_b(tmp0, x_crop) - tmp_x1, tmp_x2 = self.unet2.forward_a(opt_unet1) - if ("Half" in x.type()): # torch.HalfTensor/torch.cuda.HalfTensor - tmp_se_mean = torch.mean(tmp_x2.float(), dim=(2, 3), keepdim=True).half() - else: - tmp_se_mean = torch.mean(tmp_x2, dim=(2, 3), keepdim=True) - se_mean1 += tmp_se_mean - tmp_dict[i][j] = (opt_unet1, tmp_x1, tmp_x2) - se_mean1 /= n_patch - se_mean0 = torch.zeros((n, 128, 1, 1)).to(x.device) # 64#128#128#64 - if ("Half" in x.type()): - se_mean0 = se_mean0.half() - for i in range(0, h - 36, crop_size[0]): - for j in range(0, w - 36, crop_size[1]): - opt_unet1, tmp_x1, tmp_x2 = tmp_dict[i][j] - tmp_x2 = self.unet2.conv2.seblock.forward_mean(tmp_x2, se_mean1) - tmp_x3 = self.unet2.forward_b(tmp_x2) - if ("Half" in x.type()): # torch.HalfTensor/torch.cuda.HalfTensor - tmp_se_mean = torch.mean(tmp_x3.float(), dim=(2, 3), keepdim=True).half() - else: - tmp_se_mean = torch.mean(tmp_x3, dim=(2, 3), keepdim=True) - se_mean0 += tmp_se_mean - tmp_dict[i][j] = (opt_unet1, tmp_x1, tmp_x2, tmp_x3) - se_mean0 /= n_patch - se_mean1 = torch.zeros((n, 64, 1, 1)).to(x.device) # 64#128#128#64 - if ("Half" in x.type()): - se_mean1 = se_mean1.half() - for i in range(0, h - 36, crop_size[0]): - for j in range(0, w - 36, crop_size[1]): - opt_unet1, tmp_x1, tmp_x2, tmp_x3 = tmp_dict[i][j] - tmp_x3 = self.unet2.conv3.seblock.forward_mean(tmp_x3, se_mean0) - tmp_x4 = self.unet2.forward_c(tmp_x2, tmp_x3) - if ("Half" in x.type()): # torch.HalfTensor/torch.cuda.HalfTensor - tmp_se_mean = torch.mean(tmp_x4.float(), dim=(2, 3), keepdim=True).half() - else: - tmp_se_mean = torch.mean(tmp_x4, dim=(2, 3), keepdim=True) - se_mean1 += tmp_se_mean - tmp_dict[i][j] = (opt_unet1, tmp_x1, tmp_x4) - se_mean1 /= n_patch - for i in range(0, h - 36, crop_size[0]): - opt_res_dict[i] = {} - for j in range(0, w - 36, crop_size[1]): - opt_unet1, tmp_x1, tmp_x4 = tmp_dict[i][j] - tmp_x4 = self.unet2.conv4.seblock.forward_mean(tmp_x4, se_mean1) - x0 = self.unet2.forward_d(tmp_x1, tmp_x4) - x1 = F.pad(opt_unet1, (-20, -20, -20, -20)) - x_crop = torch.add(x0, x1) # x0是unet2的最终输出 - opt_res_dict[i][j] = x_crop - del tmp_dict - torch.cuda.empty_cache() - res = torch.zeros((n, c, h * 2 - 72, w * 2 - 72)).to(x.device) - if ("Half" in x.type()): - res = res.half() - for i in range(0, h - 36, crop_size[0]): - for j in range(0, w - 36, crop_size[1]): - res[:, :, i * 2:i * 2 + h1 * 2 - 72, j * 2:j * 2 + w1 * 2 - 72] = opt_res_dict[i][j] - del opt_res_dict - torch.cuda.empty_cache() - if (w0 != pw or h0 != ph): res = res[:, :, :h0 * 2, :w0 * 2] - return res # - - -class UpCunet3x(nn.Module): # 完美tile,全程无损 - def __init__(self, in_channels=3, out_channels=3): - super(UpCunet3x, self).__init__() - 
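        # Same residual layout as UpCunet2x: unet1 upsamples (UNet1x3 uses a
        # stride-3 transposed convolution for the 3x factor), unet2 refines at
        # the upsampled resolution, and forward() adds unet2's output to a
        # center-cropped copy of unet1's output.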
self.unet1 = UNet1x3(in_channels, out_channels, deconv=True) - self.unet2 = UNet2(in_channels, out_channels, deconv=False) - - def forward(self, x, tile_mode): # 1.7G - n, c, h0, w0 = x.shape - if (tile_mode == 0): # 不tile - ph = ((h0 - 1) // 4 + 1) * 4 - pw = ((w0 - 1) // 4 + 1) * 4 - x = F.pad(x, (14, 14 + pw - w0, 14, 14 + ph - h0), 'reflect') # 需要保证被2整除 - x = self.unet1.forward(x) - x0 = self.unet2.forward(x) - x1 = F.pad(x, (-20, -20, -20, -20)) - x = torch.add(x0, x1) - if (w0 != pw or h0 != ph): x = x[:, :, :h0 * 3, :w0 * 3] - return x - elif (tile_mode == 1): # 对长边减半 - if (w0 >= h0): - crop_size_w = ((w0 - 1) // 8 * 8 + 8) // 2 # 减半后能被4整除,所以要先被8整除 - crop_size_h = (h0 - 1) // 4 * 4 + 4 # 能被4整除 - else: - crop_size_h = ((h0 - 1) // 8 * 8 + 8) // 2 # 减半后能被4整除,所以要先被8整除 - crop_size_w = (w0 - 1) // 4 * 4 + 4 # 能被4整除 - crop_size = (crop_size_h, crop_size_w) # 6.6G - elif (tile_mode == 2): # hw都减半 - crop_size = (((h0 - 1) // 8 * 8 + 8) // 2, ((w0 - 1) // 8 * 8 + 8) // 2) # 5.6G - elif (tile_mode == 3): # hw都三分之一 - crop_size = (((h0 - 1) // 12 * 12 + 12) // 3, ((w0 - 1) // 12 * 12 + 12) // 3) # 4.2G - elif (tile_mode == 4): # hw都四分之一 - crop_size = (((h0 - 1) // 16 * 16 + 16) // 4, ((w0 - 1) // 16 * 16 + 16) // 4) # 3.7G - ph = ((h0 - 1) // crop_size[0] + 1) * crop_size[0] - pw = ((w0 - 1) // crop_size[1] + 1) * crop_size[1] - x = F.pad(x, (14, 14 + pw - w0, 14, 14 + ph - h0), 'reflect') - n, c, h, w = x.shape - se_mean0 = torch.zeros((n, 64, 1, 1)).to(x.device) - if ("Half" in x.type()): - se_mean0 = se_mean0.half() - n_patch = 0 - tmp_dict = {} - opt_res_dict = {} - for i in range(0, h - 28, crop_size[0]): - tmp_dict[i] = {} - for j in range(0, w - 28, crop_size[1]): - x_crop = x[:, :, i:i + crop_size[0] + 28, j:j + crop_size[1] + 28] - n, c1, h1, w1 = x_crop.shape - tmp0, x_crop = self.unet1.forward_a(x_crop) - if ("Half" in x.type()): # torch.HalfTensor/torch.cuda.HalfTensor - tmp_se_mean = torch.mean(x_crop.float(), dim=(2, 3), keepdim=True).half() - else: - tmp_se_mean = torch.mean(x_crop, dim=(2, 3), keepdim=True) - se_mean0 += tmp_se_mean - n_patch += 1 - tmp_dict[i][j] = (tmp0, x_crop) - se_mean0 /= n_patch - se_mean1 = torch.zeros((n, 128, 1, 1)).to(x.device) # 64#128#128#64 - if ("Half" in x.type()): - se_mean1 = se_mean1.half() - for i in range(0, h - 28, crop_size[0]): - for j in range(0, w - 28, crop_size[1]): - tmp0, x_crop = tmp_dict[i][j] - x_crop = self.unet1.conv2.seblock.forward_mean(x_crop, se_mean0) - opt_unet1 = self.unet1.forward_b(tmp0, x_crop) - tmp_x1, tmp_x2 = self.unet2.forward_a(opt_unet1) - if ("Half" in x.type()): # torch.HalfTensor/torch.cuda.HalfTensor - tmp_se_mean = torch.mean(tmp_x2.float(), dim=(2, 3), keepdim=True).half() - else: - tmp_se_mean = torch.mean(tmp_x2, dim=(2, 3), keepdim=True) - se_mean1 += tmp_se_mean - tmp_dict[i][j] = (opt_unet1, tmp_x1, tmp_x2) - se_mean1 /= n_patch - se_mean0 = torch.zeros((n, 128, 1, 1)).to(x.device) # 64#128#128#64 - if ("Half" in x.type()): - se_mean0 = se_mean0.half() - for i in range(0, h - 28, crop_size[0]): - for j in range(0, w - 28, crop_size[1]): - opt_unet1, tmp_x1, tmp_x2 = tmp_dict[i][j] - tmp_x2 = self.unet2.conv2.seblock.forward_mean(tmp_x2, se_mean1) - tmp_x3 = self.unet2.forward_b(tmp_x2) - if ("Half" in x.type()): # torch.HalfTensor/torch.cuda.HalfTensor - tmp_se_mean = torch.mean(tmp_x3.float(), dim=(2, 3), keepdim=True).half() - else: - tmp_se_mean = torch.mean(tmp_x3, dim=(2, 3), keepdim=True) - se_mean0 += tmp_se_mean - tmp_dict[i][j] = (opt_unet1, tmp_x1, tmp_x2, tmp_x3) - se_mean0 /= n_patch - 
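        # Each accumulate/divide pass above computes one SE block's mean
        # feature over *all* tiles before any tile is pushed through that
        # block; `seblock.forward_mean` then reuses the shared mean. Because
        # the SE statistics are global rather than per-tile, tiled inference
        # reproduces the untiled result exactly; this is the lossless tiling
        # promised in the class comment.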
se_mean1 = torch.zeros((n, 64, 1, 1)).to(x.device) # 64#128#128#64 - if ("Half" in x.type()): - se_mean1 = se_mean1.half() - for i in range(0, h - 28, crop_size[0]): - for j in range(0, w - 28, crop_size[1]): - opt_unet1, tmp_x1, tmp_x2, tmp_x3 = tmp_dict[i][j] - tmp_x3 = self.unet2.conv3.seblock.forward_mean(tmp_x3, se_mean0) - tmp_x4 = self.unet2.forward_c(tmp_x2, tmp_x3) - if ("Half" in x.type()): # torch.HalfTensor/torch.cuda.HalfTensor - tmp_se_mean = torch.mean(tmp_x4.float(), dim=(2, 3), keepdim=True).half() - else: - tmp_se_mean = torch.mean(tmp_x4, dim=(2, 3), keepdim=True) - se_mean1 += tmp_se_mean - tmp_dict[i][j] = (opt_unet1, tmp_x1, tmp_x4) - se_mean1 /= n_patch - for i in range(0, h - 28, crop_size[0]): - opt_res_dict[i] = {} - for j in range(0, w - 28, crop_size[1]): - opt_unet1, tmp_x1, tmp_x4 = tmp_dict[i][j] - tmp_x4 = self.unet2.conv4.seblock.forward_mean(tmp_x4, se_mean1) - x0 = self.unet2.forward_d(tmp_x1, tmp_x4) - x1 = F.pad(opt_unet1, (-20, -20, -20, -20)) - x_crop = torch.add(x0, x1) # x0是unet2的最终输出 - opt_res_dict[i][j] = x_crop # - del tmp_dict - torch.cuda.empty_cache() - res = torch.zeros((n, c, h * 3 - 84, w * 3 - 84)).to(x.device) - if ("Half" in x.type()): - res = res.half() - for i in range(0, h - 28, crop_size[0]): - for j in range(0, w - 28, crop_size[1]): - res[:, :, i * 3:i * 3 + h1 * 3 - 84, j * 3:j * 3 + w1 * 3 - 84] = opt_res_dict[i][j] - del opt_res_dict - torch.cuda.empty_cache() - if (w0 != pw or h0 != ph): res = res[:, :, :h0 * 3, :w0 * 3] - return res - - -class UpCunet4x(nn.Module): # 完美tile,全程无损 - def __init__(self, in_channels=3, out_channels=3): - super(UpCunet4x, self).__init__() - self.unet1 = UNet1(in_channels, 64, deconv=True) - self.unet2 = UNet2(64, 64, deconv=False) - self.ps = nn.PixelShuffle(2) - self.conv_final = nn.Conv2d(64, 12, 3, 1, padding=0, bias=True) - - def forward(self, x, tile_mode): - n, c, h0, w0 = x.shape - x00 = x - if (tile_mode == 0): # 不tile - ph = ((h0 - 1) // 2 + 1) * 2 - pw = ((w0 - 1) // 2 + 1) * 2 - x = F.pad(x, (19, 19 + pw - w0, 19, 19 + ph - h0), 'reflect') # 需要保证被2整除 - x = self.unet1.forward(x) - x0 = self.unet2.forward(x) - x1 = F.pad(x, (-20, -20, -20, -20)) - x = torch.add(x0, x1) - x = self.conv_final(x) - x = F.pad(x, (-1, -1, -1, -1)) - x = self.ps(x) - if (w0 != pw or h0 != ph): x = x[:, :, :h0 * 4, :w0 * 4] - x += F.interpolate(x00, scale_factor=4, mode='nearest') - return x - elif (tile_mode == 1): # 对长边减半 - if (w0 >= h0): - crop_size_w = ((w0 - 1) // 4 * 4 + 4) // 2 # 减半后能被2整除,所以要先被4整除 - crop_size_h = (h0 - 1) // 2 * 2 + 2 # 能被2整除 - else: - crop_size_h = ((h0 - 1) // 4 * 4 + 4) // 2 # 减半后能被2整除,所以要先被4整除 - crop_size_w = (w0 - 1) // 2 * 2 + 2 # 能被2整除 - crop_size = (crop_size_h, crop_size_w) # 6.6G - elif (tile_mode == 2): # hw都减半 - crop_size = (((h0 - 1) // 4 * 4 + 4) // 2, ((w0 - 1) // 4 * 4 + 4) // 2) # 5.6G - elif (tile_mode == 3): # hw都三分之一 - crop_size = (((h0 - 1) // 6 * 6 + 6) // 3, ((w0 - 1) // 6 * 6 + 6) // 3) # 4.1G - elif (tile_mode == 4): # hw都四分之一 - crop_size = (((h0 - 1) // 8 * 8 + 8) // 4, ((w0 - 1) // 8 * 8 + 8) // 4) # 3.7G - ph = ((h0 - 1) // crop_size[0] + 1) * crop_size[0] - pw = ((w0 - 1) // crop_size[1] + 1) * crop_size[1] - x = F.pad(x, (19, 19 + pw - w0, 19, 19 + ph - h0), 'reflect') - n, c, h, w = x.shape - se_mean0 = torch.zeros((n, 64, 1, 1)).to(x.device) - if ("Half" in x.type()): - se_mean0 = se_mean0.half() - n_patch = 0 - tmp_dict = {} - opt_res_dict = {} - for i in range(0, h - 38, crop_size[0]): - tmp_dict[i] = {} - for j in range(0, w - 38, crop_size[1]): - x_crop = 
x[:, :, i:i + crop_size[0] + 38, j:j + crop_size[1] + 38] - n, c1, h1, w1 = x_crop.shape - tmp0, x_crop = self.unet1.forward_a(x_crop) - if ("Half" in x.type()): # torch.HalfTensor/torch.cuda.HalfTensor - tmp_se_mean = torch.mean(x_crop.float(), dim=(2, 3), keepdim=True).half() - else: - tmp_se_mean = torch.mean(x_crop, dim=(2, 3), keepdim=True) - se_mean0 += tmp_se_mean - n_patch += 1 - tmp_dict[i][j] = (tmp0, x_crop) - se_mean0 /= n_patch - se_mean1 = torch.zeros((n, 128, 1, 1)).to(x.device) # 64#128#128#64 - if ("Half" in x.type()): - se_mean1 = se_mean1.half() - for i in range(0, h - 38, crop_size[0]): - for j in range(0, w - 38, crop_size[1]): - tmp0, x_crop = tmp_dict[i][j] - x_crop = self.unet1.conv2.seblock.forward_mean(x_crop, se_mean0) - opt_unet1 = self.unet1.forward_b(tmp0, x_crop) - tmp_x1, tmp_x2 = self.unet2.forward_a(opt_unet1) - if ("Half" in x.type()): # torch.HalfTensor/torch.cuda.HalfTensor - tmp_se_mean = torch.mean(tmp_x2.float(), dim=(2, 3), keepdim=True).half() - else: - tmp_se_mean = torch.mean(tmp_x2, dim=(2, 3), keepdim=True) - se_mean1 += tmp_se_mean - tmp_dict[i][j] = (opt_unet1, tmp_x1, tmp_x2) - se_mean1 /= n_patch - se_mean0 = torch.zeros((n, 128, 1, 1)).to(x.device) # 64#128#128#64 - if ("Half" in x.type()): - se_mean0 = se_mean0.half() - for i in range(0, h - 38, crop_size[0]): - for j in range(0, w - 38, crop_size[1]): - opt_unet1, tmp_x1, tmp_x2 = tmp_dict[i][j] - tmp_x2 = self.unet2.conv2.seblock.forward_mean(tmp_x2, se_mean1) - tmp_x3 = self.unet2.forward_b(tmp_x2) - if ("Half" in x.type()): # torch.HalfTensor/torch.cuda.HalfTensor - tmp_se_mean = torch.mean(tmp_x3.float(), dim=(2, 3), keepdim=True).half() - else: - tmp_se_mean = torch.mean(tmp_x3, dim=(2, 3), keepdim=True) - se_mean0 += tmp_se_mean - tmp_dict[i][j] = (opt_unet1, tmp_x1, tmp_x2, tmp_x3) - se_mean0 /= n_patch - se_mean1 = torch.zeros((n, 64, 1, 1)).to(x.device) # 64#128#128#64 - if ("Half" in x.type()): - se_mean1 = se_mean1.half() - for i in range(0, h - 38, crop_size[0]): - for j in range(0, w - 38, crop_size[1]): - opt_unet1, tmp_x1, tmp_x2, tmp_x3 = tmp_dict[i][j] - tmp_x3 = self.unet2.conv3.seblock.forward_mean(tmp_x3, se_mean0) - tmp_x4 = self.unet2.forward_c(tmp_x2, tmp_x3) - if ("Half" in x.type()): # torch.HalfTensor/torch.cuda.HalfTensor - tmp_se_mean = torch.mean(tmp_x4.float(), dim=(2, 3), keepdim=True).half() - else: - tmp_se_mean = torch.mean(tmp_x4, dim=(2, 3), keepdim=True) - se_mean1 += tmp_se_mean - tmp_dict[i][j] = (opt_unet1, tmp_x1, tmp_x4) - se_mean1 /= n_patch - for i in range(0, h - 38, crop_size[0]): - opt_res_dict[i] = {} - for j in range(0, w - 38, crop_size[1]): - opt_unet1, tmp_x1, tmp_x4 = tmp_dict[i][j] - tmp_x4 = self.unet2.conv4.seblock.forward_mean(tmp_x4, se_mean1) - x0 = self.unet2.forward_d(tmp_x1, tmp_x4) - x1 = F.pad(opt_unet1, (-20, -20, -20, -20)) - x_crop = torch.add(x0, x1) # x0是unet2的最终输出 - x_crop = self.conv_final(x_crop) - x_crop = F.pad(x_crop, (-1, -1, -1, -1)) - x_crop = self.ps(x_crop) - opt_res_dict[i][j] = x_crop - del tmp_dict - torch.cuda.empty_cache() - res = torch.zeros((n, c, h * 4 - 152, w * 4 - 152)).to(x.device) - if ("Half" in x.type()): - res = res.half() - for i in range(0, h - 38, crop_size[0]): - for j in range(0, w - 38, crop_size[1]): - # print(opt_res_dict[i][j].shape,res[:, :, i * 4:i * 4 + h1 * 4 - 144, j * 4:j * 4 + w1 * 4 - 144].shape) - res[:, :, i * 4:i * 4 + h1 * 4 - 152, j * 4:j * 4 + w1 * 4 - 152] = opt_res_dict[i][j] - del opt_res_dict - torch.cuda.empty_cache() - if (w0 != pw or h0 != ph): res = res[:, :, :h0 
* 4, :w0 * 4]
-        res += F.interpolate(x00, scale_factor=4, mode='nearest')
-        return res  #
-
-
-class RealWaifuUpScaler(object):
-    def __init__(self, scale, weight_path, half, device):
-        weight = torch.load(weight_path, map_location="cpu")
-        self.model = eval("UpCunet%sx" % scale)()
-        if (half == True):
-            self.model = self.model.half().to(device)
-        else:
-            self.model = self.model.to(device)
-        self.model.load_state_dict(weight, strict=True)
-        self.model.eval()
-        self.half = half
-        self.device = device
-
-    def np2tensor(self, np_frame):
-        if (self.half == False):
-            return torch.from_numpy(np.transpose(np_frame, (2, 0, 1))).unsqueeze(0).to(self.device).float() / 255
-        else:
-            return torch.from_numpy(np.transpose(np_frame, (2, 0, 1))).unsqueeze(0).to(self.device).half() / 255
-
-    def tensor2np(self, tensor):
-        if (self.half == False):
-            return (
-                np.transpose((tensor.data.squeeze() * 255.0).round().clamp_(0, 255).byte().cpu().numpy(), (1, 2, 0)))
-        else:
-            return (np.transpose((tensor.data.squeeze().float() * 255.0).round().clamp_(0, 255).byte().cpu().numpy(),
-                                 (1, 2, 0)))
-
-    def __call__(self, frame, tile_mode):
-        with torch.no_grad():
-            tensor = self.np2tensor(frame)
-            result = self.tensor2np(self.model(tensor, tile_mode))
-        return result
-
-
-if __name__ == "__main__":
-    ###########inference_img
-    import time, cv2, sys
-    from time import time as ttime
-
-    for weight_path, scale in [("weights_v3/up2x-latest-denoise3x.pth", 2), ("weights_v3/up3x-latest-denoise3x.pth", 3),
-                               ("weights_v3/up4x-latest-denoise3x.pth", 4)]:
-        for tile_mode in [0, 1, 2, 3, 4]:
-            upscaler = RealWaifuUpScaler(scale, weight_path, half=True, device="cuda:0")
-            input_dir = "%s/input_dir1" % root_path
-            output_dir = "%s/opt-dir-all-test" % root_path
-            os.makedirs(output_dir, exist_ok=True)
-            for name in os.listdir(input_dir):
-                print(name)
-                tmp = name.split(".")
-                inp_path = os.path.join(input_dir, name)
-                suffix = tmp[-1]
-                prefix = ".".join(tmp[:-1])
-                tmp_path = os.path.join(root_path, "tmp", "%s.%s" % (int(time.time() * 1000000), suffix))
-                print(inp_path, tmp_path)
-                # Link instead of copying, so non-ASCII (e.g. Chinese) paths are supported.
-                # os.link(inp_path, tmp_path)  # on Windows, use a hard link
-                os.symlink(inp_path, tmp_path)  # on Linux, use a symlink
-                frame = cv2.imread(tmp_path)[:, :, [2, 1, 0]]
-                t0 = ttime()
-                result = upscaler(frame, tile_mode=tile_mode)[:, :, ::-1]
-                t1 = ttime()
-                print(prefix, "done", t1 - t0)
-                tmp_opt_path = os.path.join(root_path, "tmp", "%s.%s" % (int(time.time() * 1000000), suffix))
-                cv2.imwrite(tmp_opt_path, result)
-                n = 0
-                while (1):
-                    if (n == 0):
-                        suffix = "_%sx_tile%s.png" % (scale, tile_mode)
-                    else:
-                        suffix = "_%sx_tile%s_%s.png" % (scale, tile_mode, n)  #
-                    if (os.path.exists(os.path.join(output_dir, prefix + suffix)) == False):
-                        break
-                    else:
-                        n += 1
-                final_opt_path = os.path.join(output_dir, prefix + suffix)
-                os.rename(tmp_opt_path, final_opt_path)
-                os.remove(tmp_path)
diff --git a/spaces/Volkopat/SegmentAnythingxGroundingDINO/groundingdino/models/GroundingDINO/csrc/MsDeformAttn/ms_deform_attn_cpu.h b/spaces/Volkopat/SegmentAnythingxGroundingDINO/groundingdino/models/GroundingDINO/csrc/MsDeformAttn/ms_deform_attn_cpu.h
deleted file mode 100644
index b2b88e8c46f19b6db0933163e57ccdb51180f517..0000000000000000000000000000000000000000
--- a/spaces/Volkopat/SegmentAnythingxGroundingDINO/groundingdino/models/GroundingDINO/csrc/MsDeformAttn/ms_deform_attn_cpu.h
+++ /dev/null
@@ -1,35 +0,0 @@
-/*!
-**************************************************************************************************
-* Deformable DETR
-* Copyright (c) 2020 SenseTime.
All Rights Reserved.
-* Licensed under the Apache License, Version 2.0 [see LICENSE for details]
-**************************************************************************************************
-* Modified from https://github.com/chengdazhi/Deformable-Convolution-V2-PyTorch/tree/pytorch_1.0.0
-**************************************************************************************************
-*/
-
-#pragma once
-#include <torch/extension.h>
-
-namespace groundingdino {
-
-at::Tensor
-ms_deform_attn_cpu_forward(
-    const at::Tensor &value,
-    const at::Tensor &spatial_shapes,
-    const at::Tensor &level_start_index,
-    const at::Tensor &sampling_loc,
-    const at::Tensor &attn_weight,
-    const int im2col_step);
-
-std::vector<at::Tensor>
-ms_deform_attn_cpu_backward(
-    const at::Tensor &value,
-    const at::Tensor &spatial_shapes,
-    const at::Tensor &level_start_index,
-    const at::Tensor &sampling_loc,
-    const at::Tensor &attn_weight,
-    const at::Tensor &grad_output,
-    const int im2col_step);
-
-} // namespace groundingdino
diff --git a/spaces/XS-1/BW_IMAGE_VIDEO_COLORIZER/fastai/utils/show_install.py b/spaces/XS-1/BW_IMAGE_VIDEO_COLORIZER/fastai/utils/show_install.py
deleted file mode 100644
index b9e6cc3be84ed684ec6984b1a7cfe7b673a72c8d..0000000000000000000000000000000000000000
--- a/spaces/XS-1/BW_IMAGE_VIDEO_COLORIZER/fastai/utils/show_install.py
+++ /dev/null
@@ -1,8 +0,0 @@
-from ..script import *
-from .collect_env import *
-
-# Temporary POC for module-based script
-@call_parse
-def main(show_nvidia_smi:Param(opt=False, nargs='?', type=bool)=False):
-    return show_install(show_nvidia_smi)
-
diff --git a/spaces/Yan233th/so-vits-svc-models/vdecoder/hifigan/utils.py b/spaces/Yan233th/so-vits-svc-models/vdecoder/hifigan/utils.py
deleted file mode 100644
index 9c93c996d3cc73c30d71c1fc47056e4230f35c0f..0000000000000000000000000000000000000000
--- a/spaces/Yan233th/so-vits-svc-models/vdecoder/hifigan/utils.py
+++ /dev/null
@@ -1,68 +0,0 @@
-import glob
-import os
-import matplotlib
-import torch
-from torch.nn.utils import weight_norm
-# matplotlib.use("Agg")
-import matplotlib.pylab as plt
-
-
-def plot_spectrogram(spectrogram):
-    fig, ax = plt.subplots(figsize=(10, 2))
-    im = ax.imshow(spectrogram, aspect="auto", origin="lower",
-                   interpolation='none')
-    plt.colorbar(im, ax=ax)
-
-    fig.canvas.draw()
-    plt.close()
-
-    return fig
-
-
-def init_weights(m, mean=0.0, std=0.01):
-    classname = m.__class__.__name__
-    if classname.find("Conv") != -1:
-        m.weight.data.normal_(mean, std)
-
-
-def apply_weight_norm(m):
-    classname = m.__class__.__name__
-    if classname.find("Conv") != -1:
-        weight_norm(m)
-
-
-def get_padding(kernel_size, dilation=1):
-    return int((kernel_size*dilation - dilation)/2)
-
-
-def load_checkpoint(filepath, device):
-    assert os.path.isfile(filepath)
-    print("Loading '{}'".format(filepath))
-    checkpoint_dict = torch.load(filepath, map_location=device)
-    print("Complete.")
-    return checkpoint_dict
-
-
-def save_checkpoint(filepath, obj):
-    print("Saving checkpoint to {}".format(filepath))
-    torch.save(obj, filepath)
-    print("Complete.")
-
-
-def del_old_checkpoints(cp_dir, prefix, n_models=2):
-    pattern = os.path.join(cp_dir, prefix + '????????')
-    cp_list = glob.glob(pattern)  # get checkpoint paths
-    cp_list = sorted(cp_list)  # sort by iteration
-    if len(cp_list) > n_models:  # if more than n_models models are found
-        for cp in cp_list[:-n_models]:  # delete the oldest models other than the latest n_models
-            open(cp, 'w').close()  # empty file contents
-            os.unlink(cp)  # delete file (moved to trash when using Colab)
-
-
-def 
scan_checkpoint(cp_dir, prefix): - pattern = os.path.join(cp_dir, prefix + '????????') - cp_list = glob.glob(pattern) - if len(cp_list) == 0: - return None - return sorted(cp_list)[-1] - diff --git a/spaces/Yuliang/ECON/lib/pymafx/models/maf_extractor.py b/spaces/Yuliang/ECON/lib/pymafx/models/maf_extractor.py deleted file mode 100644 index ffe4e73427e30848798df2f57e835a8b10ae2934..0000000000000000000000000000000000000000 --- a/spaces/Yuliang/ECON/lib/pymafx/models/maf_extractor.py +++ /dev/null @@ -1,272 +0,0 @@ -# This script is borrowed and extended from https://github.com/shunsukesaito/PIFu/blob/master/lib/model/SurfaceClassifier.py - -import logging - -import numpy as np -import scipy -import torch -import torch.nn as nn -import torch.nn.functional as F - -from lib.pymafx.core import path_config -from lib.pymafx.utils.geometry import projection - -logger = logging.getLogger(__name__) - -from lib.pymafx.utils.imutils import j2d_processing - -from .transformers.net_utils import PosEnSine -from .transformers.transformer_basics import OurMultiheadAttention - - -class TransformerDecoderUnit(nn.Module): - def __init__( - self, feat_dim, attri_dim=0, n_head=8, pos_en_flag=True, attn_type='softmax', P=None - ): - super(TransformerDecoderUnit, self).__init__() - self.feat_dim = feat_dim - self.attn_type = attn_type - self.pos_en_flag = pos_en_flag - self.P = P - - assert attri_dim == 0 - if self.pos_en_flag: - pe_dim = 10 - self.pos_en = PosEnSine(pe_dim) - else: - pe_dim = 0 - self.attn = OurMultiheadAttention( - feat_dim + attri_dim + pe_dim * 3, feat_dim + pe_dim * 3, feat_dim, n_head - ) # cross-attention - - self.linear1 = nn.Conv2d(self.feat_dim, self.feat_dim, 1) - self.linear2 = nn.Conv2d(self.feat_dim, self.feat_dim, 1) - self.activation = nn.ReLU(inplace=True) - - self.norm = nn.BatchNorm2d(self.feat_dim) - - def forward(self, q, k, v, pos=None): - if self.pos_en_flag: - q_pos_embed = self.pos_en(q, pos) - k_pos_embed = self.pos_en(k) - - q = torch.cat([q, q_pos_embed], dim=1) - k = torch.cat([k, k_pos_embed], dim=1) - # else: - # q_pos_embed = 0 - # k_pos_embed = 0 - - # cross-multi-head attention - out = self.attn(q=q, k=k, v=v, attn_type=self.attn_type, P=self.P)[0] - - # feed forward - out2 = self.linear2(self.activation(self.linear1(out))) - out = out + out2 - out = self.norm(out) - - return out - - -class Mesh_Sampler(nn.Module): - ''' Mesh Up/Down-sampling - ''' - def __init__(self, type='smpl', level=2, device=torch.device('cuda'), option=None): - super().__init__() - - # downsample SMPL mesh and assign part labels - if type == 'smpl': - # from https://github.com/nkolot/GraphCMR/blob/master/data/mesh_downsampling.npz - smpl_mesh_graph = np.load( - path_config.SMPL_DOWNSAMPLING, allow_pickle=True, encoding='latin1' - ) - - A = smpl_mesh_graph['A'] - U = smpl_mesh_graph['U'] - D = smpl_mesh_graph['D'] # shape: (2,) - elif type == 'mano': - # from https://github.com/microsoft/MeshGraphormer/blob/main/src/modeling/data/mano_downsampling.npz - mano_mesh_graph = np.load( - path_config.MANO_DOWNSAMPLING, allow_pickle=True, encoding='latin1' - ) - - A = mano_mesh_graph['A'] - U = mano_mesh_graph['U'] - D = mano_mesh_graph['D'] # shape: (2,) - - # downsampling - ptD = [] - for lv in range(len(D)): - d = scipy.sparse.coo_matrix(D[lv]) - i = torch.LongTensor(np.array([d.row, d.col])) - v = torch.FloatTensor(d.data) - ptD.append(torch.sparse.FloatTensor(i, v, d.shape)) - - # downsampling mapping from 6890 points to 431 points - # ptD[0].to_dense() - Size: [1723, 6890] , [195, 778] - # 
ptD[1].to_dense() - Size: [431, 1723] , [49, 195] - if level == 2: - Dmap = torch.matmul(ptD[1].to_dense(), ptD[0].to_dense()) # 6890 -> 431 - elif level == 1: - Dmap = ptD[0].to_dense() # - self.register_buffer('Dmap', Dmap) - - # upsampling - ptU = [] - for lv in range(len(U)): - d = scipy.sparse.coo_matrix(U[lv]) - i = torch.LongTensor(np.array([d.row, d.col])) - v = torch.FloatTensor(d.data) - ptU.append(torch.sparse.FloatTensor(i, v, d.shape)) - - # upsampling mapping from 431 points to 6890 points - # ptU[0].to_dense() - Size: [6890, 1723] - # ptU[1].to_dense() - Size: [1723, 431] - if level == 2: - Umap = torch.matmul(ptU[0].to_dense(), ptU[1].to_dense()) # 431 -> 6890 - elif level == 1: - Umap = ptU[0].to_dense() # - self.register_buffer('Umap', Umap) - - def downsample(self, x): - return torch.matmul(self.Dmap.unsqueeze(0), x) # [B, 431, 3] - - def upsample(self, x): - return torch.matmul(self.Umap.unsqueeze(0), x) # [B, 6890, 3] - - def forward(self, x, mode='downsample'): - if mode == 'downsample': - return self.downsample(x) - elif mode == 'upsample': - return self.upsample(x) - - -class MAF_Extractor(nn.Module): - ''' Mesh-aligned Feature Extrator - As discussed in the paper, we extract mesh-aligned features based on 2D projection of the mesh vertices. - The features extrated from spatial feature maps will go through a MLP for dimension reduction. - ''' - def __init__( - self, filter_channels, device=torch.device('cuda'), iwp_cam_mode=True, option=None - ): - super().__init__() - - self.device = device - self.filters = [] - self.num_views = 1 - self.last_op = nn.ReLU(True) - - self.iwp_cam_mode = iwp_cam_mode - - for l in range(0, len(filter_channels) - 1): - if 0 != l: - self.filters.append( - nn.Conv1d(filter_channels[l] + filter_channels[0], filter_channels[l + 1], 1) - ) - else: - self.filters.append(nn.Conv1d(filter_channels[l], filter_channels[l + 1], 1)) - - self.add_module("conv%d" % l, self.filters[l]) - - # downsample SMPL mesh and assign part labels - # from https://github.com/nkolot/GraphCMR/blob/master/data/mesh_downsampling.npz - smpl_mesh_graph = np.load( - path_config.SMPL_DOWNSAMPLING, allow_pickle=True, encoding='latin1' - ) - - A = smpl_mesh_graph['A'] - U = smpl_mesh_graph['U'] - D = smpl_mesh_graph['D'] # shape: (2,) - - # downsampling - ptD = [] - for level in range(len(D)): - d = scipy.sparse.coo_matrix(D[level]) - i = torch.LongTensor(np.array([d.row, d.col])) - v = torch.FloatTensor(d.data) - ptD.append(torch.sparse.FloatTensor(i, v, d.shape)) - - # downsampling mapping from 6890 points to 431 points - # ptD[0].to_dense() - Size: [1723, 6890] - # ptD[1].to_dense() - Size: [431. 
1723] - Dmap = torch.matmul(ptD[1].to_dense(), ptD[0].to_dense()) # 6890 -> 431 - self.register_buffer('Dmap', Dmap) - - # upsampling - ptU = [] - for level in range(len(U)): - d = scipy.sparse.coo_matrix(U[level]) - i = torch.LongTensor(np.array([d.row, d.col])) - v = torch.FloatTensor(d.data) - ptU.append(torch.sparse.FloatTensor(i, v, d.shape)) - - # upsampling mapping from 431 points to 6890 points - # ptU[0].to_dense() - Size: [6890, 1723] - # ptU[1].to_dense() - Size: [1723, 431] - Umap = torch.matmul(ptU[0].to_dense(), ptU[1].to_dense()) # 431 -> 6890 - self.register_buffer('Umap', Umap) - - def reduce_dim(self, feature): - ''' - Dimension reduction by multi-layer perceptrons - :param feature: list of [B, C_s, N] point-wise features before dimension reduction - :return: [B, C_p x N] concatantion of point-wise features after dimension reduction - ''' - y = feature - tmpy = feature - for i, f in enumerate(self.filters): - y = self._modules['conv' + str(i)](y if i == 0 else torch.cat([y, tmpy], 1)) - if i != len(self.filters) - 1: - y = F.leaky_relu(y) - if self.num_views > 1 and i == len(self.filters) // 2: - y = y.view(-1, self.num_views, y.shape[1], y.shape[2]).mean(dim=1) - tmpy = feature.view(-1, self.num_views, feature.shape[1], - feature.shape[2]).mean(dim=1) - - y = self.last_op(y) - - # y = y.view(y.shape[0], -1) - - return y - - def sampling(self, points, im_feat=None, z_feat=None, add_att=False, reduce_dim=True): - ''' - Given 2D points, sample the point-wise features for each point, - the dimension of point-wise features will be reduced from C_s to C_p by MLP. - Image features should be pre-computed before this call. - :param points: [B, N, 2] image coordinates of points - :im_feat: [B, C_s, H_s, W_s] spatial feature maps - :return: [B, C_p x N] concatantion of point-wise features after dimension reduction - ''' - # if im_feat is None: - # im_feat = self.im_feat - - batch_size = im_feat.shape[0] - point_feat = torch.nn.functional.grid_sample( - im_feat, points.unsqueeze(2), align_corners=False - )[..., 0] - - if reduce_dim: - mesh_align_feat = self.reduce_dim(point_feat) - return mesh_align_feat - else: - return point_feat - - def forward(self, p, im_feat, cam=None, add_att=False, reduce_dim=True, **kwargs): - ''' Returns mesh-aligned features for the 3D mesh points. - Args: - p (tensor): [B, N_m, 3] mesh vertices - im_feat (tensor): [B, C_s, H_s, W_s] spatial feature maps - cam (tensor): [B, 3] camera - Return: - mesh_align_feat (tensor): [B, C_p x N_m] mesh-aligned features - ''' - # if cam is None: - # cam = self.cam - p_proj_2d = projection(p, cam, retain_z=False, iwp_mode=self.iwp_cam_mode) - if self.iwp_cam_mode: - # Normalize keypoints to [-1,1] - p_proj_2d = p_proj_2d / (224. / 2.) - else: - p_proj_2d = j2d_processing(p_proj_2d, cam['kps_transf']) - mesh_align_feat = self.sampling(p_proj_2d, im_feat, add_att=add_att, reduce_dim=reduce_dim) - return mesh_align_feat diff --git a/spaces/abdvl/datahub_qa_bot/docs/rfc.md b/spaces/abdvl/datahub_qa_bot/docs/rfc.md deleted file mode 100644 index 92578b76aa643f7488bc33d568f3223f16a6d291..0000000000000000000000000000000000000000 --- a/spaces/abdvl/datahub_qa_bot/docs/rfc.md +++ /dev/null @@ -1,123 +0,0 @@ -# DataHub RFC Process - -## What is an RFC? - -The "RFC" (request for comments) process is intended to provide a consistent and controlled path for new features, -significant modifications, or any other significant proposal to enter DataHub and its related frameworks. 
- -Many changes, including bug fixes and documentation improvements can be implemented and reviewed via the normal GitHub -pull request workflow. - -Some changes though are "substantial", and we ask that these be put through a bit of a design process and produce a -consensus among the DataHub core teams. - -## The RFC life-cycle - -An RFC goes through the following stages: - -- *Discussion* (Optional): Create an issue with the "RFC" label to have a more open ended, initial discussion around -your proposal (useful if you don't have a concrete proposal yet). Consider posting to #rfc in [Slack](./slack.md) -for more visibility. -- *Pending*: when the RFC is submitted as a PR. Please add the "RFC" label to the PR. -- *Active*: when an RFC PR is merged and undergoing implementation. -- *Landed*: when an RFC's proposed changes are shipped in an actual release. -- *Rejected*: when an RFC PR is closed without being merged. - -[Pending RFC List](https://github.com/datahub-project/rfcs/pulls?q=is%3Apr+is%3Aopen) - -## When to follow this process - -You need to follow this process if you intend to make "substantial" changes to any components in the DataHub git repo, -their documentation, or any other projects under the purview of the DataHub core teams. What constitutes a "substantial" -change is evolving based on community norms, but may include the following: - -- A new feature that creates new API surface area, and would require a feature flag if introduced. -- The removal of features that already shipped as part of the release channel. -- The introduction of new idiomatic usage or conventions, even if they do not include code changes to DataHub itself. - -Some changes do not require an RFC: - -- Rephrasing, reorganizing or refactoring -- Addition or removal of warnings -- Additions that strictly improve objective, numerical quality criteria (speedup) - -If you submit a pull request to implement a new, major feature without going through the RFC process, it may be closed -with a polite request to submit an RFC first. - -## Gathering feedback before submitting - -It's often helpful to get feedback on your concept before diving into the level of API design detail required for an -RFC. You may open an issue on this repo to start a high-level discussion, with the goal of eventually formulating an RFC -pull request with the specific implementation design. We also highly recommend sharing drafts of RFCs in #rfc on the -[DataHub Slack](./slack.md) for early feedback. - -## The process - -In short, to get a major feature added to DataHub, one must first get the RFC merged into the RFC repo as a markdown -file. At that point the RFC is 'active' and may be implemented with the goal of eventual inclusion into DataHub. - -- Fork the [datahub-project/rfc repository](https://github.com/datahub-project/rfcs). -- Copy the `000-template.md` template file to `rfc/active/000-my-feature.md`, where `my-feature` is more -descriptive. Don't assign an RFC number yet. -- Fill in the RFC. Put care into the details. *RFCs that do not present convincing motivation, demonstrate understanding -of the impact of the design, or are disingenuous about the drawback or alternatives tend to be poorly-received.* -- Submit a pull request. As a pull request the RFC will receive design feedback from the larger community, and the -author should be prepared to revise it in response. -- Update the pull request to add the number of the PR to the filename and add a link to the PR in the header of the RFC. 
-- Build consensus and integrate feedback. RFCs that have broad support are much more likely to make progress than those that don't receive any comments.
-- Eventually, the DataHub team will decide whether the RFC is a candidate for inclusion.
-- RFCs that are candidates for inclusion will enter a "final comment period" lasting 7 days. The beginning of this period will be signaled with a comment and tag on the pull request. Furthermore, an announcement will be made in the \#rfc Slack channel for further visibility.
-- An RFC can be modified based upon feedback from the DataHub team and community. Significant modifications may trigger a new final comment period.
-- An RFC may be rejected by the DataHub team after public discussion has settled and comments have been made summarizing the rationale for rejection. The RFC will enter a "final comment period to close" lasting 7 days. At the end of the "FCP to close" period, the PR will be closed.
-- An RFC author may withdraw their own RFC by closing it themselves. Please state the reason for the withdrawal.
-- An RFC may be accepted at the close of its final comment period. A DataHub team member will merge the RFC's associated pull request, at which point the RFC will become 'active'.
-
-## Details on Active RFCs
-
-Once an RFC becomes active, authors may implement it and submit the feature as a pull request to the DataHub repo. Becoming 'active' is not a rubber stamp, and in particular still does not mean the feature will ultimately be merged; it does mean that the core team has agreed to it in principle and is amenable to merging it.
-
-Furthermore, the fact that a given RFC has been accepted and is 'active' implies nothing about what priority is assigned to its implementation, nor whether anybody is currently working on it.
-
-Modifications to active RFCs can be done in follow-up PRs. We strive to write each RFC in a manner that it will reflect the final design of the feature; but the nature of the process means that we cannot expect every merged RFC to actually reflect what the end result will be at the time of the next major release. We therefore try to keep each RFC document somewhat in sync with the feature as planned, tracking such changes via follow-up pull requests to the document.
-
-## Implementing an RFC
-
-The author of an RFC is not obligated to implement it. Of course, the RFC author (like any other developer) is welcome to post an implementation for review after the RFC has been accepted.
-
-An active RFC should have the link to the implementation PR(s) listed, if there are any. Feedback on the actual implementation should be given in the implementation PR instead of the original RFC PR.
-
-If you are interested in working on the implementation for an 'active' RFC, but cannot determine if someone else is already working on it, feel free to ask (e.g. by leaving a comment on the associated issue).
-
-## Implemented RFCs
-
-Once an RFC has finally been implemented, first off, congratulations! And thank you for your contribution! Second, to help track the status of the RFC, please make one final PR to move the RFC from `rfc/active` to `rfc/finished`.
-
-## Reviewing RFCs
-
-Most of the DataHub team will attempt to review some set of open RFC pull requests on a regular basis. If a DataHub team member believes an RFC PR is ready to be accepted into active status, they can approve the PR using GitHub's review feature to signal their approval of the RFC.
- - - -*DataHub's RFC process is inspired by many others, including [Vue.js](https://github.com/vuejs/rfcs) and -[Ember](https://github.com/emberjs/rfcs).* diff --git a/spaces/abhishek/sketch-to-image/annotator/uniformer/mmdet/datasets/samplers/distributed_sampler.py b/spaces/abhishek/sketch-to-image/annotator/uniformer/mmdet/datasets/samplers/distributed_sampler.py deleted file mode 100644 index cc61019484655ee2829f7908dc442caa20cf1d54..0000000000000000000000000000000000000000 --- a/spaces/abhishek/sketch-to-image/annotator/uniformer/mmdet/datasets/samplers/distributed_sampler.py +++ /dev/null @@ -1,39 +0,0 @@ -import math - -import torch -from torch.utils.data import DistributedSampler as _DistributedSampler - - -class DistributedSampler(_DistributedSampler): - - def __init__(self, - dataset, - num_replicas=None, - rank=None, - shuffle=True, - seed=0): - super().__init__( - dataset, num_replicas=num_replicas, rank=rank, shuffle=shuffle) - # for the compatibility from PyTorch 1.3+ - self.seed = seed if seed is not None else 0 - - def __iter__(self): - # deterministically shuffle based on epoch - if self.shuffle: - g = torch.Generator() - g.manual_seed(self.epoch + self.seed) - indices = torch.randperm(len(self.dataset), generator=g).tolist() - else: - indices = torch.arange(len(self.dataset)).tolist() - - # add extra samples to make it evenly divisible - # in case that indices is shorter than half of total_size - indices = (indices * - math.ceil(self.total_size / len(indices)))[:self.total_size] - assert len(indices) == self.total_size - - # subsample - indices = indices[self.rank:self.total_size:self.num_replicas] - assert len(indices) == self.num_samples - - return iter(indices) diff --git a/spaces/ai-guru/composer/app.py b/spaces/ai-guru/composer/app.py deleted file mode 100644 index b9f2fdab0bf04b9e15860afcd531fdbef94494c0..0000000000000000000000000000000000000000 --- a/spaces/ai-guru/composer/app.py +++ /dev/null @@ -1,276 +0,0 @@ -# Copyright 2022 Tristan Behrens. -# -# Licensed under the Apache License, Version 2.0 (the "License"); -# you may not use this file except in compliance with the License. -# You may obtain a copy of the License at -# -# http://www.apache.org/licenses/LICENSE-2.0 -# -# Unless required by applicable law or agreed to in writing, software -# distributed under the License is distributed on an "AS IS" BASIS, -# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. -# See the License for the specific language governing permissions and -# limitations under the License. - -# Lint as: python3 - -from fastapi import BackgroundTasks, FastAPI -from fastapi.staticfiles import StaticFiles -from fastapi.responses import FileResponse -from pydantic import BaseModel -from PIL import Image -import os -import io -import random -import base64 -from time import time -from statistics import mean -from collections import OrderedDict -import torch -import wave -from source.logging import create_logger -from source.tokensequence import token_sequence_to_audio, token_sequence_to_image -from source import constants -from transformers import AutoTokenizer, AutoModelForCausalLM - -logger = create_logger(__name__) - -# Load the auth-token from authtoken.txt. -auth_token = os.getenv("authtoken") - -# Loading the model and its tokenizer. 
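-# Both are fetched from the Hugging Face Hub by from_pretrained() and cached locally after the first download.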
-logger.info("Loading tokenizer and model...") -tokenizer = AutoTokenizer.from_pretrained( - "ai-guru/lakhclean_mmmtrack_4bars_d-2048" -) -model = AutoModelForCausalLM.from_pretrained( - "ai-guru/lakhclean_mmmtrack_4bars_d-2048" -) -logger.info("Done.") - - -# Create the app -logger.info("Creating app...") -app = FastAPI(docs_url=None, redoc_url=None) -app.mount("/static", StaticFiles(directory="static"), name="static") -logger.info("Done.") - - -class Options(BaseModel): - music_style: str - density: str - temperature: str - - -class NewTask(BaseModel): - music_style = "synth" - density = "medium" - temperature = "medium" - - -def get_place_in_queue(task_id): - queued_tasks = list( - task - for task in tasks.values() - if task["status"] == "queued" or task["status"] == "processing" - ) - - queued_tasks.sort(key=lambda task: task["created_at"]) - - queued_task_ids = list(task["task_id"] for task in queued_tasks) - - try: - return queued_task_ids.index(task_id) + 1 - except: - return 0 - - -def calculate_eta(task_id): - total_durations = list( - task["completed_at"] - task["started_at"] - for task in tasks.values() - if "completed_at" in task and task["status"] == "completed" - ) - - initial_place_in_queue = tasks[task_id]["initial_place_in_queue"] - - if len(total_durations): - eta = initial_place_in_queue * mean(total_durations) - else: - eta = initial_place_in_queue * 35 - - return round(eta, 1) - - -def next_task(task_id): - tasks[task_id]["completed_at"] = time() - - queued_tasks = list(task for task in tasks.values() if task["status"] == "queued") - - if queued_tasks: - print( - f"{task_id} {tasks[task_id]['status']}. Task/s remaining: {len(queued_tasks)}" - ) - process_task(queued_tasks[0]["task_id"]) - - -def process_task(task_id): - if "processing" in list(task["status"] for task in tasks.values()): - return - - if tasks[task_id]["last_poll"] and time() - tasks[task_id]["last_poll"] > 30: - tasks[task_id]["status"] = "abandoned" - next_task(task_id) - - tasks[task_id]["status"] = "processing" - tasks[task_id]["started_at"] = time() - print(f"Processing {task_id}") - - try: - tasks[task_id]["output"] = compose( - tasks[task_id]["music_style"], - tasks[task_id]["density"], - tasks[task_id]["temperature"], - ) - except Exception as ex: - tasks[task_id]["status"] = "failed" - tasks[task_id]["error"] = repr(ex) - else: - tasks[task_id]["status"] = "completed" - finally: - next_task(task_id) - - -def compose(music_style, density, temperature): - instruments = constants.get_instruments(music_style) - density = constants.get_density(density) - temperature = constants.get_temperature(temperature) - print(f"instruments: {instruments} density: {density} temperature: {temperature}") - - # Generate with the given parameters. - logger.info(f"Generating token sequence...") - generated_sequence = generate_sequence(instruments, density, temperature) - logger.info(f"Generated token sequence: {generated_sequence}") - - # Get the audio data as a array of int16. - logger.info("Generating audio...") - sample_rate, audio_data = token_sequence_to_audio(generated_sequence) - logger.info(f"Done. Audio data: {len(audio_data)}") - - # Encode the audio-data as wave file in memory. Use the wave module. - audio_data_bytes = io.BytesIO() - wave_file = wave.open(audio_data_bytes, "wb") - wave_file.setframerate(sample_rate) - wave_file.setnchannels(1) - wave_file.setsampwidth(2) - wave_file.writeframes(audio_data) - wave_file.close() - - # Return the audio-data as a base64-encoded string. 
-    audio_data_bytes.seek(0)
-    audio_data_base64 = base64.b64encode(audio_data_bytes.read()).decode("utf-8")
-    audio_data_bytes.close()
-
-    # Convert the audio data to a PIL image.
-    image = token_sequence_to_image(generated_sequence)
-
-    # Save the PIL image to the hard drive as PNG.
-    logger.debug(f"Saving image to hard drive... {type(image)}")
-    image_file_name = "compose.png"
-    image.save(image_file_name, "PNG")
-
-    # Save image to virtual file.
-    img_io = io.BytesIO()
-    image.save(img_io, "PNG", quality=70)
-    img_io.seek(0)
-
-    # Return the image as a base64-encoded string.
-    image_data_base64 = base64.b64encode(img_io.read()).decode("utf-8")
-    img_io.close()
-
-    # Return.
-    return {
-        "tokens": generated_sequence,
-        "audio": "data:audio/wav;base64," + audio_data_base64,
-        "image": "data:image/png;base64," + image_data_base64,
-        "status": "OK",
-    }
-
-
-def generate_sequence(instruments, density, temperature):
-    instruments = instruments[::]
-    random.shuffle(instruments)
-
-    generated_ids = tokenizer.encode("PIECE_START", return_tensors="pt")[0]
-
-    for instrument in instruments:
-        more_ids = tokenizer.encode(
-            f"TRACK_START INST={instrument} DENSITY={density}", return_tensors="pt"
-        )[0]
-        generated_ids = torch.cat((generated_ids, more_ids))
-        generated_ids = generated_ids.unsqueeze(0)
-
-        generated_ids = model.generate(
-            generated_ids,
-            max_length=2048,
-            do_sample=True,
-            temperature=temperature,
-            eos_token_id=tokenizer.encode("TRACK_END")[0],
-        )[0]
-
-    generated_sequence = tokenizer.decode(generated_ids)
-    print("GENERATING COMPLETE")
-    print(generated_sequence)
-    return generated_sequence
-
-
-tasks = OrderedDict()
-
-# Route for the loading page.
-@app.head("/")
-@app.route("/")
-def index(request):
-    return FileResponse(path="static/index.html", media_type="text/html")
-
-
-@app.post("/task/create")
-def create_task(background_tasks: BackgroundTasks, new_task: NewTask):
-    created_at = time()
-
-    task_id = f"{str(created_at)}_{new_task.music_style}"
-
-    tasks[task_id] = OrderedDict(
-        {
-            "task_id": task_id,
-            "status": "queued",
-            "eta": None,
-            "created_at": created_at,
-            "started_at": None,
-            "completed_at": None,
-            "last_poll": None,
-            "poll_count": 0,
-            "initial_place_in_queue": None,
-            "place_in_queue": None,
-            "music_style": new_task.music_style,
-            "density": new_task.density,
-            "temperature": new_task.temperature,
-            "output": None,
-        }
-    )
-
-    tasks[task_id]["initial_place_in_queue"] = get_place_in_queue(task_id)
-    tasks[task_id]["eta"] = calculate_eta(task_id)
-
-    background_tasks.add_task(process_task, task_id)
-
-    return tasks[task_id]
-
-
-@app.get("/task/poll")
-def poll_task(task_id: str):
-    tasks[task_id]["place_in_queue"] = get_place_in_queue(task_id)
-    tasks[task_id]["eta"] = calculate_eta(task_id)
-    tasks[task_id]["last_poll"] = time()
-    tasks[task_id]["poll_count"] += 1
-
-    return tasks[task_id]
diff --git a/spaces/akhaliq/MT3/README.md b/spaces/akhaliq/MT3/README.md
deleted file mode 100644
index 19c9c8b4542f945e1cc5e9d4c7768e60616c78c7..0000000000000000000000000000000000000000
--- a/spaces/akhaliq/MT3/README.md
+++ /dev/null
@@ -1,37 +0,0 @@
----
-title: MT3
-emoji: 🦀
-colorFrom: red
-colorTo: green
-sdk: gradio
-app_file: app.py
-pinned: false
----
-
-# Configuration
-
-`title`: _string_
-Display title for the Space
-
-`emoji`: _string_
-Space emoji (emoji-only character allowed)
-
-`colorFrom`: _string_
-Color for Thumbnail gradient (red, yellow, green, blue, indigo, purple, pink, gray)
-
-`colorTo`: _string_
-Color for Thumbnail gradient (red, yellow, green,
blue, indigo, purple, pink, gray) - -`sdk`: _string_ -Can be either `gradio` or `streamlit` - -`sdk_version` : _string_ -Only applicable for `streamlit` SDK. -See [doc](https://hf.co/docs/hub/spaces) for more info on supported versions. - -`app_file`: _string_ -Path to your main application file (which contains either `gradio` or `streamlit` Python code). -Path is relative to the root of the repository. - -`pinned`: _boolean_ -Whether the Space stays on top of your list. diff --git a/spaces/akshatsanghvi/Rice-Disease-Classifier/app.py b/spaces/akshatsanghvi/Rice-Disease-Classifier/app.py deleted file mode 100644 index 40210dfd782ee5a697501a409445380485e398db..0000000000000000000000000000000000000000 --- a/spaces/akshatsanghvi/Rice-Disease-Classifier/app.py +++ /dev/null @@ -1,67 +0,0 @@ -import streamlit as st -from tensorflow import image -from keras import models -import numpy as np -from PIL import Image -import pandas as pd - -st.title("Rice Disease Classifier 🌾") - -desc = pd.read_csv("files/description.csv") -model = models.load_model("models/0.3/model.h5") - -dis = list(desc.disease.values) - -def image_classifier(inp): - try: - inp = image.resize(inp, (256,256)) - inp = np.expand_dims(inp,0) - pred= model.predict(inp) - return dis[np.argmax(pred)] , f"Confidence - {round(max(pred[0])*100,2)}%" - except: - return "Healthy", "Confidence - 0%" - -def detail(pro): - x = desc[desc["disease"]==pro] - return list(x["hindi"])[0], list(x["desc"])[0], list(x["hndesc"])[0], list(x["pre"])[0], list(x["hnpre"])[0] - - -cho = st.file_uploader("Upload Image From Gallery", type=['png','jpg','jpeg','webp']) -img = "" - -if cho is not None: - img = Image.open(cho) - -st.write("or") -if st.button("Open Camera"): - cam = st.camera_input("Take image") - if cam is not None: - img = Image.open(cam) - - -if st.button("Detect"): - col1,col2,col3 = st.columns(3) - pro, conf = image_classifier(img) - hin, des, hnd, pre, hnp = detail(pro) - try: - with col2: - st.image(img) - st.write("\n\n") - st.header(pro) - st.subheader(f"({hin})") - st.subheader(conf) - st.write("\n\n\n\n") - - st.subheader(f"Description :") - st.write(des) - st.write("\n\n") - st.write(hnd) - st.write("\n\n\n") - - st.subheader(f"Precautions :") - st.write(pre) - st.write("\n\n") - st.write(hnp) - except: - with col2: - st.subheader(":red[Enter Valid Input]") diff --git a/spaces/alamin655/websurfx/public/templates/bar.html b/spaces/alamin655/websurfx/public/templates/bar.html deleted file mode 100644 index 489b0756609e5d5bfc2ef0a8904eb19e740de996..0000000000000000000000000000000000000000 --- a/spaces/alamin655/websurfx/public/templates/bar.html +++ /dev/null @@ -1,3 +0,0 @@ - - - - - - -
0th instance:

Target Saliency Heatmap
x: Generated tokens, y: Attributed tokens

             ▁Er      ▁ist     ▁Hochzeit  splan    er       .        </s>
▁Er                   0.673    0.017      0.01     0.047    0.026    -0.25
▁ist                           -0.101     0.021    0.05     0.034    0.134
▁Hochzeit                                 0.231    0.123    -0.007   0.47
splan                                              0.614    -0.011   -0.182
er                                                          0.033    -0.321
.                                                                    0.056
</s>
      - diff --git a/spaces/p-baleine/metaanalyser/examples/Pitman-Yor Language Model.md b/spaces/p-baleine/metaanalyser/examples/Pitman-Yor Language Model.md deleted file mode 100644 index 35c3708774fab8d681a520222648f02fba93072b..0000000000000000000000000000000000000000 --- a/spaces/p-baleine/metaanalyser/examples/Pitman-Yor Language Model.md +++ /dev/null @@ -1,66 +0,0 @@ -# A Systematic Review of Pitman-Yor Language Model - -This systematic review provides an overview of the Pitman-Yor Language Model, a probabilistic model for natural language processing. We discuss the historical background of the model, including the introduction of the Pitman Yor Diffusion Tree (PYDT) for hierarchical clustering. We also explore potential future developments, such as its applications in nonparametric clustering of data and generative transition-based dependency parsing. - -## Table of contents - -1. Introduction: This section provides an overview of the Pitman-Yor Language Model, a probabilistic model for natural language processing. -2. Historical Background: This section discusses the historical background of the Pitman-Yor Language Model, including the introduction of the Pitman Yor Diffusion Tree (PYDT) for hierarchical clustering. - 1. Pitman Yor Diffusion Tree: This subsection discusses the introduction of the Pitman Yor Diffusion Tree (PYDT) for hierarchical clustering. -3. Future Development: This section explores potential future developments of the Pitman-Yor Language Model, such as its applications in nonparametric clustering of data and generative transition-based dependency parsing. - 1. Nonparametric Clustering of Data: This subsection discusses the potential application of the Pitman-Yor Language Model in nonparametric clustering of data. - 2. Generative Transition-Based Dependency Parsing: This subsection discusses the potential application of the Pitman-Yor Language Model in generative transition-based dependency parsing. -4. Conclusion: This systematic review provides an overview of the Pitman-Yor Language Model, its historical background, and potential future developments. - -## Introduction - -This section provides an overview of the Pitman-Yor Language Model, a probabilistic model for natural language processing. According to [^1], the Pitman Yor Diffusion Tree (PYDT) is a generalization of the Dirichlet Diffusion Tree, which removes the restriction to binary branching structure. The generative process is described and shown to result in an exchangeable distribution over data points. The model has been proven to have some theoretical properties, and two inference methods have been presented: a collapsed MCMC sampler which allows us to model uncertainty over tree structures, and a computationally efficient greedy Bayesian EM search algorithm. Both algorithms use message passing on the tree structure. The utility of the model and algorithms is demonstrated on synthetic and real-world data, both continuous and binary. - -## Historical Background - -The Pitman-Yor Language Model is a probabilistic model for natural language processing. The historical background of the model includes the introduction of the Pitman Yor Diffusion Tree (PYDT) for hierarchical clustering [^1]. The PYDT is a generalization of the Dirichlet Diffusion Tree, which removes the restriction to binary branching structure. The generative process of the PYDT is described and shown to result in an exchangeable distribution over data points. 
Several theoretical properties of the model have been proven, and two inference methods have been presented: a collapsed MCMC sampler, which allows modeling uncertainty over tree structures, and a computationally efficient greedy Bayesian EM search algorithm. Both algorithms use message passing on the tree structure. The utility of the model and algorithms has been demonstrated on synthetic and real-world data, both continuous and binary. The PYDT has been used to learn hierarchical structure over latent variables in models including Hidden Markov Models and Latent Dirichlet Allocation [^1].
-
-### Pitman Yor Diffusion Tree
-
-The Pitman Yor Diffusion Tree (PYDT) is a generalization of the Dirichlet Diffusion Tree (DDT) for hierarchical clustering, which removes the restriction to binary branching structure [^1][^4]. The generative process of the PYDT results in an exchangeable distribution over data points, and some theoretical properties of the model have been proven [^1]. Two inference methods have been presented: a collapsed MCMC sampler that models uncertainty over tree structures, and a computationally efficient greedy Bayesian EM search algorithm that uses message passing on the tree structure [^1]. The PYDT can find simpler, more interpretable representations of data than the DDT, and it defines an infinitely exchangeable distribution over data points [^1][^7]. The code for the PYDT is publicly available to encourage its use by the community [^1].
-
-## Future Development
-
-The Pitman-Yor Language Model has potential future developments in nonparametric clustering of data and generative transition-based dependency parsing. The kernel Pitman-Yor process (KPYP) has been proposed for nonparametric clustering of data with general spatial or temporal interdependencies. The KPYP is constructed by introducing an infinite sequence of random locations and defining a predictor-dependent random probability measure based on the stick-breaking construction of the Pitman-Yor process (a brief sketch of this construction is given below). The discount hyperparameters of the Beta-distributed random weights of the process are controlled by a kernel function expressing the proximity between the location assigned to each weight and the given predictors [^5][^6].
-
-Moreover, a generative model for transition-based dependency parsing has been proposed, parameterized by Hierarchical Pitman-Yor Processes (HPYPs). The model learns a distribution over derivations of parser transitions, words, and POS tags. To enable efficient inference, a novel algorithm for linear-time decoding in a generative transition-based parser has been proposed based on particle filtering, a method for sequential Monte Carlo sampling. This method enables the beam size during decoding to depend on the uncertainty of the model. The model has high accuracy, obtains better perplexity than an n-gram model by performing semi-supervised learning over a large unlabelled corpus, and can generate locally and syntactically coherent sentences, opening the door to further applications in language generation [^8][^9].
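-
-Both of these models build on the stick-breaking construction of the Pitman-Yor process. The following is a minimal illustrative sketch of that construction (our own, not code from the cited papers; the function name and parameters are ours), assuming only NumPy:
-
-```python
-import numpy as np
-
-def pitman_yor_weights(d, theta, n_atoms, seed=None):
-    """Truncated stick-breaking weights of a Pitman-Yor process PY(d, theta).
-
-    V_k ~ Beta(1 - d, theta + k * d),  pi_k = V_k * prod_{j<k} (1 - V_j),
-    with discount 0 <= d < 1 and concentration theta > -d. Setting d = 0
-    recovers the Dirichlet-process stick-breaking construction.
-    """
-    rng = np.random.default_rng(seed)
-    k = np.arange(1, n_atoms + 1)
-    v = rng.beta(1.0 - d, theta + k * d)                            # stick proportions
-    remaining = np.concatenate(([1.0], np.cumprod(1.0 - v[:-1])))   # unbroken stick lengths
-    return v * remaining                                            # mixture weights
-
-# Example: a larger discount d pushes more mass into the tail (heavier tails, more clusters).
-print(pitman_yor_weights(d=0.5, theta=1.0, n_atoms=10, seed=0))
-```
-
-A kernel variant along the lines of the KPYP would additionally let a kernel between each stick's latent location and the observed predictors control the discount hyperparameters of these Beta draws; that dependence is omitted here for brevity.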
-
-### Nonparametric Clustering of Data
-
-The Pitman-Yor Language Model has potential applications in nonparametric clustering of data. In particular, the kernel Pitman-Yor process (KPYP) has been proposed for nonparametric clustering of data with general spatial or temporal interdependencies, using the construction described above [^5][^6]. The performance of the KPYP prior has been studied in unsupervised image segmentation and text-dependent speaker identification, and compared to the kernel stick-breaking process and the Dirichlet process prior [^5][^6].
-
-Overall, the Pitman-Yor Language Model has the potential to be a useful tool for nonparametric clustering of data, particularly when dealing with spatial or temporal interdependencies.
-
-### Generative Transition-Based Dependency Parsing
-
-The Pitman-Yor Language Model has potential applications in generative transition-based dependency parsing. A simple, scalable, fully generative model for transition-based dependency parsing with high accuracy has been proposed, which is parameterized by Hierarchical Pitman-Yor Processes [^8]. The model learns a distribution over derivations of parser transitions, words, and POS tags. To enable efficient inference, a novel algorithm for linear-time decoding in a generative transition-based parser has been proposed, which is based on particle filtering [^8]. The algorithm enables the beam size during decoding to depend on the uncertainty of the model. The model is able to generate locally and syntactically coherent sentences, opening the door to further applications in language generation [^8].
-
-## Conclusion
-
-This systematic review provides an overview of the Pitman-Yor Language Model, its historical background, and potential future developments. The Pitman Yor Diffusion Tree (PYDT) was introduced as a generalization of the Dirichlet Diffusion Tree for hierarchical clustering [^1]. The model has shown promising results in nonparametric clustering of data [^5][^6] and generative transition-based dependency parsing [^8]. The Pitman-Yor process has also been characterized for its heavy-tailed mixture models [^3] and estimation of its type parameter by empirical and full Bayes methods [^4]. While the model has been evaluated with perplexity, other approaches have been proposed to evaluate the success or failure of the model [^2]. Overall, the Pitman-Yor Language Model has shown potential in various applications and can be further developed to improve its performance in natural language processing tasks.
-
-## References
-[^1]: [Knowles, David A., and Zoubin Ghahramani. "Pitman-Yor diffusion trees." arXiv preprint arXiv:1106.2494 (2011).](https://arxiv.org/abs/1106.2494)
-
-[^2]: [Takahashi, Shuntaro, and Kumiko Tanaka-Ishii. "Assessing language models with scaling properties." arXiv preprint arXiv:1804.08881 (2018).](https://arxiv.org/abs/1804.08881)
-
-[^3]: [Ramirez, Vianey Palacios, Miguel de Carvalho, and Luis Gutierrez Inostroza.
"Heavy-Tailed Pitman--Yor Mixture Models." arXiv preprint arXiv:2211.00867 (2022).](https://arxiv.org/abs/2211.00867) - -[^4]: [Franssen, S. E. M. P., and A. W. van der Vaart. "Empirical and Full Bayes estimation of the type of a Pitman-Yor process." arXiv preprint arXiv:2208.14255 (2022).](https://arxiv.org/abs/2208.14255) - -[^5]: [Chatzis, Sotirios P., Dimitrios Korkinof, and Yiannis Demiris. "The Kernel Pitman-Yor Process." arXiv preprint arXiv:1210.4184 (2012).](https://arxiv.org/abs/1210.4184) - -[^6]: [Chatzis, Sotirios P., Dimitrios Korkinof, and Yiannis Demiris. "The Kernel Pitman-Yor Process." arXiv preprint arXiv:1210.4184 (2012).](https://arxiv.org/abs/1210.4184) - -[^7]: [Okita, Tsuyoshi. "Joint space neural probabilistic language model for statistical machine translation." arXiv preprint arXiv:1301.3614 (2013).](https://arxiv.org/abs/1301.3614) - -[^8]: [Buys, Jan, and Phil Blunsom. "A Bayesian model for generative transition-based dependency parsing." arXiv preprint arXiv:1506.04334 (2015).](https://arxiv.org/abs/1506.04334) \ No newline at end of file diff --git a/spaces/phyloforfun/VoucherVision/vouchervision/component_detector/utils/autobatch.py b/spaces/phyloforfun/VoucherVision/vouchervision/component_detector/utils/autobatch.py deleted file mode 100644 index e53b4787b87df5a46b1df0eb28d8d97bc1f811fd..0000000000000000000000000000000000000000 --- a/spaces/phyloforfun/VoucherVision/vouchervision/component_detector/utils/autobatch.py +++ /dev/null @@ -1,58 +0,0 @@ -# YOLOv5 🚀 by Ultralytics, GPL-3.0 license -""" -Auto-batch utils -""" - -from copy import deepcopy - -import numpy as np -import torch -from torch.cuda import amp - -from utils.general import LOGGER, colorstr -from utils.torch_utils import profile - - -def check_train_batch_size(model, imgsz=640): - # Check YOLOv5 training batch size - with amp.autocast(): - return autobatch(deepcopy(model).train(), imgsz) # compute optimal batch size - - -def autobatch(model, imgsz=640, fraction=0.9, batch_size=16): - # Automatically estimate best batch size to use `fraction` of available CUDA memory - # Usage: - # import torch - # from utils.autobatch import autobatch - # model = torch.hub.load('ultralytics/yolov5', 'yolov5s', autoshape=False) - # print(autobatch(model)) - - prefix = colorstr('AutoBatch: ') - LOGGER.info(f'{prefix}Computing optimal batch size for --imgsz {imgsz}') - device = next(model.parameters()).device # get model device - if device.type == 'cpu': - LOGGER.info(f'{prefix}CUDA not detected, using default CPU batch-size {batch_size}') - return batch_size - - gb = 1 << 30 # bytes to GiB (1024 ** 3) - d = str(device).upper() # 'CUDA:0' - properties = torch.cuda.get_device_properties(device) # device properties - t = properties.total_memory / gb # (GiB) - r = torch.cuda.memory_reserved(device) / gb # (GiB) - a = torch.cuda.memory_allocated(device) / gb # (GiB) - f = t - (r + a) # free inside reserved - LOGGER.info(f'{prefix}{d} ({properties.name}) {t:.2f}G total, {r:.2f}G reserved, {a:.2f}G allocated, {f:.2f}G free') - - batch_sizes = [1, 2, 4, 8, 16] - try: - img = [torch.zeros(b, 3, imgsz, imgsz) for b in batch_sizes] - y = profile(img, model, n=3, device=device) - except Exception as e: - LOGGER.warning(f'{prefix}{e}') - - y = [x[2] for x in y if x] # memory [2] - batch_sizes = batch_sizes[:len(y)] - p = np.polyfit(batch_sizes, y, deg=1) # first degree polynomial fit - b = int((f * fraction - p[1]) / p[0]) # y intercept (optimal batch size) - LOGGER.info(f'{prefix}Using batch-size {b} for {d} {t * 
fraction:.2f}G/{t:.2f}G ({fraction * 100:.0f}%)') - return b diff --git a/spaces/phyloforfun/VoucherVision/vouchervision/component_detector/utils/torch_utils_torchscript.py b/spaces/phyloforfun/VoucherVision/vouchervision/component_detector/utils/torch_utils_torchscript.py deleted file mode 100644 index ea9e9fbf5740f7da83f6db90fce6d4db4b3f743f..0000000000000000000000000000000000000000 --- a/spaces/phyloforfun/VoucherVision/vouchervision/component_detector/utils/torch_utils_torchscript.py +++ /dev/null @@ -1,432 +0,0 @@ -# YOLOv5 🚀 by Ultralytics, AGPL-3.0 license -""" -PyTorch utils -""" - -import math -import os -import platform -import subprocess -import time -import warnings -from contextlib import contextmanager -from copy import deepcopy -from pathlib import Path - -import torch -import torch.distributed as dist -import torch.nn as nn -import torch.nn.functional as F -from torch.nn.parallel import DistributedDataParallel as DDP - -from utils.general_torchscript import LOGGER, check_version, colorstr, file_date, git_describe - -LOCAL_RANK = int(os.getenv('LOCAL_RANK', -1)) # https://pytorch.org/docs/stable/elastic/run.html -RANK = int(os.getenv('RANK', -1)) -WORLD_SIZE = int(os.getenv('WORLD_SIZE', 1)) - -try: - import thop # for FLOPs computation -except ImportError: - thop = None - -# Suppress PyTorch warnings -warnings.filterwarnings('ignore', message='User provided device_type of \'cuda\', but CUDA is not available. Disabling') -warnings.filterwarnings('ignore', category=UserWarning) - - -def smart_inference_mode(torch_1_9=check_version(torch.__version__, '1.9.0')): - # Applies torch.inference_mode() decorator if torch>=1.9.0 else torch.no_grad() decorator - def decorate(fn): - return (torch.inference_mode if torch_1_9 else torch.no_grad)()(fn) - - return decorate - - -def smartCrossEntropyLoss(label_smoothing=0.0): - # Returns nn.CrossEntropyLoss with label smoothing enabled for torch>=1.10.0 - if check_version(torch.__version__, '1.10.0'): - return nn.CrossEntropyLoss(label_smoothing=label_smoothing) - if label_smoothing > 0: - LOGGER.warning(f'WARNING ⚠️ label smoothing {label_smoothing} requires torch>=1.10.0') - return nn.CrossEntropyLoss() - - -def smart_DDP(model): - # Model DDP creation with checks - assert not check_version(torch.__version__, '1.12.0', pinned=True), \ - 'torch==1.12.0 torchvision==0.13.0 DDP training is not supported due to a known issue. ' \ - 'Please upgrade or downgrade torch to use DDP. 
See https://github.com/ultralytics/yolov5/issues/8395' - if check_version(torch.__version__, '1.11.0'): - return DDP(model, device_ids=[LOCAL_RANK], output_device=LOCAL_RANK, static_graph=True) - else: - return DDP(model, device_ids=[LOCAL_RANK], output_device=LOCAL_RANK) - - -def reshape_classifier_output(model, n=1000): - # Update a TorchVision classification model to class count 'n' if required - from models.common import Classify - name, m = list((model.model if hasattr(model, 'model') else model).named_children())[-1] # last module - if isinstance(m, Classify): # YOLOv5 Classify() head - if m.linear.out_features != n: - m.linear = nn.Linear(m.linear.in_features, n) - elif isinstance(m, nn.Linear): # ResNet, EfficientNet - if m.out_features != n: - setattr(model, name, nn.Linear(m.in_features, n)) - elif isinstance(m, nn.Sequential): - types = [type(x) for x in m] - if nn.Linear in types: - i = types.index(nn.Linear) # nn.Linear index - if m[i].out_features != n: - m[i] = nn.Linear(m[i].in_features, n) - elif nn.Conv2d in types: - i = types.index(nn.Conv2d) # nn.Conv2d index - if m[i].out_channels != n: - m[i] = nn.Conv2d(m[i].in_channels, n, m[i].kernel_size, m[i].stride, bias=m[i].bias is not None) - - -@contextmanager -def torch_distributed_zero_first(local_rank: int): - # Decorator to make all processes in distributed training wait for each local_master to do something - if local_rank not in [-1, 0]: - dist.barrier(device_ids=[local_rank]) - yield - if local_rank == 0: - dist.barrier(device_ids=[0]) - - -def device_count(): - # Returns number of CUDA devices available. Safe version of torch.cuda.device_count(). Supports Linux and Windows - assert platform.system() in ('Linux', 'Windows'), 'device_count() only supported on Linux or Windows' - try: - cmd = 'nvidia-smi -L | wc -l' if platform.system() == 'Linux' else 'nvidia-smi -L | find /c /v ""' # Windows - return int(subprocess.run(cmd, shell=True, capture_output=True, check=True).stdout.decode().split()[-1]) - except Exception: - return 0 - - -def select_device(device='', batch_size=0, newline=True): - # device = None or 'cpu' or 0 or '0' or '0,1,2,3' - s = f'YOLOv5 🚀 {git_describe() or file_date()} Python-{platform.python_version()} torch-{torch.__version__} ' - device = str(device).strip().lower().replace('cuda:', '').replace('none', '') # to string, 'cuda:0' to '0' - cpu = device == 'cpu' - mps = device == 'mps' # Apple Metal Performance Shaders (MPS) - if cpu or mps: - os.environ['CUDA_VISIBLE_DEVICES'] = '-1' # force torch.cuda.is_available() = False - elif device: # non-cpu device requested - os.environ['CUDA_VISIBLE_DEVICES'] = device # set environment variable - must be before assert is_available() - assert torch.cuda.is_available() and torch.cuda.device_count() >= len(device.replace(',', '')), \ - f"Invalid CUDA '--device {device}' requested, use '--device cpu' or pass valid CUDA device(s)" - - if not cpu and not mps and torch.cuda.is_available(): # prefer GPU if available - devices = device.split(',') if device else '0' # range(torch.cuda.device_count()) # i.e. 
0,1,6,7 - n = len(devices) # device count - if n > 1 and batch_size > 0: # check batch_size is divisible by device_count - assert batch_size % n == 0, f'batch-size {batch_size} not multiple of GPU count {n}' - space = ' ' * (len(s) + 1) - for i, d in enumerate(devices): - p = torch.cuda.get_device_properties(i) - s += f"{'' if i == 0 else space}CUDA:{d} ({p.name}, {p.total_memory / (1 << 20):.0f}MiB)\n" # bytes to MB - arg = 'cuda:0' - elif mps and getattr(torch, 'has_mps', False) and torch.backends.mps.is_available(): # prefer MPS if available - s += 'MPS\n' - arg = 'mps' - else: # revert to CPU - s += 'CPU\n' - arg = 'cpu' - - if not newline: - s = s.rstrip() - LOGGER.info(s) - return torch.device(arg) - - -def time_sync(): - # PyTorch-accurate time - if torch.cuda.is_available(): - torch.cuda.synchronize() - return time.time() - - -def profile(input, ops, n=10, device=None): - """ YOLOv5 speed/memory/FLOPs profiler - Usage: - input = torch.randn(16, 3, 640, 640) - m1 = lambda x: x * torch.sigmoid(x) - m2 = nn.SiLU() - profile(input, [m1, m2], n=100) # profile over 100 iterations - """ - results = [] - if not isinstance(device, torch.device): - device = select_device(device) - print(f"{'Params':>12s}{'GFLOPs':>12s}{'GPU_mem (GB)':>14s}{'forward (ms)':>14s}{'backward (ms)':>14s}" - f"{'input':>24s}{'output':>24s}") - - for x in input if isinstance(input, list) else [input]: - x = x.to(device) - x.requires_grad = True - for m in ops if isinstance(ops, list) else [ops]: - m = m.to(device) if hasattr(m, 'to') else m # device - m = m.half() if hasattr(m, 'half') and isinstance(x, torch.Tensor) and x.dtype is torch.float16 else m - tf, tb, t = 0, 0, [0, 0, 0] # dt forward, backward - try: - flops = thop.profile(m, inputs=(x, ), verbose=False)[0] / 1E9 * 2 # GFLOPs - except Exception: - flops = 0 - - try: - for _ in range(n): - t[0] = time_sync() - y = m(x) - t[1] = time_sync() - try: - _ = (sum(yi.sum() for yi in y) if isinstance(y, list) else y).sum().backward() - t[2] = time_sync() - except Exception: # no backward method - # print(e) # for debug - t[2] = float('nan') - tf += (t[1] - t[0]) * 1000 / n # ms per op forward - tb += (t[2] - t[1]) * 1000 / n # ms per op backward - mem = torch.cuda.memory_reserved() / 1E9 if torch.cuda.is_available() else 0 # (GB) - s_in, s_out = (tuple(x.shape) if isinstance(x, torch.Tensor) else 'list' for x in (x, y)) # shapes - p = sum(x.numel() for x in m.parameters()) if isinstance(m, nn.Module) else 0 # parameters - print(f'{p:12}{flops:12.4g}{mem:>14.3f}{tf:14.4g}{tb:14.4g}{str(s_in):>24s}{str(s_out):>24s}') - results.append([p, flops, mem, tf, tb, s_in, s_out]) - except Exception as e: - print(e) - results.append(None) - torch.cuda.empty_cache() - return results - - -def is_parallel(model): - # Returns True if model is of type DP or DDP - return type(model) in (nn.parallel.DataParallel, nn.parallel.DistributedDataParallel) - - -def de_parallel(model): - # De-parallelize a model: returns single-GPU model if model is of type DP or DDP - return model.module if is_parallel(model) else model - - -def initialize_weights(model): - for m in model.modules(): - t = type(m) - if t is nn.Conv2d: - pass # nn.init.kaiming_normal_(m.weight, mode='fan_out', nonlinearity='relu') - elif t is nn.BatchNorm2d: - m.eps = 1e-3 - m.momentum = 0.03 - elif t in [nn.Hardswish, nn.LeakyReLU, nn.ReLU, nn.ReLU6, nn.SiLU]: - m.inplace = True - - -def find_modules(model, mclass=nn.Conv2d): - # Finds layer indices matching module class 'mclass' - return [i for i, m in 
enumerate(model.module_list) if isinstance(m, mclass)] - - -def sparsity(model): - # Return global model sparsity - a, b = 0, 0 - for p in model.parameters(): - a += p.numel() - b += (p == 0).sum() - return b / a - - -def prune(model, amount=0.3): - # Prune model to requested global sparsity - import torch.nn.utils.prune as prune - for name, m in model.named_modules(): - if isinstance(m, nn.Conv2d): - prune.l1_unstructured(m, name='weight', amount=amount) # prune - prune.remove(m, 'weight') # make permanent - LOGGER.info(f'Model pruned to {sparsity(model):.3g} global sparsity') - - -def fuse_conv_and_bn(conv, bn): - # Fuse Conv2d() and BatchNorm2d() layers https://tehnokv.com/posts/fusing-batchnorm-and-conv/ - fusedconv = nn.Conv2d(conv.in_channels, - conv.out_channels, - kernel_size=conv.kernel_size, - stride=conv.stride, - padding=conv.padding, - dilation=conv.dilation, - groups=conv.groups, - bias=True).requires_grad_(False).to(conv.weight.device) - - # Prepare filters - w_conv = conv.weight.clone().view(conv.out_channels, -1) - w_bn = torch.diag(bn.weight.div(torch.sqrt(bn.eps + bn.running_var))) - fusedconv.weight.copy_(torch.mm(w_bn, w_conv).view(fusedconv.weight.shape)) - - # Prepare spatial bias - b_conv = torch.zeros(conv.weight.size(0), device=conv.weight.device) if conv.bias is None else conv.bias - b_bn = bn.bias - bn.weight.mul(bn.running_mean).div(torch.sqrt(bn.running_var + bn.eps)) - fusedconv.bias.copy_(torch.mm(w_bn, b_conv.reshape(-1, 1)).reshape(-1) + b_bn) - - return fusedconv - - -def model_info(model, verbose=False, imgsz=640): - # Model information. img_size may be int or list, i.e. img_size=640 or img_size=[640, 320] - n_p = sum(x.numel() for x in model.parameters()) # number parameters - n_g = sum(x.numel() for x in model.parameters() if x.requires_grad) # number gradients - if verbose: - print(f"{'layer':>5} {'name':>40} {'gradient':>9} {'parameters':>12} {'shape':>20} {'mu':>10} {'sigma':>10}") - for i, (name, p) in enumerate(model.named_parameters()): - name = name.replace('module_list.', '') - print('%5g %40s %9s %12g %20s %10.3g %10.3g' % - (i, name, p.requires_grad, p.numel(), list(p.shape), p.mean(), p.std())) - - try: # FLOPs - p = next(model.parameters()) - stride = max(int(model.stride.max()), 32) if hasattr(model, 'stride') else 32 # max stride - im = torch.empty((1, p.shape[1], stride, stride), device=p.device) # input image in BCHW format - flops = thop.profile(deepcopy(model), inputs=(im, ), verbose=False)[0] / 1E9 * 2 # stride GFLOPs - imgsz = imgsz if isinstance(imgsz, list) else [imgsz, imgsz] # expand if int/float - fs = f', {flops * imgsz[0] / stride * imgsz[1] / stride:.1f} GFLOPs' # 640x640 GFLOPs - except Exception: - fs = '' - - name = Path(model.yaml_file).stem.replace('yolov5', 'YOLOv5') if hasattr(model, 'yaml_file') else 'Model' - LOGGER.info(f'{name} summary: {len(list(model.modules()))} layers, {n_p} parameters, {n_g} gradients{fs}') - - -def scale_img(img, ratio=1.0, same_shape=False, gs=32): # img(16,3,256,416) - # Scales img(bs,3,y,x) by ratio constrained to gs-multiple - if ratio == 1.0: - return img - h, w = img.shape[2:] - s = (int(h * ratio), int(w * ratio)) # new size - img = F.interpolate(img, size=s, mode='bilinear', align_corners=False) # resize - if not same_shape: # pad/crop img - h, w = (math.ceil(x * ratio / gs) * gs for x in (h, w)) - return F.pad(img, [0, w - s[1], 0, h - s[0]], value=0.447) # value = imagenet mean - - -def copy_attr(a, b, include=(), exclude=()): - # Copy attributes from b to a, options to only 
include [...] and to exclude [...] - for k, v in b.__dict__.items(): - if (len(include) and k not in include) or k.startswith('_') or k in exclude: - continue - else: - setattr(a, k, v) - - -def smart_optimizer(model, name='Adam', lr=0.001, momentum=0.9, decay=1e-5): - # YOLOv5 3-param group optimizer: 0) weights with decay, 1) weights no decay, 2) biases no decay - g = [], [], [] # optimizer parameter groups - bn = tuple(v for k, v in nn.__dict__.items() if 'Norm' in k) # normalization layers, i.e. BatchNorm2d() - for v in model.modules(): - for p_name, p in v.named_parameters(recurse=0): - if p_name == 'bias': # bias (no decay) - g[2].append(p) - elif p_name == 'weight' and isinstance(v, bn): # weight (no decay) - g[1].append(p) - else: - g[0].append(p) # weight (with decay) - - if name == 'Adam': - optimizer = torch.optim.Adam(g[2], lr=lr, betas=(momentum, 0.999)) # adjust beta1 to momentum - elif name == 'AdamW': - optimizer = torch.optim.AdamW(g[2], lr=lr, betas=(momentum, 0.999), weight_decay=0.0) - elif name == 'RMSProp': - optimizer = torch.optim.RMSprop(g[2], lr=lr, momentum=momentum) - elif name == 'SGD': - optimizer = torch.optim.SGD(g[2], lr=lr, momentum=momentum, nesterov=True) - else: - raise NotImplementedError(f'Optimizer {name} not implemented.') - - optimizer.add_param_group({'params': g[0], 'weight_decay': decay}) # add g0 with weight_decay - optimizer.add_param_group({'params': g[1], 'weight_decay': 0.0}) # add g1 (BatchNorm2d weights) - LOGGER.info(f"{colorstr('optimizer:')} {type(optimizer).__name__}(lr={lr}) with parameter groups " - f'{len(g[1])} weight(decay=0.0), {len(g[0])} weight(decay={decay}), {len(g[2])} bias') - return optimizer - - -def smart_hub_load(repo='ultralytics/yolov5', model='yolov5s', **kwargs): - # YOLOv5 torch.hub.load() wrapper with smart error/issue handling - if check_version(torch.__version__, '1.9.1'): - kwargs['skip_validation'] = True # validation causes GitHub API rate limit errors - if check_version(torch.__version__, '1.12.0'): - kwargs['trust_repo'] = True # argument required starting in torch 0.12 - try: - return torch.hub.load(repo, model, **kwargs) - except Exception: - return torch.hub.load(repo, model, force_reload=True, **kwargs) - - -def smart_resume(ckpt, optimizer, ema=None, weights='yolov5s.pt', epochs=300, resume=True): - # Resume training from a partially trained checkpoint - best_fitness = 0.0 - start_epoch = ckpt['epoch'] + 1 - if ckpt['optimizer'] is not None: - optimizer.load_state_dict(ckpt['optimizer']) # optimizer - best_fitness = ckpt['best_fitness'] - if ema and ckpt.get('ema'): - ema.ema.load_state_dict(ckpt['ema'].float().state_dict()) # EMA - ema.updates = ckpt['updates'] - if resume: - assert start_epoch > 0, f'{weights} training to {epochs} epochs is finished, nothing to resume.\n' \ - f"Start a new training without --resume, i.e. 'python train.py --weights {weights}'" - LOGGER.info(f'Resuming training from {weights} from epoch {start_epoch} to {epochs} total epochs') - if epochs < start_epoch: - LOGGER.info(f"{weights} has been trained for {ckpt['epoch']} epochs. Fine-tuning for {epochs} more epochs.") - epochs += ckpt['epoch'] # finetune additional epochs - return best_fitness, start_epoch, epochs - - -class EarlyStopping: - # YOLOv5 simple early stopper - def __init__(self, patience=30): - self.best_fitness = 0.0 # i.e. 
mAP - self.best_epoch = 0 - self.patience = patience or float('inf') # epochs to wait after fitness stops improving to stop - self.possible_stop = False # possible stop may occur next epoch - - def __call__(self, epoch, fitness): - if fitness >= self.best_fitness: # >= 0 to allow for early zero-fitness stage of training - self.best_epoch = epoch - self.best_fitness = fitness - delta = epoch - self.best_epoch # epochs without improvement - self.possible_stop = delta >= (self.patience - 1) # possible stop may occur next epoch - stop = delta >= self.patience # stop training if patience exceeded - if stop: - LOGGER.info(f'Stopping training early as no improvement observed in last {self.patience} epochs. ' - f'Best results observed at epoch {self.best_epoch}, best model saved as best.pt.\n' - f'To update EarlyStopping(patience={self.patience}) pass a new patience value, ' - f'i.e. `python train.py --patience 300` or use `--patience 0` to disable EarlyStopping.') - return stop - - -class ModelEMA: - """ Updated Exponential Moving Average (EMA) from https://github.com/rwightman/pytorch-image-models - Keeps a moving average of everything in the model state_dict (parameters and buffers) - For EMA details see https://www.tensorflow.org/api_docs/python/tf/train/ExponentialMovingAverage - """ - - def __init__(self, model, decay=0.9999, tau=2000, updates=0): - # Create EMA - self.ema = deepcopy(de_parallel(model)).eval() # FP32 EMA - self.updates = updates # number of EMA updates - self.decay = lambda x: decay * (1 - math.exp(-x / tau)) # decay exponential ramp (to help early epochs) - for p in self.ema.parameters(): - p.requires_grad_(False) - - def update(self, model): - # Update EMA parameters - self.updates += 1 - d = self.decay(self.updates) - - msd = de_parallel(model).state_dict() # model state_dict - for k, v in self.ema.state_dict().items(): - if v.dtype.is_floating_point: # true for FP16 and FP32 - v *= d - v += (1 - d) * msd[k].detach() - # assert v.dtype == msd[k].dtype == torch.float32, f'{k}: EMA {v.dtype} and model {msd[k].dtype} must be FP32' - - def update_attr(self, model, include=(), exclude=('process_group', 'reducer')): - # Update EMA attributes - copy_attr(self.ema, model, include, exclude) diff --git a/spaces/pikto/Elite-freegpt-webui/client/js/chat.js b/spaces/pikto/Elite-freegpt-webui/client/js/chat.js deleted file mode 100644 index 33e5c31655d4af0f7cbafc16a463c19c84626320..0000000000000000000000000000000000000000 --- a/spaces/pikto/Elite-freegpt-webui/client/js/chat.js +++ /dev/null @@ -1,515 +0,0 @@ -const query = (obj) => - Object.keys(obj) - .map((k) => encodeURIComponent(k) + "=" + encodeURIComponent(obj[k])) - .join("&"); -const url_prefix = document.querySelector('body').getAttribute('data-urlprefix') -const markdown = window.markdownit(); -const message_box = document.getElementById(`messages`); -const message_input = document.getElementById(`message-input`); -const box_conversations = document.querySelector(`.top`); -const spinner = box_conversations.querySelector(".spinner"); -const stop_generating = document.querySelector(`.stop-generating`); -const send_button = document.querySelector(`#send-button`); -const user_image = `User Avatar`; -const gpt_image = `GPT Avatar`; -let prompt_lock = false; - -hljs.addPlugin(new CopyButtonPlugin()); - -message_input.addEventListener("blur", () => { - window.scrollTo(0, 0); -}); - -message_input.addEventListener("focus", () => { - document.documentElement.scrollTop = document.documentElement.scrollHeight; -}); - -const 
delete_conversations = async () => { - localStorage.clear(); - await new_conversation(); -}; - -const handle_ask = async () => { - message_input.style.height = `80px`; - window.scrollTo(0, 0); - let message = message_input.value; - - if (message.length > 0) { - message_input.value = ``; - message_input.dispatchEvent(new Event("input")); - await ask_gpt(message); - } -}; - -const remove_cancel_button = async () => { - stop_generating.classList.add(`stop-generating-hiding`); - - setTimeout(() => { - stop_generating.classList.remove(`stop-generating-hiding`); - stop_generating.classList.add(`stop-generating-hidden`); - }, 300); -}; - -const ask_gpt = async (message) => { - try { - message_input.value = ``; - message_input.innerHTML = ``; - message_input.innerText = ``; - - add_conversation(window.conversation_id, message.substr(0, 16)); - window.scrollTo(0, 0); - window.controller = new AbortController(); - - jailbreak = document.getElementById("jailbreak"); - model = document.getElementById("model"); - prompt_lock = true; - window.text = ``; - window.token = message_id(); - - stop_generating.classList.remove(`stop-generating-hidden`); - - add_user_message_box(message); - - message_box.scrollTop = message_box.scrollHeight; - window.scrollTo(0, 0); - await new Promise((r) => setTimeout(r, 500)); - window.scrollTo(0, 0); - - message_box.innerHTML += ` -
-      
      - `; - - message_box.scrollTop = message_box.scrollHeight; - window.scrollTo(0, 0); - await new Promise((r) => setTimeout(r, 1000)); - window.scrollTo(0, 0); - - const response = await fetch(`${url_prefix}/backend-api/v2/conversation`, { - method: `POST`, - signal: window.controller.signal, - headers: { - "content-type": `application/json`, - accept: `text/event-stream`, - }, - body: JSON.stringify({ - conversation_id: window.conversation_id, - action: `_ask`, - model: model.options[model.selectedIndex].value, - jailbreak: jailbreak.options[jailbreak.selectedIndex].value, - meta: { - id: window.token, - content: { - conversation: await get_conversation(window.conversation_id), - internet_access: document.getElementById("switch").checked, - content_type: "text", - parts: [ - { - content: message, - role: "user", - }, - ], - }, - }, - }), - }); - - const reader = response.body.getReader(); - - while (true) { - const { value, done } = await reader.read(); - if (done) break; - - chunk = decodeUnicode(new TextDecoder().decode(value)); - - if (chunk.includes(`
      { - const messageDiv = document.createElement("div"); - messageDiv.classList.add("message"); - - const avatarContainer = document.createElement("div"); - avatarContainer.classList.add("avatar-container"); - avatarContainer.innerHTML = user_image; - - const contentDiv = document.createElement("div"); - contentDiv.classList.add("content"); - contentDiv.id = `user_${token}`; - contentDiv.innerText = message; - - messageDiv.appendChild(avatarContainer); - messageDiv.appendChild(contentDiv); - - message_box.appendChild(messageDiv); -}; - -const decodeUnicode = (str) => { - return str.replace(/\\u([a-fA-F0-9]{4})/g, function (match, grp) { - return String.fromCharCode(parseInt(grp, 16)); - }); -}; - -const clear_conversations = async () => { - const elements = box_conversations.childNodes; - let index = elements.length; - - if (index > 0) { - while (index--) { - const element = elements[index]; - if (element.nodeType === Node.ELEMENT_NODE && element.tagName.toLowerCase() !== `button`) { - box_conversations.removeChild(element); - } - } - } -}; - -const clear_conversation = async () => { - let messages = message_box.getElementsByTagName(`div`); - - while (messages.length > 0) { - message_box.removeChild(messages[0]); - } -}; - -const delete_conversation = async (conversation_id) => { - localStorage.removeItem(`conversation:${conversation_id}`); - - if (window.conversation_id == conversation_id) { - await new_conversation(); - } - - await load_conversations(20, 0, true); -}; - -const set_conversation = async (conversation_id) => { - history.pushState({}, null, `${url_prefix}/chat/${conversation_id}`); - window.conversation_id = conversation_id; - - await clear_conversation(); - await load_conversation(conversation_id); - await load_conversations(20, 0, true); -}; - -const new_conversation = async () => { - history.pushState({}, null, `${url_prefix}/chat/`); - window.conversation_id = uuid(); - - await clear_conversation(); - await load_conversations(20, 0, true); -}; - -const load_conversation = async (conversation_id) => { - let conversation = await JSON.parse(localStorage.getItem(`conversation:${conversation_id}`)); - console.log(conversation, conversation_id); - - for (item of conversation.items) { - if (is_assistant(item.role)) { - message_box.innerHTML += load_gpt_message_box(item.content); - } else { - message_box.innerHTML += load_user_message_box(item.content); - } - } - - document.querySelectorAll(`code`).forEach((el) => { - hljs.highlightElement(el); - }); - - message_box.scrollTo({ top: message_box.scrollHeight, behavior: "smooth" }); - - setTimeout(() => { - message_box.scrollTop = message_box.scrollHeight; - }, 500); -}; - -const load_user_message_box = (content) => { - const messageDiv = document.createElement("div"); - messageDiv.classList.add("message"); - - const avatarContainer = document.createElement("div"); - avatarContainer.classList.add("avatar-container"); - avatarContainer.innerHTML = user_image; - - const contentDiv = document.createElement("div"); - contentDiv.classList.add("content"); - contentDiv.innerText = content; - - messageDiv.appendChild(avatarContainer); - messageDiv.appendChild(contentDiv); - - return messageDiv.outerHTML; -}; - -const load_gpt_message_box = (content) => { - return ` -
-      
-      ${markdown.render(content)}
      - `; -}; - -const is_assistant = (role) => { - return role == "assistant"; -}; - -const get_conversation = async (conversation_id) => { - let conversation = await JSON.parse(localStorage.getItem(`conversation:${conversation_id}`)); - return conversation.items; -}; - -const add_conversation = async (conversation_id, title) => { - if (localStorage.getItem(`conversation:${conversation_id}`) == null) { - localStorage.setItem( - `conversation:${conversation_id}`, - JSON.stringify({ - id: conversation_id, - title: title, - items: [], - }) - ); - } -}; - -const add_message = async (conversation_id, role, content) => { - before_adding = JSON.parse(localStorage.getItem(`conversation:${conversation_id}`)); - - before_adding.items.push({ - role: role, - content: content, - }); - - localStorage.setItem(`conversation:${conversation_id}`, JSON.stringify(before_adding)); // update conversation -}; - -const load_conversations = async (limit, offset, loader) => { - //console.log(loader); - //if (loader === undefined) box_conversations.appendChild(spinner); - - let conversations = []; - for (let i = 0; i < localStorage.length; i++) { - if (localStorage.key(i).startsWith("conversation:")) { - let conversation = localStorage.getItem(localStorage.key(i)); - conversations.push(JSON.parse(conversation)); - } - } - - //if (loader === undefined) spinner.parentNode.removeChild(spinner) - await clear_conversations(); - - for (conversation of conversations) { - box_conversations.innerHTML += ` -
-      
      - `; - } - - document.querySelectorAll(`code`).forEach((el) => { - hljs.highlightElement(el); - }); -}; - -document.getElementById(`cancelButton`).addEventListener(`click`, async () => { - window.controller.abort(); - console.log(`aborted ${window.conversation_id}`); -}); - -function h2a(str1) { - var hex = str1.toString(); - var str = ""; - - for (var n = 0; n < hex.length; n += 2) { - str += String.fromCharCode(parseInt(hex.substr(n, 2), 16)); - } - - return str; -} - -const uuid = () => { - return `xxxxxxxx-xxxx-4xxx-yxxx-${Date.now().toString(16)}`.replace(/[xy]/g, function (c) { - var r = (Math.random() * 16) | 0, - v = c == "x" ? r : (r & 0x3) | 0x8; - return v.toString(16); - }); -}; - -const message_id = () => { - random_bytes = (Math.floor(Math.random() * 1338377565) + 2956589730).toString(2); - unix = Math.floor(Date.now() / 1000).toString(2); - - return BigInt(`0b${unix}${random_bytes}`).toString(); -}; - -window.onload = async () => { - load_settings_localstorage(); - - conversations = 0; - for (let i = 0; i < localStorage.length; i++) { - if (localStorage.key(i).startsWith("conversation:")) { - conversations += 1; - } - } - - if (conversations == 0) localStorage.clear(); - - await setTimeout(() => { - load_conversations(20, 0); - }, 1); - - if (!window.location.href.endsWith(`#`)) { - if (/\/chat\/.+/.test(window.location.href.slice(url_prefix.length))) { - await load_conversation(window.conversation_id); - } - } - - message_input.addEventListener("keydown", async (evt) => { - if (prompt_lock) return; - - if (evt.key === "Enter" && !evt.shiftKey) { - evt.preventDefault(); - await handle_ask(); - } - }); - - send_button.addEventListener("click", async (event) => { - event.preventDefault(); - if (prompt_lock) return; - message_input.blur(); - await handle_ask(); - }); - - register_settings_localstorage(); -}; - -document.querySelector(".mobile-sidebar").addEventListener("click", (event) => { - const sidebar = document.querySelector(".sidebar"); - - if (sidebar.classList.contains("shown")) { - sidebar.classList.remove("shown"); - event.target.classList.remove("rotated"); - document.body.style.overflow = "auto"; - } else { - sidebar.classList.add("shown"); - event.target.classList.add("rotated"); - document.body.style.overflow = "hidden"; - } - - window.scrollTo(0, 0); -}); - -const register_settings_localstorage = async () => { - settings_ids = ["switch", "model", "jailbreak"]; - settings_elements = settings_ids.map((id) => document.getElementById(id)); - settings_elements.map((element) => - element.addEventListener(`change`, async (event) => { - switch (event.target.type) { - case "checkbox": - localStorage.setItem(event.target.id, event.target.checked); - break; - case "select-one": - localStorage.setItem(event.target.id, event.target.selectedIndex); - break; - default: - console.warn("Unresolved element type"); - } - }) - ); -}; - -const load_settings_localstorage = async () => { - settings_ids = ["switch", "model", "jailbreak"]; - settings_elements = settings_ids.map((id) => document.getElementById(id)); - settings_elements.map((element) => { - if (localStorage.getItem(element.id)) { - switch (element.type) { - case "checkbox": - element.checked = localStorage.getItem(element.id) === "true"; - break; - case "select-one": - element.selectedIndex = parseInt(localStorage.getItem(element.id)); - break; - default: - console.warn("Unresolved element type"); - } - } - }); -}; - -function clearTextarea(textarea) { - textarea.style.removeProperty("height"); - 
textarea.style.height = `${textarea.scrollHeight + 4}px`; - - if (textarea.value.trim() === "" && textarea.value.includes("\n")) { - textarea.value = ""; - } -} diff --git a/spaces/prerna9811/Chord/portaudio/include/pa_asio.h b/spaces/prerna9811/Chord/portaudio/include/pa_asio.h deleted file mode 100644 index 27cbd3b81c99725bb23174e0b4f3db7e62343f78..0000000000000000000000000000000000000000 --- a/spaces/prerna9811/Chord/portaudio/include/pa_asio.h +++ /dev/null @@ -1,150 +0,0 @@ -#ifndef PA_ASIO_H -#define PA_ASIO_H -/* - * $Id$ - * PortAudio Portable Real-Time Audio Library - * ASIO specific extensions - * - * Copyright (c) 1999-2000 Ross Bencina and Phil Burk - * - * Permission is hereby granted, free of charge, to any person obtaining - * a copy of this software and associated documentation files - * (the "Software"), to deal in the Software without restriction, - * including without limitation the rights to use, copy, modify, merge, - * publish, distribute, sublicense, and/or sell copies of the Software, - * and to permit persons to whom the Software is furnished to do so, - * subject to the following conditions: - * - * The above copyright notice and this permission notice shall be - * included in all copies or substantial portions of the Software. - * - * THE SOFTWARE IS PROVIDED "AS IS", WITHOUT WARRANTY OF ANY KIND, - * EXPRESS OR IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF - * MERCHANTABILITY, FITNESS FOR A PARTICULAR PURPOSE AND NONINFRINGEMENT. - * IN NO EVENT SHALL THE AUTHORS OR COPYRIGHT HOLDERS BE LIABLE FOR - * ANY CLAIM, DAMAGES OR OTHER LIABILITY, WHETHER IN AN ACTION OF - * CONTRACT, TORT OR OTHERWISE, ARISING FROM, OUT OF OR IN CONNECTION - * WITH THE SOFTWARE OR THE USE OR OTHER DEALINGS IN THE SOFTWARE. - */ - -/* - * The text above constitutes the entire PortAudio license; however, - * the PortAudio community also makes the following non-binding requests: - * - * Any person wishing to distribute modifications to the Software is - * requested to send the modifications to the original developer so that - * they can be incorporated into the canonical version. It is also - * requested that these non-binding requests be included along with the - * license above. - */ - - -/** @file - @ingroup public_header - @brief ASIO-specific PortAudio API extension header file. -*/ - -#include "portaudio.h" - -#ifdef __cplusplus -extern "C" -{ -#endif /* __cplusplus */ - - -/** Retrieve legal native buffer sizes for the specified device, in sample frames. - - @param device The global index of the device about which the query is being made. - @param minBufferSizeFrames A pointer to the location which will receive the minimum buffer size value. - @param maxBufferSizeFrames A pointer to the location which will receive the maximum buffer size value. - @param preferredBufferSizeFrames A pointer to the location which will receive the preferred buffer size value. - @param granularity A pointer to the location which will receive the "granularity". This value determines - the step size used to compute the legal values between minBufferSizeFrames and maxBufferSizeFrames. - If granularity is -1 then available buffer size values are powers of two. - - @see ASIOGetBufferSize in the ASIO SDK. - - @note: this function used to be called PaAsio_GetAvailableLatencyValues. There is a - #define that maps PaAsio_GetAvailableLatencyValues to this function for backwards compatibility. 
-*/ -PaError PaAsio_GetAvailableBufferSizes( PaDeviceIndex device, - long *minBufferSizeFrames, long *maxBufferSizeFrames, long *preferredBufferSizeFrames, long *granularity ); - - -/** Backwards compatibility alias for PaAsio_GetAvailableBufferSizes - - @see PaAsio_GetAvailableBufferSizes -*/ -#define PaAsio_GetAvailableLatencyValues PaAsio_GetAvailableBufferSizes - - -/** Display the ASIO control panel for the specified device. - - @param device The global index of the device whose control panel is to be displayed. - @param systemSpecific On Windows, the calling application's main window handle, - on Macintosh this value should be zero. -*/ -PaError PaAsio_ShowControlPanel( PaDeviceIndex device, void* systemSpecific ); - - - - -/** Retrieve a pointer to a string containing the name of the specified - input channel. The string is valid until Pa_Terminate is called. - - The string will be no longer than 32 characters including the null terminator. -*/ -PaError PaAsio_GetInputChannelName( PaDeviceIndex device, int channelIndex, - const char** channelName ); - - -/** Retrieve a pointer to a string containing the name of the specified - input channel. The string is valid until Pa_Terminate is called. - - The string will be no longer than 32 characters including the null terminator. -*/ -PaError PaAsio_GetOutputChannelName( PaDeviceIndex device, int channelIndex, - const char** channelName ); - - -/** Set the sample rate of an open paASIO stream. - - @param stream The stream to operate on. - @param sampleRate The new sample rate. - - Note that this function may fail if the stream is already running and the - ASIO driver does not support switching the sample rate of a running stream. - - Returns paIncompatibleStreamHostApi if stream is not a paASIO stream. -*/ -PaError PaAsio_SetStreamSampleRate( PaStream* stream, double sampleRate ); - - -#define paAsioUseChannelSelectors (0x01) - -typedef struct PaAsioStreamInfo{ - unsigned long size; /**< sizeof(PaAsioStreamInfo) */ - PaHostApiTypeId hostApiType; /**< paASIO */ - unsigned long version; /**< 1 */ - - unsigned long flags; - - /* Support for opening only specific channels of an ASIO device. - If the paAsioUseChannelSelectors flag is set, channelSelectors is a - pointer to an array of integers specifying the device channels to use. - When used, the length of the channelSelectors array must match the - corresponding channelCount parameter to Pa_OpenStream() otherwise a - crash may result. - The values in the selectors array must specify channels within the - range of supported channels for the device or paInvalidChannelCount will - result. 
- */ - int *channelSelectors; -}PaAsioStreamInfo; - - -#ifdef __cplusplus -} -#endif /* __cplusplus */ - -#endif /* PA_ASIO_H */ diff --git a/spaces/profayle/TerrapinTalk/myenv/lib/python3.9/site-packages/fontTools/ttLib/tables/T_S_I_B_.py b/spaces/profayle/TerrapinTalk/myenv/lib/python3.9/site-packages/fontTools/ttLib/tables/T_S_I_B_.py deleted file mode 100644 index 8a6c14c444595508c35bdc6ebace60b4bbbbdaba..0000000000000000000000000000000000000000 --- a/spaces/profayle/TerrapinTalk/myenv/lib/python3.9/site-packages/fontTools/ttLib/tables/T_S_I_B_.py +++ /dev/null @@ -1,5 +0,0 @@ -from .T_S_I_V_ import table_T_S_I_V_ - - -class table_T_S_I_B_(table_T_S_I_V_): - pass diff --git a/spaces/profayle/TerrapinTalk/myenv/lib/python3.9/site-packages/fontTools/ttLib/tables/otTraverse.py b/spaces/profayle/TerrapinTalk/myenv/lib/python3.9/site-packages/fontTools/ttLib/tables/otTraverse.py deleted file mode 100644 index bf22dcfdb500cd50525fce749562384a82b1cb0f..0000000000000000000000000000000000000000 --- a/spaces/profayle/TerrapinTalk/myenv/lib/python3.9/site-packages/fontTools/ttLib/tables/otTraverse.py +++ /dev/null @@ -1,161 +0,0 @@ -"""Methods for traversing trees of otData-driven OpenType tables.""" -from collections import deque -from typing import Callable, Deque, Iterable, List, Optional, Tuple -from .otBase import BaseTable - - -__all__ = [ - "bfs_base_table", - "dfs_base_table", - "SubTablePath", -] - - -class SubTablePath(Tuple[BaseTable.SubTableEntry, ...]): - def __str__(self) -> str: - path_parts = [] - for entry in self: - path_part = entry.name - if entry.index is not None: - path_part += f"[{entry.index}]" - path_parts.append(path_part) - return ".".join(path_parts) - - -# Given f(current frontier, new entries) add new entries to frontier -AddToFrontierFn = Callable[[Deque[SubTablePath], List[SubTablePath]], None] - - -def dfs_base_table( - root: BaseTable, - root_accessor: Optional[str] = None, - skip_root: bool = False, - predicate: Optional[Callable[[SubTablePath], bool]] = None, - iter_subtables_fn: Optional[ - Callable[[BaseTable], Iterable[BaseTable.SubTableEntry]] - ] = None, -) -> Iterable[SubTablePath]: - """Depth-first search tree of BaseTables. - - Args: - root (BaseTable): the root of the tree. - root_accessor (Optional[str]): attribute name for the root table, if any (mostly - useful for debugging). - skip_root (Optional[bool]): if True, the root itself is not visited, only its - children. - predicate (Optional[Callable[[SubTablePath], bool]]): function to filter out - paths. If True, the path is yielded and its subtables are added to the - queue. If False, the path is skipped and its subtables are not traversed. - iter_subtables_fn (Optional[Callable[[BaseTable], Iterable[BaseTable.SubTableEntry]]]): - function to iterate over subtables of a table. If None, the default - BaseTable.iterSubTables() is used. - - Yields: - SubTablePath: tuples of BaseTable.SubTableEntry(name, table, index) namedtuples - for each of the nodes in the tree. The last entry in a path is the current - subtable, whereas preceding ones refer to its parent tables all the way up to - the root. 
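A minimal sketch of driving `dfs_base_table` over a real font, assuming only that fontTools is installed; `MyFont.ttf` is a placeholder path, not a file in this repository:

```python
# Walk a GSUB table depth-first and print every subtable path.
from fontTools.ttLib import TTFont
from fontTools.ttLib.tables.otTraverse import dfs_base_table

font = TTFont("MyFont.ttf")   # placeholder: any OpenType font with a GSUB table
gsub = font["GSUB"].table     # root BaseTable of the GSUB tree

# The predicate prunes the walk: paths deeper than three entries are
# neither yielded nor expanded further.
for path in dfs_base_table(gsub, root_accessor="GSUB",
                           predicate=lambda path: len(path) <= 3):
    print(path)   # SubTablePath renders as e.g. "GSUB.LookupList.Lookup[0]"
```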
- """ - yield from _traverse_ot_data( - root, - root_accessor, - skip_root, - predicate, - lambda frontier, new: frontier.extendleft(reversed(new)), - iter_subtables_fn, - ) - - -def bfs_base_table( - root: BaseTable, - root_accessor: Optional[str] = None, - skip_root: bool = False, - predicate: Optional[Callable[[SubTablePath], bool]] = None, - iter_subtables_fn: Optional[ - Callable[[BaseTable], Iterable[BaseTable.SubTableEntry]] - ] = None, -) -> Iterable[SubTablePath]: - """Breadth-first search tree of BaseTables. - - Args: - the root of the tree. - root_accessor (Optional[str]): attribute name for the root table, if any (mostly - useful for debugging). - skip_root (Optional[bool]): if True, the root itself is not visited, only its - children. - predicate (Optional[Callable[[SubTablePath], bool]]): function to filter out - paths. If True, the path is yielded and its subtables are added to the - queue. If False, the path is skipped and its subtables are not traversed. - iter_subtables_fn (Optional[Callable[[BaseTable], Iterable[BaseTable.SubTableEntry]]]): - function to iterate over subtables of a table. If None, the default - BaseTable.iterSubTables() is used. - - Yields: - SubTablePath: tuples of BaseTable.SubTableEntry(name, table, index) namedtuples - for each of the nodes in the tree. The last entry in a path is the current - subtable, whereas preceding ones refer to its parent tables all the way up to - the root. - """ - yield from _traverse_ot_data( - root, - root_accessor, - skip_root, - predicate, - lambda frontier, new: frontier.extend(new), - iter_subtables_fn, - ) - - -def _traverse_ot_data( - root: BaseTable, - root_accessor: Optional[str], - skip_root: bool, - predicate: Optional[Callable[[SubTablePath], bool]], - add_to_frontier_fn: AddToFrontierFn, - iter_subtables_fn: Optional[ - Callable[[BaseTable], Iterable[BaseTable.SubTableEntry]] - ] = None, -) -> Iterable[SubTablePath]: - # no visited because general otData cannot cycle (forward-offset only) - if root_accessor is None: - root_accessor = type(root).__name__ - - if predicate is None: - - def predicate(path): - return True - - if iter_subtables_fn is None: - - def iter_subtables_fn(table): - return table.iterSubTables() - - frontier: Deque[SubTablePath] = deque() - - root_entry = BaseTable.SubTableEntry(root_accessor, root) - if not skip_root: - frontier.append((root_entry,)) - else: - add_to_frontier_fn( - frontier, - [ - (root_entry, subtable_entry) - for subtable_entry in iter_subtables_fn(root) - ], - ) - - while frontier: - # path is (value, attr_name) tuples. attr_name is attr of parent to get value - path = frontier.popleft() - current = path[-1].value - - if not predicate(path): - continue - - yield SubTablePath(path) - - new_entries = [ - path + (subtable_entry,) for subtable_entry in iter_subtables_fn(current) - ] - - add_to_frontier_fn(frontier, new_entries) diff --git a/spaces/profayle/TerrapinTalk/myenv/lib/python3.9/site-packages/gradio/_frontend_code/lite/index.html b/spaces/profayle/TerrapinTalk/myenv/lib/python3.9/site-packages/gradio/_frontend_code/lite/index.html deleted file mode 100644 index 0950c155869025b08047eff682152fe164393b0b..0000000000000000000000000000000000000000 --- a/spaces/profayle/TerrapinTalk/myenv/lib/python3.9/site-packages/gradio/_frontend_code/lite/index.html +++ /dev/null @@ -1,49 +0,0 @@ - - - - - - - - - - - - - - - -
      - - - - diff --git a/spaces/profayle/TerrapinTalk/myenv/lib/python3.9/site-packages/matplotlib/ticker.py b/spaces/profayle/TerrapinTalk/myenv/lib/python3.9/site-packages/matplotlib/ticker.py deleted file mode 100644 index 958e25d7b2c7bca82f5b9a1e604c52e9b63f9290..0000000000000000000000000000000000000000 --- a/spaces/profayle/TerrapinTalk/myenv/lib/python3.9/site-packages/matplotlib/ticker.py +++ /dev/null @@ -1,2944 +0,0 @@ -""" -Tick locating and formatting -============================ - -This module contains classes for configuring tick locating and formatting. -Generic tick locators and formatters are provided, as well as domain specific -custom ones. - -Although the locators know nothing about major or minor ticks, they are used -by the Axis class to support major and minor tick locating and formatting. - -.. _tick_locating: -.. _locators: - -Tick locating -------------- - -The Locator class is the base class for all tick locators. The locators -handle autoscaling of the view limits based on the data limits, and the -choosing of tick locations. A useful semi-automatic tick locator is -`MultipleLocator`. It is initialized with a base, e.g., 10, and it picks -axis limits and ticks that are multiples of that base. - -The Locator subclasses defined here are: - -======================= ======================================================= -`AutoLocator` `MaxNLocator` with simple defaults. This is the default - tick locator for most plotting. -`MaxNLocator` Finds up to a max number of intervals with ticks at - nice locations. -`LinearLocator` Space ticks evenly from min to max. -`LogLocator` Space ticks logarithmically from min to max. -`MultipleLocator` Ticks and range are a multiple of base; either integer - or float. -`FixedLocator` Tick locations are fixed. -`IndexLocator` Locator for index plots (e.g., where - ``x = range(len(y))``). -`NullLocator` No ticks. -`SymmetricalLogLocator` Locator for use with the symlog norm; works like - `LogLocator` for the part outside of the threshold and - adds 0 if inside the limits. -`AsinhLocator` Locator for use with the asinh norm, attempting to - space ticks approximately uniformly. -`LogitLocator` Locator for logit scaling. -`AutoMinorLocator` Locator for minor ticks when the axis is linear and the - major ticks are uniformly spaced. Subdivides the major - tick interval into a specified number of minor - intervals, defaulting to 4 or 5 depending on the major - interval. -======================= ======================================================= - -There are a number of locators specialized for date locations - see -the :mod:`.dates` module. - -You can define your own locator by deriving from Locator. You must -override the ``__call__`` method, which returns a sequence of locations, -and you will probably want to override the autoscale method to set the -view limits from the data limits. - -If you want to override the default locator, use one of the above or a custom -locator and pass it to the x- or y-axis instance. The relevant methods are:: - - ax.xaxis.set_major_locator(xmajor_locator) - ax.xaxis.set_minor_locator(xminor_locator) - ax.yaxis.set_major_locator(ymajor_locator) - ax.yaxis.set_minor_locator(yminor_locator) - -The default minor locator is `NullLocator`, i.e., no minor ticks on by default. - -.. note:: - `Locator` instances should not be used with more than one - `~matplotlib.axis.Axis` or `~matplotlib.axes.Axes`. 
So instead of:: - - locator = MultipleLocator(5) - ax.xaxis.set_major_locator(locator) - ax2.xaxis.set_major_locator(locator) - - do the following instead:: - - ax.xaxis.set_major_locator(MultipleLocator(5)) - ax2.xaxis.set_major_locator(MultipleLocator(5)) - -.. _formatters: - -Tick formatting ---------------- - -Tick formatting is controlled by classes derived from Formatter. The formatter -operates on a single tick value and returns a string to the axis. - -========================= ===================================================== -`NullFormatter` No labels on the ticks. -`FixedFormatter` Set the strings manually for the labels. -`FuncFormatter` User defined function sets the labels. -`StrMethodFormatter` Use string `format` method. -`FormatStrFormatter` Use an old-style sprintf format string. -`ScalarFormatter` Default formatter for scalars: autopick the format - string. -`LogFormatter` Formatter for log axes. -`LogFormatterExponent` Format values for log axis using - ``exponent = log_base(value)``. -`LogFormatterMathtext` Format values for log axis using - ``exponent = log_base(value)`` using Math text. -`LogFormatterSciNotation` Format values for log axis using scientific notation. -`LogitFormatter` Probability formatter. -`EngFormatter` Format labels in engineering notation. -`PercentFormatter` Format labels as a percentage. -========================= ===================================================== - -You can derive your own formatter from the Formatter base class by -simply overriding the ``__call__`` method. The formatter class has -access to the axis view and data limits. - -To control the major and minor tick label formats, use one of the -following methods:: - - ax.xaxis.set_major_formatter(xmajor_formatter) - ax.xaxis.set_minor_formatter(xminor_formatter) - ax.yaxis.set_major_formatter(ymajor_formatter) - ax.yaxis.set_minor_formatter(yminor_formatter) - -In addition to a `.Formatter` instance, `~.Axis.set_major_formatter` and -`~.Axis.set_minor_formatter` also accept a ``str`` or function. ``str`` input -will be internally replaced with an autogenerated `.StrMethodFormatter` with -the input ``str``. For function input, a `.FuncFormatter` with the input -function will be generated and used. - -See :doc:`/gallery/ticks/major_minor_demo` for an example of setting major -and minor ticks. See the :mod:`matplotlib.dates` module for more information -and examples of using date locators and formatters. 
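As a quick, hedged illustration of the wiring described above (assuming nothing beyond matplotlib itself): a `MultipleLocator` for major and minor ticks, paired with a bare function that is auto-wrapped in a `FuncFormatter`.

```python
# Sketch of the locator/formatter setup described above.
import matplotlib.pyplot as plt
from matplotlib.ticker import FuncFormatter, MultipleLocator

fig, ax = plt.subplots()
ax.plot(range(101), range(101))

# Major ticks at multiples of 20, minor ticks at multiples of 5.
ax.xaxis.set_major_locator(MultipleLocator(20))
ax.xaxis.set_minor_locator(MultipleLocator(5))

# A bare function is wrapped in a FuncFormatter automatically;
# passing FuncFormatter explicitly is equivalent.
ax.xaxis.set_major_formatter(lambda x, pos: f"{x:g} units")
ax.yaxis.set_major_formatter(FuncFormatter(lambda y, pos: f"{y:g}"))

plt.show()
```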
-""" - -import itertools -import logging -import locale -import math -from numbers import Integral - -import numpy as np - -import matplotlib as mpl -from matplotlib import _api, cbook -from matplotlib import transforms as mtransforms - -_log = logging.getLogger(__name__) - -__all__ = ('TickHelper', 'Formatter', 'FixedFormatter', - 'NullFormatter', 'FuncFormatter', 'FormatStrFormatter', - 'StrMethodFormatter', 'ScalarFormatter', 'LogFormatter', - 'LogFormatterExponent', 'LogFormatterMathtext', - 'LogFormatterSciNotation', - 'LogitFormatter', 'EngFormatter', 'PercentFormatter', - 'Locator', 'IndexLocator', 'FixedLocator', 'NullLocator', - 'LinearLocator', 'LogLocator', 'AutoLocator', - 'MultipleLocator', 'MaxNLocator', 'AutoMinorLocator', - 'SymmetricalLogLocator', 'AsinhLocator', 'LogitLocator') - - -class _DummyAxis: - __name__ = "dummy" - - def __init__(self, minpos=0): - self._data_interval = (0, 1) - self._view_interval = (0, 1) - self._minpos = minpos - - def get_view_interval(self): - return self._view_interval - - def set_view_interval(self, vmin, vmax): - self._view_interval = (vmin, vmax) - - def get_minpos(self): - return self._minpos - - def get_data_interval(self): - return self._data_interval - - def set_data_interval(self, vmin, vmax): - self._data_interval = (vmin, vmax) - - def get_tick_space(self): - # Just use the long-standing default of nbins==9 - return 9 - - -class TickHelper: - axis = None - - def set_axis(self, axis): - self.axis = axis - - def create_dummy_axis(self, **kwargs): - if self.axis is None: - self.axis = _DummyAxis(**kwargs) - - -class Formatter(TickHelper): - """ - Create a string based on a tick value and location. - """ - # some classes want to see all the locs to help format - # individual ones - locs = [] - - def __call__(self, x, pos=None): - """ - Return the format for tick value *x* at position pos. - ``pos=None`` indicates an unspecified location. - """ - raise NotImplementedError('Derived must override') - - def format_ticks(self, values): - """Return the tick labels for all the ticks at once.""" - self.set_locs(values) - return [self(value, i) for i, value in enumerate(values)] - - def format_data(self, value): - """ - Return the full string representation of the value with the - position unspecified. - """ - return self.__call__(value) - - def format_data_short(self, value): - """ - Return a short string version of the tick value. - - Defaults to the position-independent long value. - """ - return self.format_data(value) - - def get_offset(self): - return '' - - def set_locs(self, locs): - """ - Set the locations of the ticks. - - This method is called before computing the tick labels because some - formatters need to know all tick locations to do so. - """ - self.locs = locs - - @staticmethod - def fix_minus(s): - """ - Some classes may want to replace a hyphen for minus with the proper - Unicode symbol (U+2212) for typographical correctness. This is a - helper method to perform such a replacement when it is enabled via - :rc:`axes.unicode_minus`. - """ - return (s.replace('-', '\N{MINUS SIGN}') - if mpl.rcParams['axes.unicode_minus'] - else s) - - def _set_locator(self, locator): - """Subclasses may want to override this to set a locator.""" - pass - - -class NullFormatter(Formatter): - """Always return the empty string.""" - - def __call__(self, x, pos=None): - # docstring inherited - return '' - - -class FixedFormatter(Formatter): - """ - Return fixed strings for tick labels based only on position, not value. - - .. 
note:: - `.FixedFormatter` should only be used together with `.FixedLocator`. - Otherwise, the labels may end up in unexpected positions. - """ - - def __init__(self, seq): - """Set the sequence *seq* of strings that will be used for labels.""" - self.seq = seq - self.offset_string = '' - - def __call__(self, x, pos=None): - """ - Return the label that matches the position, regardless of the value. - - For positions ``pos < len(seq)``, return ``seq[i]`` regardless of - *x*. Otherwise return empty string. ``seq`` is the sequence of - strings that this object was initialized with. - """ - if pos is None or pos >= len(self.seq): - return '' - else: - return self.seq[pos] - - def get_offset(self): - return self.offset_string - - def set_offset_string(self, ofs): - self.offset_string = ofs - - -class FuncFormatter(Formatter): - """ - Use a user-defined function for formatting. - - The function should take in two inputs (a tick value ``x`` and a - position ``pos``), and return a string containing the corresponding - tick label. - """ - - def __init__(self, func): - self.func = func - self.offset_string = "" - - def __call__(self, x, pos=None): - """ - Return the value of the user defined function. - - *x* and *pos* are passed through as-is. - """ - return self.func(x, pos) - - def get_offset(self): - return self.offset_string - - def set_offset_string(self, ofs): - self.offset_string = ofs - - -class FormatStrFormatter(Formatter): - """ - Use an old-style ('%' operator) format string to format the tick. - - The format string should have a single variable format (%) in it. - It will be applied to the value (not the position) of the tick. - - Negative numeric values will use a dash, not a Unicode minus; use mathtext - to get a Unicode minus by wrapping the format specifier with $ (e.g. - "$%g$"). - """ - def __init__(self, fmt): - self.fmt = fmt - - def __call__(self, x, pos=None): - """ - Return the formatted label string. - - Only the value *x* is formatted. The position is ignored. - """ - return self.fmt % x - - -class StrMethodFormatter(Formatter): - """ - Use a new-style format string (as used by `str.format`) to format the tick. - - The field used for the tick value must be labeled *x* and the field used - for the tick position must be labeled *pos*. - """ - def __init__(self, fmt): - self.fmt = fmt - - def __call__(self, x, pos=None): - """ - Return the formatted label string. - - *x* and *pos* are passed to `str.format` as keyword arguments - with those exact names. - """ - return self.fmt.format(x=x, pos=pos) - - -class ScalarFormatter(Formatter): - """ - Format tick values as a number. - - Parameters - ---------- - useOffset : bool or float, default: :rc:`axes.formatter.useoffset` - Whether to use offset notation. See `.set_useOffset`. - useMathText : bool, default: :rc:`axes.formatter.use_mathtext` - Whether to use fancy math formatting. See `.set_useMathText`. - useLocale : bool, default: :rc:`axes.formatter.use_locale`. - Whether to use locale settings for decimal sign and positive sign. - See `.set_useLocale`. - - Notes - ----- - In addition to the parameters above, the formatting of scientific vs. - floating point representation can be configured via `.set_scientific` - and `.set_powerlimits`). - - **Offset notation and scientific notation** - - Offset notation and scientific notation look quite similar at first sight. - Both split some information from the formatted tick values and display it - at the end of the axis. 
- - - The scientific notation splits up the order of magnitude, i.e. a - multiplicative scaling factor, e.g. ``1e6``. - - - The offset notation separates an additive constant, e.g. ``+1e6``. The - offset notation label is always prefixed with a ``+`` or ``-`` sign - and is thus distinguishable from the order of magnitude label. - - The following plot with x limits ``1_000_000`` to ``1_000_010`` illustrates - the different formatting. Note the labels at the right edge of the x axis. - - .. plot:: - - lim = (1_000_000, 1_000_010) - - fig, (ax1, ax2, ax3) = plt.subplots(3, 1, gridspec_kw={'hspace': 2}) - ax1.set(title='offset_notation', xlim=lim) - ax2.set(title='scientific notation', xlim=lim) - ax2.xaxis.get_major_formatter().set_useOffset(False) - ax3.set(title='floating point notation', xlim=lim) - ax3.xaxis.get_major_formatter().set_useOffset(False) - ax3.xaxis.get_major_formatter().set_scientific(False) - - """ - - def __init__(self, useOffset=None, useMathText=None, useLocale=None): - if useOffset is None: - useOffset = mpl.rcParams['axes.formatter.useoffset'] - self._offset_threshold = \ - mpl.rcParams['axes.formatter.offset_threshold'] - self.set_useOffset(useOffset) - self._usetex = mpl.rcParams['text.usetex'] - self.set_useMathText(useMathText) - self.orderOfMagnitude = 0 - self.format = '' - self._scientific = True - self._powerlimits = mpl.rcParams['axes.formatter.limits'] - self.set_useLocale(useLocale) - - def get_useOffset(self): - """ - Return whether automatic mode for offset notation is active. - - This returns True if ``set_useOffset(True)``; it returns False if an - explicit offset was set, e.g. ``set_useOffset(1000)``. - - See Also - -------- - ScalarFormatter.set_useOffset - """ - return self._useOffset - - def set_useOffset(self, val): - """ - Set whether to use offset notation. - - When formatting a set numbers whose value is large compared to their - range, the formatter can separate an additive constant. This can - shorten the formatted numbers so that they are less likely to overlap - when drawn on an axis. - - Parameters - ---------- - val : bool or float - - If False, do not use offset notation. - - If True (=automatic mode), use offset notation if it can make - the residual numbers significantly shorter. The exact behavior - is controlled by :rc:`axes.formatter.offset_threshold`. - - If a number, force an offset of the given value. - - Examples - -------- - With active offset notation, the values - - ``100_000, 100_002, 100_004, 100_006, 100_008`` - - will be formatted as ``0, 2, 4, 6, 8`` plus an offset ``+1e5``, which - is written to the edge of the axis. - """ - if val in [True, False]: - self.offset = 0 - self._useOffset = val - else: - self._useOffset = False - self.offset = val - - useOffset = property(fget=get_useOffset, fset=set_useOffset) - - def get_useLocale(self): - """ - Return whether locale settings are used for formatting. - - See Also - -------- - ScalarFormatter.set_useLocale - """ - return self._useLocale - - def set_useLocale(self, val): - """ - Set whether to use locale settings for decimal sign and positive sign. - - Parameters - ---------- - val : bool or None - *None* resets to :rc:`axes.formatter.use_locale`. - """ - if val is None: - self._useLocale = mpl.rcParams['axes.formatter.use_locale'] - else: - self._useLocale = val - - useLocale = property(fget=get_useLocale, fset=set_useLocale) - - def _format_maybe_minus_and_locale(self, fmt, arg): - """ - Format *arg* with *fmt*, applying Unicode minus and locale if desired. 
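The offset behavior documented in `set_useOffset` above is easiest to see side by side; a minimal sketch reusing the docstring's own example values:

```python
# Offset notation on vs. off, using the set_useOffset docstring's values.
import matplotlib.pyplot as plt
from matplotlib.ticker import ScalarFormatter

xs = [100_000, 100_002, 100_004, 100_006, 100_008]

fig, (ax1, ax2) = plt.subplots(2, 1)
ax1.plot(xs, range(5))
ax1.xaxis.set_major_formatter(ScalarFormatter(useOffset=True))   # ticks 0..8, "+1e5" at the edge
ax2.plot(xs, range(5))
ax2.xaxis.set_major_formatter(ScalarFormatter(useOffset=False))  # full values on every tick
plt.show()
```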
- """ - return self.fix_minus( - # Escape commas introduced by locale.format_string if using math text, - # but not those present from the beginning in fmt. - (",".join(locale.format_string(part, (arg,), True).replace(",", "{,}") - for part in fmt.split(",")) if self._useMathText - else locale.format_string(fmt, (arg,), True)) - if self._useLocale - else fmt % arg) - - def get_useMathText(self): - """ - Return whether to use fancy math formatting. - - See Also - -------- - ScalarFormatter.set_useMathText - """ - return self._useMathText - - def set_useMathText(self, val): - r""" - Set whether to use fancy math formatting. - - If active, scientific notation is formatted as :math:`1.2 \times 10^3`. - - Parameters - ---------- - val : bool or None - *None* resets to :rc:`axes.formatter.use_mathtext`. - """ - if val is None: - self._useMathText = mpl.rcParams['axes.formatter.use_mathtext'] - if self._useMathText is False: - try: - from matplotlib import font_manager - ufont = font_manager.findfont( - font_manager.FontProperties( - mpl.rcParams["font.family"] - ), - fallback_to_default=False, - ) - except ValueError: - ufont = None - - if ufont == str(cbook._get_data_path("fonts/ttf/cmr10.ttf")): - _api.warn_external( - "cmr10 font should ideally be used with " - "mathtext, set axes.formatter.use_mathtext to True" - ) - else: - self._useMathText = val - - useMathText = property(fget=get_useMathText, fset=set_useMathText) - - def __call__(self, x, pos=None): - """ - Return the format for tick value *x* at position *pos*. - """ - if len(self.locs) == 0: - return '' - else: - xp = (x - self.offset) / (10. ** self.orderOfMagnitude) - if abs(xp) < 1e-8: - xp = 0 - return self._format_maybe_minus_and_locale(self.format, xp) - - def set_scientific(self, b): - """ - Turn scientific notation on or off. - - See Also - -------- - ScalarFormatter.set_powerlimits - """ - self._scientific = bool(b) - - def set_powerlimits(self, lims): - r""" - Set size thresholds for scientific notation. - - Parameters - ---------- - lims : (int, int) - A tuple *(min_exp, max_exp)* containing the powers of 10 that - determine the switchover threshold. For a number representable as - :math:`a \times 10^\mathrm{exp}` with :math:`1 <= |a| < 10`, - scientific notation will be used if ``exp <= min_exp`` or - ``exp >= max_exp``. - - The default limits are controlled by :rc:`axes.formatter.limits`. - - In particular numbers with *exp* equal to the thresholds are - written in scientific notation. - - Typically, *min_exp* will be negative and *max_exp* will be - positive. - - For example, ``formatter.set_powerlimits((-3, 4))`` will provide - the following formatting: - :math:`1 \times 10^{-3}, 9.9 \times 10^{-3}, 0.01,` - :math:`9999, 1 \times 10^4`. 
- - See Also - -------- - ScalarFormatter.set_scientific - """ - if len(lims) != 2: - raise ValueError("'lims' must be a sequence of length 2") - self._powerlimits = lims - - def format_data_short(self, value): - # docstring inherited - if value is np.ma.masked: - return "" - if isinstance(value, Integral): - fmt = "%d" - else: - if getattr(self.axis, "__name__", "") in ["xaxis", "yaxis"]: - if self.axis.__name__ == "xaxis": - axis_trf = self.axis.axes.get_xaxis_transform() - axis_inv_trf = axis_trf.inverted() - screen_xy = axis_trf.transform((value, 0)) - neighbor_values = axis_inv_trf.transform( - screen_xy + [[-1, 0], [+1, 0]])[:, 0] - else: # yaxis: - axis_trf = self.axis.axes.get_yaxis_transform() - axis_inv_trf = axis_trf.inverted() - screen_xy = axis_trf.transform((0, value)) - neighbor_values = axis_inv_trf.transform( - screen_xy + [[0, -1], [0, +1]])[:, 1] - delta = abs(neighbor_values - value).max() - else: - # Rough approximation: no more than 1e4 divisions. - a, b = self.axis.get_view_interval() - delta = (b - a) / 1e4 - fmt = f"%-#.{cbook._g_sig_digits(value, delta)}g" - return self._format_maybe_minus_and_locale(fmt, value) - - def format_data(self, value): - # docstring inherited - e = math.floor(math.log10(abs(value))) - s = round(value / 10**e, 10) - significand = self._format_maybe_minus_and_locale( - "%d" if s % 1 == 0 else "%1.10g", s) - if e == 0: - return significand - exponent = self._format_maybe_minus_and_locale("%d", e) - if self._useMathText or self._usetex: - exponent = "10^{%s}" % exponent - return (exponent if s == 1 # reformat 1x10^y as 10^y - else rf"{significand} \times {exponent}") - else: - return f"{significand}e{exponent}" - - def get_offset(self): - """ - Return scientific notation, plus offset. - """ - if len(self.locs) == 0: - return '' - if self.orderOfMagnitude or self.offset: - offsetStr = '' - sciNotStr = '' - if self.offset: - offsetStr = self.format_data(self.offset) - if self.offset > 0: - offsetStr = '+' + offsetStr - if self.orderOfMagnitude: - if self._usetex or self._useMathText: - sciNotStr = self.format_data(10 ** self.orderOfMagnitude) - else: - sciNotStr = '1e%d' % self.orderOfMagnitude - if self._useMathText or self._usetex: - if sciNotStr != '': - sciNotStr = r'\times\mathdefault{%s}' % sciNotStr - s = fr'${sciNotStr}\mathdefault{{{offsetStr}}}$' - else: - s = ''.join((sciNotStr, offsetStr)) - return self.fix_minus(s) - return '' - - def set_locs(self, locs): - # docstring inherited - self.locs = locs - if len(self.locs) > 0: - if self._useOffset: - self._compute_offset() - self._set_order_of_magnitude() - self._set_format() - - def _compute_offset(self): - locs = self.locs - # Restrict to visible ticks. - vmin, vmax = sorted(self.axis.get_view_interval()) - locs = np.asarray(locs) - locs = locs[(vmin <= locs) & (locs <= vmax)] - if not len(locs): - self.offset = 0 - return - lmin, lmax = locs.min(), locs.max() - # Only use offset if there are at least two ticks and every tick has - # the same sign. - if lmin == lmax or lmin <= 0 <= lmax: - self.offset = 0 - return - # min, max comparing absolute values (we want division to round towards - # zero so we work on absolute values). - abs_min, abs_max = sorted([abs(float(lmin)), abs(float(lmax))]) - sign = math.copysign(1, lmin) - # What is the smallest power of ten such that abs_min and abs_max are - # equal up to that precision? - # Note: Internally using oom instead of 10 ** oom avoids some numerical - # accuracy issues. 
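For concreteness, a standalone trace of this scan for the ticks 100_000 .. 100_008 used earlier in the `set_useOffset` docstring; the snippet mirrors the logic rather than calling it:

```python
# Worked trace of the offset scan for ticks 100_000 .. 100_008.
import itertools
import math

abs_min, abs_max = 100_000.0, 100_008.0
oom_max = math.ceil(math.log10(abs_max))  # 6
# The first power of ten at which the endpoints' floor divisions disagree
# is 10**0, so oom = 1 + 0 = 1.
oom = 1 + next(o for o in itertools.count(oom_max, -1)
               if abs_min // 10 ** o != abs_max // 10 ** o)
offset = (abs_max // 10 ** oom) * 10 ** oom
print(oom, offset)  # 1 100000.0 -> displayed as "+1e5"
```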
- oom_max = np.ceil(math.log10(abs_max)) - oom = 1 + next(oom for oom in itertools.count(oom_max, -1) - if abs_min // 10 ** oom != abs_max // 10 ** oom) - if (abs_max - abs_min) / 10 ** oom <= 1e-2: - # Handle the case of straddling a multiple of a large power of ten - # (relative to the span). - # What is the smallest power of ten such that abs_min and abs_max - # are no more than 1 apart at that precision? - oom = 1 + next(oom for oom in itertools.count(oom_max, -1) - if abs_max // 10 ** oom - abs_min // 10 ** oom > 1) - # Only use offset if it saves at least _offset_threshold digits. - n = self._offset_threshold - 1 - self.offset = (sign * (abs_max // 10 ** oom) * 10 ** oom - if abs_max // 10 ** oom >= 10**n - else 0) - - def _set_order_of_magnitude(self): - # if scientific notation is to be used, find the appropriate exponent - # if using a numerical offset, find the exponent after applying the - # offset. When lower power limit = upper <> 0, use provided exponent. - if not self._scientific: - self.orderOfMagnitude = 0 - return - if self._powerlimits[0] == self._powerlimits[1] != 0: - # fixed scaling when lower power limit = upper <> 0. - self.orderOfMagnitude = self._powerlimits[0] - return - # restrict to visible ticks - vmin, vmax = sorted(self.axis.get_view_interval()) - locs = np.asarray(self.locs) - locs = locs[(vmin <= locs) & (locs <= vmax)] - locs = np.abs(locs) - if not len(locs): - self.orderOfMagnitude = 0 - return - if self.offset: - oom = math.floor(math.log10(vmax - vmin)) - else: - val = locs.max() - if val == 0: - oom = 0 - else: - oom = math.floor(math.log10(val)) - if oom <= self._powerlimits[0]: - self.orderOfMagnitude = oom - elif oom >= self._powerlimits[1]: - self.orderOfMagnitude = oom - else: - self.orderOfMagnitude = 0 - - def _set_format(self): - # set the format string to format all the ticklabels - if len(self.locs) < 2: - # Temporarily augment the locations with the axis end points. - _locs = [*self.locs, *self.axis.get_view_interval()] - else: - _locs = self.locs - locs = (np.asarray(_locs) - self.offset) / 10. ** self.orderOfMagnitude - loc_range = np.ptp(locs) - # Curvilinear coordinates can yield two identical points. - if loc_range == 0: - loc_range = np.max(np.abs(locs)) - # Both points might be zero. - if loc_range == 0: - loc_range = 1 - if len(self.locs) < 2: - # We needed the end points only for the loc_range calculation. - locs = locs[:-2] - loc_range_oom = int(math.floor(math.log10(loc_range))) - # first estimate: - sigfigs = max(0, 3 - loc_range_oom) - # refined estimate: - thresh = 1e-3 * 10 ** loc_range_oom - while sigfigs >= 0: - if np.abs(locs - np.round(locs, decimals=sigfigs)).max() < thresh: - sigfigs -= 1 - else: - break - sigfigs += 1 - self.format = f'%1.{sigfigs}f' - if self._usetex or self._useMathText: - self.format = r'$\mathdefault{%s}$' % self.format - - -class LogFormatter(Formatter): - """ - Base class for formatting ticks on a log or symlog scale. - - It may be instantiated directly, or subclassed. - - Parameters - ---------- - base : float, default: 10. - Base of the logarithm used in all calculations. - - labelOnlyBase : bool, default: False - If True, label ticks only at integer powers of base. - This is normally True for major ticks and False for - minor ticks. - - minor_thresholds : (subset, all), default: (1, 0.4) - If labelOnlyBase is False, these two numbers control - the labeling of ticks that are not at integer powers of - base; normally these are the minor ticks. 
The controlling - parameter is the log of the axis data range. In the typical - case where base is 10 it is the number of decades spanned - by the axis, so we can call it 'numdec'. If ``numdec <= all``, - all minor ticks will be labeled. If ``all < numdec <= subset``, - then only a subset of minor ticks will be labeled, so as to - avoid crowding. If ``numdec > subset`` then no minor ticks will - be labeled. - - linthresh : None or float, default: None - If a symmetric log scale is in use, its ``linthresh`` - parameter must be supplied here. - - Notes - ----- - The `set_locs` method must be called to enable the subsetting - logic controlled by the ``minor_thresholds`` parameter. - - In some cases such as the colorbar, there is no distinction between - major and minor ticks; the tick locations might be set manually, - or by a locator that puts ticks at integer powers of base and - at intermediate locations. For this situation, disable the - minor_thresholds logic by using ``minor_thresholds=(np.inf, np.inf)``, - so that all ticks will be labeled. - - To disable labeling of minor ticks when 'labelOnlyBase' is False, - use ``minor_thresholds=(0, 0)``. This is the default for the - "classic" style. - - Examples - -------- - To label a subset of minor ticks when the view limits span up - to 2 decades, and all of the ticks when zoomed in to 0.5 decades - or less, use ``minor_thresholds=(2, 0.5)``. - - To label all minor ticks when the view limits span up to 1.5 - decades, use ``minor_thresholds=(1.5, 1.5)``. - """ - - def __init__(self, base=10.0, labelOnlyBase=False, - minor_thresholds=None, - linthresh=None): - - self.set_base(base) - self.set_label_minor(labelOnlyBase) - if minor_thresholds is None: - if mpl.rcParams['_internal.classic_mode']: - minor_thresholds = (0, 0) - else: - minor_thresholds = (1, 0.4) - self.minor_thresholds = minor_thresholds - self._sublabels = None - self._linthresh = linthresh - - def set_base(self, base): - """ - Change the *base* for labeling. - - .. warning:: - Should always match the base used for :class:`LogLocator` - """ - self._base = float(base) - - def set_label_minor(self, labelOnlyBase): - """ - Switch minor tick labeling on or off. - - Parameters - ---------- - labelOnlyBase : bool - If True, label ticks only at integer powers of base. - """ - self.labelOnlyBase = labelOnlyBase - - def set_locs(self, locs=None): - """ - Use axis view limits to control which ticks are labeled. - - The *locs* parameter is ignored in the present algorithm. - """ - if np.isinf(self.minor_thresholds[0]): - self._sublabels = None - return - - # Handle symlog case: - linthresh = self._linthresh - if linthresh is None: - try: - linthresh = self.axis.get_transform().linthresh - except AttributeError: - pass - - vmin, vmax = self.axis.get_view_interval() - if vmin > vmax: - vmin, vmax = vmax, vmin - - if linthresh is None and vmin <= 0: - # It's probably a colorbar with - # a format kwarg setting a LogFormatter in the manner - # that worked with 1.5.x, but that doesn't work now. 
- self._sublabels = {1} # label powers of base - return - - b = self._base - if linthresh is not None: # symlog - # Only compute the number of decades in the logarithmic part of the - # axis - numdec = 0 - if vmin < -linthresh: - rhs = min(vmax, -linthresh) - numdec += math.log(vmin / rhs) / math.log(b) - if vmax > linthresh: - lhs = max(vmin, linthresh) - numdec += math.log(vmax / lhs) / math.log(b) - else: - vmin = math.log(vmin) / math.log(b) - vmax = math.log(vmax) / math.log(b) - numdec = abs(vmax - vmin) - - if numdec > self.minor_thresholds[0]: - # Label only bases - self._sublabels = {1} - elif numdec > self.minor_thresholds[1]: - # Add labels between bases at log-spaced coefficients; - # include base powers in case the locations include - # "major" and "minor" points, as in colorbar. - c = np.geomspace(1, b, int(b)//2 + 1) - self._sublabels = set(np.round(c)) - # For base 10, this yields (1, 2, 3, 4, 6, 10). - else: - # Label all integer multiples of base**n. - self._sublabels = set(np.arange(1, b + 1)) - - def _num_to_string(self, x, vmin, vmax): - if x > 10000: - s = '%1.0e' % x - elif x < 1: - s = '%1.0e' % x - else: - s = self._pprint_val(x, vmax - vmin) - return s - - def __call__(self, x, pos=None): - # docstring inherited - if x == 0.0: # Symlog - return '0' - - x = abs(x) - b = self._base - # only label the decades - fx = math.log(x) / math.log(b) - is_x_decade = _is_close_to_int(fx) - exponent = round(fx) if is_x_decade else np.floor(fx) - coeff = round(b ** (fx - exponent)) - - if self.labelOnlyBase and not is_x_decade: - return '' - if self._sublabels is not None and coeff not in self._sublabels: - return '' - - vmin, vmax = self.axis.get_view_interval() - vmin, vmax = mtransforms.nonsingular(vmin, vmax, expander=0.05) - s = self._num_to_string(x, vmin, vmax) - return self.fix_minus(s) - - def format_data(self, value): - with cbook._setattr_cm(self, labelOnlyBase=False): - return cbook.strip_math(self.__call__(value)) - - def format_data_short(self, value): - # docstring inherited - return '%-12g' % value - - def _pprint_val(self, x, d): - # If the number is not too big and it's an int, format it as an int. - if abs(x) < 1e4 and x == int(x): - return '%d' % x - fmt = ('%1.3e' if d < 1e-2 else - '%1.3f' if d <= 1 else - '%1.2f' if d <= 10 else - '%1.1f' if d <= 1e5 else - '%1.1e') - s = fmt % x - tup = s.split('e') - if len(tup) == 2: - mantissa = tup[0].rstrip('0').rstrip('.') - exponent = int(tup[1]) - if exponent: - s = '%se%d' % (mantissa, exponent) - else: - s = mantissa - else: - s = s.rstrip('0').rstrip('.') - return s - - -class LogFormatterExponent(LogFormatter): - """ - Format values for log axis using ``exponent = log_base(value)``. - """ - def _num_to_string(self, x, vmin, vmax): - fx = math.log(x) / math.log(self._base) - if abs(fx) > 10000: - s = '%1.0g' % fx - elif abs(fx) < 1: - s = '%1.0g' % fx - else: - fd = math.log(vmax - vmin) / math.log(self._base) - s = self._pprint_val(fx, fd) - return s - - -class LogFormatterMathtext(LogFormatter): - """ - Format values for log axis using ``exponent = log_base(value)``. 
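- 
-     The returned strings are mathtext, e.g. (a sketch with default
-     settings; exact output can vary with rcParams):
- 
-     >>> from matplotlib.ticker import LogFormatterMathtext
-     >>> LogFormatterMathtext()(1000)
-     '$\\mathdefault{10^{3}}$'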
- """ - - def _non_decade_format(self, sign_string, base, fx, usetex): - """Return string for non-decade locations.""" - return r'$\mathdefault{%s%s^{%.2f}}$' % (sign_string, base, fx) - - def __call__(self, x, pos=None): - # docstring inherited - if x == 0: # Symlog - return r'$\mathdefault{0}$' - - sign_string = '-' if x < 0 else '' - x = abs(x) - b = self._base - - # only label the decades - fx = math.log(x) / math.log(b) - is_x_decade = _is_close_to_int(fx) - exponent = round(fx) if is_x_decade else np.floor(fx) - coeff = round(b ** (fx - exponent)) - - if self.labelOnlyBase and not is_x_decade: - return '' - if self._sublabels is not None and coeff not in self._sublabels: - return '' - - if is_x_decade: - fx = round(fx) - - # use string formatting of the base if it is not an integer - if b % 1 == 0.0: - base = '%d' % b - else: - base = '%s' % b - - if abs(fx) < mpl.rcParams['axes.formatter.min_exponent']: - return r'$\mathdefault{%s%g}$' % (sign_string, x) - elif not is_x_decade: - usetex = mpl.rcParams['text.usetex'] - return self._non_decade_format(sign_string, base, fx, usetex) - else: - return r'$\mathdefault{%s%s^{%d}}$' % (sign_string, base, fx) - - -class LogFormatterSciNotation(LogFormatterMathtext): - """ - Format values following scientific notation in a logarithmic axis. - """ - - def _non_decade_format(self, sign_string, base, fx, usetex): - """Return string for non-decade locations.""" - b = float(base) - exponent = math.floor(fx) - coeff = b ** (fx - exponent) - if _is_close_to_int(coeff): - coeff = round(coeff) - return r'$\mathdefault{%s%g\times%s^{%d}}$' \ - % (sign_string, coeff, base, exponent) - - -class LogitFormatter(Formatter): - """ - Probability formatter (using Math text). - """ - - def __init__( - self, - *, - use_overline=False, - one_half=r"\frac{1}{2}", - minor=False, - minor_threshold=25, - minor_number=6, - ): - r""" - Parameters - ---------- - use_overline : bool, default: False - If x > 1/2, with x = 1-v, indicate if x should be displayed as - $\overline{v}$. The default is to display $1-v$. - - one_half : str, default: r"\frac{1}{2}" - The string used to represent 1/2. - - minor : bool, default: False - Indicate if the formatter is formatting minor ticks or not. - Basically minor ticks are not labelled, except when only few ticks - are provided, ticks with most space with neighbor ticks are - labelled. See other parameters to change the default behavior. - - minor_threshold : int, default: 25 - Maximum number of locs for labelling some minor ticks. This - parameter have no effect if minor is False. - - minor_number : int, default: 6 - Number of ticks which are labelled when the number of ticks is - below the threshold. - """ - self._use_overline = use_overline - self._one_half = one_half - self._minor = minor - self._labelled = set() - self._minor_threshold = minor_threshold - self._minor_number = minor_number - - def use_overline(self, use_overline): - r""" - Switch display mode with overline for labelling p>1/2. - - Parameters - ---------- - use_overline : bool, default: False - If x > 1/2, with x = 1-v, indicate if x should be displayed as - $\overline{v}$. The default is to display $1-v$. - """ - self._use_overline = use_overline - - def set_one_half(self, one_half): - r""" - Set the way one half is displayed. - - one_half : str, default: r"\frac{1}{2}" - The string used to represent 1/2. - """ - self._one_half = one_half - - def set_minor_threshold(self, minor_threshold): - """ - Set the threshold for labelling minors ticks. 
- 
-         Parameters
-         ----------
-         minor_threshold : int
-             Maximum number of locations for labelling some minor ticks. This
-             parameter has no effect if *minor* is False.
-         """
-         self._minor_threshold = minor_threshold
- 
-     def set_minor_number(self, minor_number):
-         """
-         Set the number of minor ticks to label when some minor ticks are
-         labelled.
- 
-         Parameters
-         ----------
-         minor_number : int
-             Number of ticks which are labelled when the number of ticks is
-             below the threshold.
-         """
-         self._minor_number = minor_number
- 
-     def set_locs(self, locs):
-         self.locs = np.array(locs)
-         self._labelled.clear()
- 
-         if not self._minor:
-             return None
-         if all(
-             _is_decade(x, rtol=1e-7)
-             or _is_decade(1 - x, rtol=1e-7)
-             or (_is_close_to_int(2 * x) and
-                 int(np.round(2 * x)) == 1)
-             for x in locs
-         ):
-             # minor ticks are a subsample of the ideal ticks, so no label
-             return None
-         if len(locs) < self._minor_threshold:
-             if len(locs) < self._minor_number:
-                 self._labelled.update(locs)
-             else:
-                 # We do not have many minor ticks, so only a few decades are
-                 # displayed; we then choose some (well spaced) minor ticks to
-                 # label.  Only the minor ticks are known, and we assume that
-                 # is sufficient to choose which ticks get labels.
-                 # For each tick, compute the distance to the previous tick
-                 # and to the next one.  The ticks with the largest minimum
-                 # distance are chosen; ties are broken in favour of the
-                 # largest sum of the two distances.
-                 diff = np.diff(-np.log(1 / self.locs - 1))
-                 space_pessimistic = np.minimum(
-                     np.concatenate(((np.inf,), diff)),
-                     np.concatenate((diff, (np.inf,))),
-                 )
-                 space_sum = (
-                     np.concatenate(((0,), diff))
-                     + np.concatenate((diff, (0,)))
-                 )
-                 good_minor = sorted(
-                     range(len(self.locs)),
-                     key=lambda i: (space_pessimistic[i], space_sum[i]),
-                 )[-self._minor_number:]
-                 self._labelled.update(locs[i] for i in good_minor)
- 
-     def _format_value(self, x, locs, sci_notation=True):
-         if sci_notation:
-             exponent = math.floor(np.log10(x))
-             min_precision = 0
-         else:
-             exponent = 0
-             min_precision = 1
-         value = x * 10 ** (-exponent)
-         if len(locs) < 2:
-             precision = min_precision
-         else:
-             diff = np.sort(np.abs(locs - x))[1]
-             precision = -np.log10(diff) + exponent
-             precision = (
-                 int(np.round(precision))
-                 if _is_close_to_int(precision)
-                 else math.ceil(precision)
-             )
-             if precision < min_precision:
-                 precision = min_precision
-         mantissa = r"%.*f" % (precision, value)
-         if not sci_notation:
-             return mantissa
-         s = r"%s\cdot10^{%d}" % (mantissa, exponent)
-         return s
- 
-     def _one_minus(self, s):
-         if self._use_overline:
-             return r"\overline{%s}" % s
-         else:
-             return f"1-{s}"
- 
-     def __call__(self, x, pos=None):
-         if self._minor and x not in self._labelled:
-             return ""
-         if x <= 0 or x >= 1:
-             return ""
-         if _is_close_to_int(2 * x) and round(2 * x) == 1:
-             s = self._one_half
-         elif x < 0.5 and _is_decade(x, rtol=1e-7):
-             exponent = round(math.log10(x))
-             s = "10^{%d}" % exponent
-         elif x > 0.5 and _is_decade(1 - x, rtol=1e-7):
-             exponent = round(math.log10(1 - x))
-             s = self._one_minus("10^{%d}" % exponent)
-         elif x < 0.1:
-             s = self._format_value(x, self.locs)
-         elif x > 0.9:
-             s = self._one_minus(self._format_value(1 - x, 1 - self.locs))
-         else:
-             s = self._format_value(x, self.locs, sci_notation=False)
-         return r"$\mathdefault{%s}$" % s
- 
-     def format_data_short(self, value):
-         # docstring inherited
-         # Thresholds chosen to use scientific notation iff exponent <= -2.
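-         # e.g. 0.01 -> "1.000000e-02", 0.5 -> "0.500000",
-         # and 0.99 -> "1-1.000000e-02".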
- if value < 0.1: - return f"{value:e}" - if value < 0.9: - return f"{value:f}" - return f"1-{1 - value:e}" - - -class EngFormatter(Formatter): - """ - Format axis values using engineering prefixes to represent powers - of 1000, plus a specified unit, e.g., 10 MHz instead of 1e7. - """ - - # The SI engineering prefixes - ENG_PREFIXES = { - -30: "q", - -27: "r", - -24: "y", - -21: "z", - -18: "a", - -15: "f", - -12: "p", - -9: "n", - -6: "\N{MICRO SIGN}", - -3: "m", - 0: "", - 3: "k", - 6: "M", - 9: "G", - 12: "T", - 15: "P", - 18: "E", - 21: "Z", - 24: "Y", - 27: "R", - 30: "Q" - } - - def __init__(self, unit="", places=None, sep=" ", *, usetex=None, - useMathText=None): - r""" - Parameters - ---------- - unit : str, default: "" - Unit symbol to use, suitable for use with single-letter - representations of powers of 1000. For example, 'Hz' or 'm'. - - places : int, default: None - Precision with which to display the number, specified in - digits after the decimal point (there will be between one - and three digits before the decimal point). If it is None, - the formatting falls back to the floating point format '%g', - which displays up to 6 *significant* digits, i.e. the equivalent - value for *places* varies between 0 and 5 (inclusive). - - sep : str, default: " " - Separator used between the value and the prefix/unit. For - example, one get '3.14 mV' if ``sep`` is " " (default) and - '3.14mV' if ``sep`` is "". Besides the default behavior, some - other useful options may be: - - * ``sep=""`` to append directly the prefix/unit to the value; - * ``sep="\N{THIN SPACE}"`` (``U+2009``); - * ``sep="\N{NARROW NO-BREAK SPACE}"`` (``U+202F``); - * ``sep="\N{NO-BREAK SPACE}"`` (``U+00A0``). - - usetex : bool, default: :rc:`text.usetex` - To enable/disable the use of TeX's math mode for rendering the - numbers in the formatter. - - useMathText : bool, default: :rc:`axes.formatter.use_mathtext` - To enable/disable the use mathtext for rendering the numbers in - the formatter. - """ - self.unit = unit - self.places = places - self.sep = sep - self.set_usetex(usetex) - self.set_useMathText(useMathText) - - def get_usetex(self): - return self._usetex - - def set_usetex(self, val): - if val is None: - self._usetex = mpl.rcParams['text.usetex'] - else: - self._usetex = val - - usetex = property(fget=get_usetex, fset=set_usetex) - - def get_useMathText(self): - return self._useMathText - - def set_useMathText(self, val): - if val is None: - self._useMathText = mpl.rcParams['axes.formatter.use_mathtext'] - else: - self._useMathText = val - - useMathText = property(fget=get_useMathText, fset=set_useMathText) - - def __call__(self, x, pos=None): - s = f"{self.format_eng(x)}{self.unit}" - # Remove the trailing separator when there is neither prefix nor unit - if self.sep and s.endswith(self.sep): - s = s[:-len(self.sep)] - return self.fix_minus(s) - - def format_eng(self, num): - """ - Format a number in engineering notation, appending a letter - representing the power of 1000 of the original number. 
- Some examples: - - >>> format_eng(0) # for self.places = 0 - '0' - - >>> format_eng(1000000) # for self.places = 1 - '1.0 M' - - >>> format_eng(-1e-6) # for self.places = 2 - '-1.00 \N{MICRO SIGN}' - """ - sign = 1 - fmt = "g" if self.places is None else f".{self.places:d}f" - - if num < 0: - sign = -1 - num = -num - - if num != 0: - pow10 = int(math.floor(math.log10(num) / 3) * 3) - else: - pow10 = 0 - # Force num to zero, to avoid inconsistencies like - # format_eng(-0) = "0" and format_eng(0.0) = "0" - # but format_eng(-0.0) = "-0.0" - num = 0.0 - - pow10 = np.clip(pow10, min(self.ENG_PREFIXES), max(self.ENG_PREFIXES)) - - mant = sign * num / (10.0 ** pow10) - # Taking care of the cases like 999.9..., which may be rounded to 1000 - # instead of 1 k. Beware of the corner case of values that are beyond - # the range of SI prefixes (i.e. > 'Y'). - if (abs(float(format(mant, fmt))) >= 1000 - and pow10 < max(self.ENG_PREFIXES)): - mant /= 1000 - pow10 += 3 - - prefix = self.ENG_PREFIXES[int(pow10)] - if self._usetex or self._useMathText: - formatted = f"${mant:{fmt}}${self.sep}{prefix}" - else: - formatted = f"{mant:{fmt}}{self.sep}{prefix}" - - return formatted - - -class PercentFormatter(Formatter): - """ - Format numbers as a percentage. - - Parameters - ---------- - xmax : float - Determines how the number is converted into a percentage. - *xmax* is the data value that corresponds to 100%. - Percentages are computed as ``x / xmax * 100``. So if the data is - already scaled to be percentages, *xmax* will be 100. Another common - situation is where *xmax* is 1.0. - - decimals : None or int - The number of decimal places to place after the point. - If *None* (the default), the number will be computed automatically. - - symbol : str or None - A string that will be appended to the label. It may be - *None* or empty to indicate that no symbol should be used. LaTeX - special characters are escaped in *symbol* whenever latex mode is - enabled, unless *is_latex* is *True*. - - is_latex : bool - If *False*, reserved LaTeX characters in *symbol* will be escaped. - """ - def __init__(self, xmax=100, decimals=None, symbol='%', is_latex=False): - self.xmax = xmax + 0.0 - self.decimals = decimals - self._symbol = symbol - self._is_latex = is_latex - - def __call__(self, x, pos=None): - """Format the tick as a percentage with the appropriate scaling.""" - ax_min, ax_max = self.axis.get_view_interval() - display_range = abs(ax_max - ax_min) - return self.fix_minus(self.format_pct(x, display_range)) - - def format_pct(self, x, display_range): - """ - Format the number as a percentage number with the correct - number of decimals and adds the percent symbol, if any. - - If ``self.decimals`` is `None`, the number of digits after the - decimal point is set based on the *display_range* of the axis - as follows: - - ============= ======== ======================= - display_range decimals sample - ============= ======== ======================= - >50 0 ``x = 34.5`` => 35% - >5 1 ``x = 34.5`` => 34.5% - >0.5 2 ``x = 34.5`` => 34.50% - ... ... ... - ============= ======== ======================= - - This method will not be very good for tiny axis ranges or - extremely large ones. It assumes that the values on the chart - are percentages displayed on a reasonable scale. 
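- 
-         A quick sketch with a fixed number of decimals (so the result
-         does not depend on the axis range):
- 
-         >>> from matplotlib.ticker import PercentFormatter
-         >>> PercentFormatter(xmax=1.0, decimals=1).format_pct(0.345, 1.0)
-         '34.5%'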
- """ - x = self.convert_to_pct(x) - if self.decimals is None: - # conversion works because display_range is a difference - scaled_range = self.convert_to_pct(display_range) - if scaled_range <= 0: - decimals = 0 - else: - # Luckily Python's built-in ceil rounds to +inf, not away from - # zero. This is very important since the equation for decimals - # starts out as `scaled_range > 0.5 * 10**(2 - decimals)` - # and ends up with `decimals > 2 - log10(2 * scaled_range)`. - decimals = math.ceil(2.0 - math.log10(2.0 * scaled_range)) - if decimals > 5: - decimals = 5 - elif decimals < 0: - decimals = 0 - else: - decimals = self.decimals - s = f'{x:0.{int(decimals)}f}' - - return s + self.symbol - - def convert_to_pct(self, x): - return 100.0 * (x / self.xmax) - - @property - def symbol(self): - r""" - The configured percent symbol as a string. - - If LaTeX is enabled via :rc:`text.usetex`, the special characters - ``{'#', '$', '%', '&', '~', '_', '^', '\', '{', '}'}`` are - automatically escaped in the string. - """ - symbol = self._symbol - if not symbol: - symbol = '' - elif not self._is_latex and mpl.rcParams['text.usetex']: - # Source: http://www.personal.ceu.hu/tex/specchar.htm - # Backslash must be first for this to work correctly since - # it keeps getting added in - for spec in r'\#$%&~_^{}': - symbol = symbol.replace(spec, '\\' + spec) - return symbol - - @symbol.setter - def symbol(self, symbol): - self._symbol = symbol - - -class Locator(TickHelper): - """ - Determine the tick locations; - - Note that the same locator should not be used across multiple - `~matplotlib.axis.Axis` because the locator stores references to the Axis - data and view limits. - """ - - # Some automatic tick locators can generate so many ticks they - # kill the machine when you try and render them. - # This parameter is set to cause locators to raise an error if too - # many ticks are generated. - MAXTICKS = 1000 - - def tick_values(self, vmin, vmax): - """ - Return the values of the located ticks given **vmin** and **vmax**. - - .. note:: - To get tick locations with the vmin and vmax values defined - automatically for the associated ``axis`` simply call - the Locator instance:: - - >>> print(type(loc)) - - >>> print(loc()) - [1, 2, 3, 4] - - """ - raise NotImplementedError('Derived must override') - - def set_params(self, **kwargs): - """ - Do nothing, and raise a warning. Any locator class not supporting the - set_params() function will call this. - """ - _api.warn_external( - "'set_params()' not defined for locator of type " + - str(type(self))) - - def __call__(self): - """Return the locations of the ticks.""" - # note: some locators return data limits, other return view limits, - # hence there is no *one* interface to call self.tick_values. - raise NotImplementedError('Derived must override') - - def raise_if_exceeds(self, locs): - """ - Log at WARNING level if *locs* is longer than `Locator.MAXTICKS`. - - This is intended to be called immediately before returning *locs* from - ``__call__`` to inform users in case their Locator returns a huge - number of ticks, causing Matplotlib to run out of memory. - - The "strange" name of this method dates back to when it would raise an - exception instead of emitting a log. 
- """ - if len(locs) >= self.MAXTICKS: - _log.warning( - "Locator attempting to generate %s ticks ([%s, ..., %s]), " - "which exceeds Locator.MAXTICKS (%s).", - len(locs), locs[0], locs[-1], self.MAXTICKS) - return locs - - def nonsingular(self, v0, v1): - """ - Adjust a range as needed to avoid singularities. - - This method gets called during autoscaling, with ``(v0, v1)`` set to - the data limits on the axes if the axes contains any data, or - ``(-inf, +inf)`` if not. - - - If ``v0 == v1`` (possibly up to some floating point slop), this - method returns an expanded interval around this value. - - If ``(v0, v1) == (-inf, +inf)``, this method returns appropriate - default view limits. - - Otherwise, ``(v0, v1)`` is returned without modification. - """ - return mtransforms.nonsingular(v0, v1, expander=.05) - - def view_limits(self, vmin, vmax): - """ - Select a scale for the range from vmin to vmax. - - Subclasses should override this method to change locator behaviour. - """ - return mtransforms.nonsingular(vmin, vmax) - - -class IndexLocator(Locator): - """ - Place a tick on every multiple of some base number of points - plotted, e.g., on every 5th point. It is assumed that you are doing - index plotting; i.e., the axis is 0, len(data). This is mainly - useful for x ticks. - """ - def __init__(self, base, offset): - """Place ticks every *base* data point, starting at *offset*.""" - self._base = base - self.offset = offset - - def set_params(self, base=None, offset=None): - """Set parameters within this locator""" - if base is not None: - self._base = base - if offset is not None: - self.offset = offset - - def __call__(self): - """Return the locations of the ticks""" - dmin, dmax = self.axis.get_data_interval() - return self.tick_values(dmin, dmax) - - def tick_values(self, vmin, vmax): - return self.raise_if_exceeds( - np.arange(vmin + self.offset, vmax + 1, self._base)) - - -class FixedLocator(Locator): - """ - Tick locations are fixed at *locs*. If *nbins* is not None, - the *locs* array of possible positions will be subsampled to - keep the number of ticks <= *nbins* +1. - The subsampling will be done to include the smallest - absolute value; for example, if zero is included in the - array of possibilities, then it is guaranteed to be one of - the chosen ticks. - """ - - def __init__(self, locs, nbins=None): - self.locs = np.asarray(locs) - _api.check_shape((None,), locs=self.locs) - self.nbins = max(nbins, 2) if nbins is not None else None - - def set_params(self, nbins=None): - """Set parameters within this locator.""" - if nbins is not None: - self.nbins = nbins - - def __call__(self): - return self.tick_values(None, None) - - def tick_values(self, vmin, vmax): - """ - Return the locations of the ticks. - - .. note:: - - Because the values are fixed, vmin and vmax are not used in this - method. - - """ - if self.nbins is None: - return self.locs - step = max(int(np.ceil(len(self.locs) / self.nbins)), 1) - ticks = self.locs[::step] - for i in range(1, step): - ticks1 = self.locs[i::step] - if np.abs(ticks1).min() < np.abs(ticks).min(): - ticks = ticks1 - return self.raise_if_exceeds(ticks) - - -class NullLocator(Locator): - """ - No ticks - """ - - def __call__(self): - return self.tick_values(None, None) - - def tick_values(self, vmin, vmax): - """ - Return the locations of the ticks. - - .. note:: - - Because the values are Null, vmin and vmax are not used in this - method. 
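- 
-         A sketch of the contrast with `FixedLocator` (numpy's repr shown):
- 
-         >>> from matplotlib.ticker import FixedLocator, NullLocator
-         >>> NullLocator().tick_values(0, 1)
-         []
-         >>> FixedLocator([0, 0.5, 1]).tick_values(None, None)
-         array([0. , 0.5, 1. ])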
- """ - return [] - - -class LinearLocator(Locator): - """ - Determine the tick locations - - The first time this function is called it will try to set the - number of ticks to make a nice tick partitioning. Thereafter, the - number of ticks will be fixed so that interactive navigation will - be nice - - """ - def __init__(self, numticks=None, presets=None): - """ - Parameters - ---------- - numticks : int or None, default None - Number of ticks. If None, *numticks* = 11. - presets : dict or None, default: None - Dictionary mapping ``(vmin, vmax)`` to an array of locations. - Overrides *numticks* if there is an entry for the current - ``(vmin, vmax)``. - """ - self.numticks = numticks - if presets is None: - self.presets = {} - else: - self.presets = presets - - @property - def numticks(self): - # Old hard-coded default. - return self._numticks if self._numticks is not None else 11 - - @numticks.setter - def numticks(self, numticks): - self._numticks = numticks - - def set_params(self, numticks=None, presets=None): - """Set parameters within this locator.""" - if presets is not None: - self.presets = presets - if numticks is not None: - self.numticks = numticks - - def __call__(self): - """Return the locations of the ticks.""" - vmin, vmax = self.axis.get_view_interval() - return self.tick_values(vmin, vmax) - - def tick_values(self, vmin, vmax): - vmin, vmax = mtransforms.nonsingular(vmin, vmax, expander=0.05) - - if (vmin, vmax) in self.presets: - return self.presets[(vmin, vmax)] - - if self.numticks == 0: - return [] - ticklocs = np.linspace(vmin, vmax, self.numticks) - - return self.raise_if_exceeds(ticklocs) - - def view_limits(self, vmin, vmax): - """Try to choose the view limits intelligently.""" - - if vmax < vmin: - vmin, vmax = vmax, vmin - - if vmin == vmax: - vmin -= 1 - vmax += 1 - - if mpl.rcParams['axes.autolimit_mode'] == 'round_numbers': - exponent, remainder = divmod( - math.log10(vmax - vmin), math.log10(max(self.numticks - 1, 1))) - exponent -= (remainder < .5) - scale = max(self.numticks - 1, 1) ** (-exponent) - vmin = math.floor(scale * vmin) / scale - vmax = math.ceil(scale * vmax) / scale - - return mtransforms.nonsingular(vmin, vmax) - - -class MultipleLocator(Locator): - """ - Set a tick on each integer multiple of the *base* plus an *offset* within - the view interval. - """ - - def __init__(self, base=1.0, offset=0.0): - """ - Parameters - ---------- - base : float > 0 - Interval between ticks. - offset : float - Value added to each multiple of *base*. - - .. versionadded:: 3.8 - """ - self._edge = _Edge_integer(base, 0) - self._offset = offset - - def set_params(self, base=None, offset=None): - """ - Set parameters within this locator. - - Parameters - ---------- - base : float > 0 - Interval between ticks. - offset : float - Value added to each multiple of *base*. - - .. 
versionadded:: 3.8 - """ - if base is not None: - self._edge = _Edge_integer(base, 0) - if offset is not None: - self._offset = offset - - def __call__(self): - """Return the locations of the ticks.""" - vmin, vmax = self.axis.get_view_interval() - return self.tick_values(vmin, vmax) - - def tick_values(self, vmin, vmax): - if vmax < vmin: - vmin, vmax = vmax, vmin - step = self._edge.step - vmin -= self._offset - vmax -= self._offset - vmin = self._edge.ge(vmin) * step - n = (vmax - vmin + 0.001 * step) // step - locs = vmin - step + np.arange(n + 3) * step + self._offset - return self.raise_if_exceeds(locs) - - def view_limits(self, dmin, dmax): - """ - Set the view limits to the nearest tick values that contain the data. - """ - if mpl.rcParams['axes.autolimit_mode'] == 'round_numbers': - vmin = self._edge.le(dmin - self._offset) * self._edge.step + self._offset - vmax = self._edge.ge(dmax - self._offset) * self._edge.step + self._offset - if vmin == vmax: - vmin -= 1 - vmax += 1 - else: - vmin = dmin - vmax = dmax - - return mtransforms.nonsingular(vmin, vmax) - - -def scale_range(vmin, vmax, n=1, threshold=100): - dv = abs(vmax - vmin) # > 0 as nonsingular is called before. - meanv = (vmax + vmin) / 2 - if abs(meanv) / dv < threshold: - offset = 0 - else: - offset = math.copysign(10 ** (math.log10(abs(meanv)) // 1), meanv) - scale = 10 ** (math.log10(dv / n) // 1) - return scale, offset - - -class _Edge_integer: - """ - Helper for `.MaxNLocator`, `.MultipleLocator`, etc. - - Take floating-point precision limitations into account when calculating - tick locations as integer multiples of a step. - """ - def __init__(self, step, offset): - """ - Parameters - ---------- - step : float > 0 - Interval between ticks. - offset : float - Offset subtracted from the data limits prior to calculating tick - locations. - """ - if step <= 0: - raise ValueError("'step' must be positive") - self.step = step - self._offset = abs(offset) - - def closeto(self, ms, edge): - # Allow more slop when the offset is large compared to the step. - if self._offset > 0: - digits = np.log10(self._offset / self.step) - tol = max(1e-10, 10 ** (digits - 12)) - tol = min(0.4999, tol) - else: - tol = 1e-10 - return abs(ms - edge) < tol - - def le(self, x): - """Return the largest n: n*step <= x.""" - d, m = divmod(x, self.step) - if self.closeto(m / self.step, 1): - return d + 1 - return d - - def ge(self, x): - """Return the smallest n: n*step >= x.""" - d, m = divmod(x, self.step) - if self.closeto(m / self.step, 0): - return d - return d + 1 - - -class MaxNLocator(Locator): - """ - Find nice tick locations with no more than *nbins* + 1 being within the - view limits. Locations beyond the limits are added to support autoscaling. - """ - default_params = dict(nbins=10, - steps=None, - integer=False, - symmetric=False, - prune=None, - min_n_ticks=2) - - def __init__(self, nbins=None, **kwargs): - """ - Parameters - ---------- - nbins : int or 'auto', default: 10 - Maximum number of intervals; one less than max number of - ticks. If the string 'auto', the number of bins will be - automatically determined based on the length of the axis. - - steps : array-like, optional - Sequence of acceptable tick multiples, starting with 1 and - ending with 10. For example, if ``steps=[1, 2, 4, 5, 10]``, - ``20, 40, 60`` or ``0.4, 0.6, 0.8`` would be possible - sets of ticks because they are multiples of 2. - ``30, 60, 90`` would not be generated because 3 does not - appear in this example list of steps. 
- - integer : bool, default: False - If True, ticks will take only integer values, provided at least - *min_n_ticks* integers are found within the view limits. - - symmetric : bool, default: False - If True, autoscaling will result in a range symmetric about zero. - - prune : {'lower', 'upper', 'both', None}, default: None - Remove edge ticks -- useful for stacked or ganged plots where - the upper tick of one axes overlaps with the lower tick of the - axes above it, primarily when :rc:`axes.autolimit_mode` is - ``'round_numbers'``. If ``prune=='lower'``, the smallest tick will - be removed. If ``prune == 'upper'``, the largest tick will be - removed. If ``prune == 'both'``, the largest and smallest ticks - will be removed. If *prune* is *None*, no ticks will be removed. - - min_n_ticks : int, default: 2 - Relax *nbins* and *integer* constraints if necessary to obtain - this minimum number of ticks. - """ - if nbins is not None: - kwargs['nbins'] = nbins - self.set_params(**{**self.default_params, **kwargs}) - - @staticmethod - def _validate_steps(steps): - if not np.iterable(steps): - raise ValueError('steps argument must be an increasing sequence ' - 'of numbers between 1 and 10 inclusive') - steps = np.asarray(steps) - if np.any(np.diff(steps) <= 0) or steps[-1] > 10 or steps[0] < 1: - raise ValueError('steps argument must be an increasing sequence ' - 'of numbers between 1 and 10 inclusive') - if steps[0] != 1: - steps = np.concatenate([[1], steps]) - if steps[-1] != 10: - steps = np.concatenate([steps, [10]]) - return steps - - @staticmethod - def _staircase(steps): - # Make an extended staircase within which the needed step will be - # found. This is probably much larger than necessary. - return np.concatenate([0.1 * steps[:-1], steps, [10 * steps[1]]]) - - def set_params(self, **kwargs): - """ - Set parameters for this locator. - - Parameters - ---------- - nbins : int or 'auto', optional - see `.MaxNLocator` - steps : array-like, optional - see `.MaxNLocator` - integer : bool, optional - see `.MaxNLocator` - symmetric : bool, optional - see `.MaxNLocator` - prune : {'lower', 'upper', 'both', None}, optional - see `.MaxNLocator` - min_n_ticks : int, optional - see `.MaxNLocator` - """ - if 'nbins' in kwargs: - self._nbins = kwargs.pop('nbins') - if self._nbins != 'auto': - self._nbins = int(self._nbins) - if 'symmetric' in kwargs: - self._symmetric = kwargs.pop('symmetric') - if 'prune' in kwargs: - prune = kwargs.pop('prune') - _api.check_in_list(['upper', 'lower', 'both', None], prune=prune) - self._prune = prune - if 'min_n_ticks' in kwargs: - self._min_n_ticks = max(1, kwargs.pop('min_n_ticks')) - if 'steps' in kwargs: - steps = kwargs.pop('steps') - if steps is None: - self._steps = np.array([1, 1.5, 2, 2.5, 3, 4, 5, 6, 8, 10]) - else: - self._steps = self._validate_steps(steps) - self._extended_steps = self._staircase(self._steps) - if 'integer' in kwargs: - self._integer = kwargs.pop('integer') - if kwargs: - raise _api.kwarg_error("set_params", kwargs) - - def _raw_ticks(self, vmin, vmax): - """ - Generate a list of tick locations including the range *vmin* to - *vmax*. In some applications, one or both of the end locations - will not be needed, in which case they are trimmed off - elsewhere. 
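- 
-         For example, through the public `tick_values` wrapper (a sketch
-         with the default rcParams, where :rc:`axes.autolimit_mode` is
-         "data"):
- 
-         >>> from matplotlib.ticker import MaxNLocator
-         >>> MaxNLocator(nbins=4).tick_values(-7, 7)
-         array([-8., -4.,  0.,  4.,  8.])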
- """ - if self._nbins == 'auto': - if self.axis is not None: - nbins = np.clip(self.axis.get_tick_space(), - max(1, self._min_n_ticks - 1), 9) - else: - nbins = 9 - else: - nbins = self._nbins - - scale, offset = scale_range(vmin, vmax, nbins) - _vmin = vmin - offset - _vmax = vmax - offset - steps = self._extended_steps * scale - if self._integer: - # For steps > 1, keep only integer values. - igood = (steps < 1) | (np.abs(steps - np.round(steps)) < 0.001) - steps = steps[igood] - - raw_step = ((_vmax - _vmin) / nbins) - large_steps = steps >= raw_step - if mpl.rcParams['axes.autolimit_mode'] == 'round_numbers': - # Classic round_numbers mode may require a larger step. - # Get first multiple of steps that are <= _vmin - floored_vmins = (_vmin // steps) * steps - floored_vmaxs = floored_vmins + steps * nbins - large_steps = large_steps & (floored_vmaxs >= _vmax) - - # Find index of smallest large step - istep = np.nonzero(large_steps)[0][0] - - # Start at smallest of the steps greater than the raw step, and check - # if it provides enough ticks. If not, work backwards through - # smaller steps until one is found that provides enough ticks. - for step in steps[:istep+1][::-1]: - - if (self._integer and - np.floor(_vmax) - np.ceil(_vmin) >= self._min_n_ticks - 1): - step = max(1, step) - best_vmin = (_vmin // step) * step - - # Find tick locations spanning the vmin-vmax range, taking into - # account degradation of precision when there is a large offset. - # The edge ticks beyond vmin and/or vmax are needed for the - # "round_numbers" autolimit mode. - edge = _Edge_integer(step, offset) - low = edge.le(_vmin - best_vmin) - high = edge.ge(_vmax - best_vmin) - ticks = np.arange(low, high + 1) * step + best_vmin - # Count only the ticks that will be displayed. - nticks = ((ticks <= _vmax) & (ticks >= _vmin)).sum() - if nticks >= self._min_n_ticks: - break - return ticks + offset - - def __call__(self): - vmin, vmax = self.axis.get_view_interval() - return self.tick_values(vmin, vmax) - - def tick_values(self, vmin, vmax): - if self._symmetric: - vmax = max(abs(vmin), abs(vmax)) - vmin = -vmax - vmin, vmax = mtransforms.nonsingular( - vmin, vmax, expander=1e-13, tiny=1e-14) - locs = self._raw_ticks(vmin, vmax) - - prune = self._prune - if prune == 'lower': - locs = locs[1:] - elif prune == 'upper': - locs = locs[:-1] - elif prune == 'both': - locs = locs[1:-1] - return self.raise_if_exceeds(locs) - - def view_limits(self, dmin, dmax): - if self._symmetric: - dmax = max(abs(dmin), abs(dmax)) - dmin = -dmax - - dmin, dmax = mtransforms.nonsingular( - dmin, dmax, expander=1e-12, tiny=1e-13) - - if mpl.rcParams['axes.autolimit_mode'] == 'round_numbers': - return self._raw_ticks(dmin, dmax)[[0, -1]] - else: - return dmin, dmax - - -def _is_decade(x, *, base=10, rtol=None): - """Return True if *x* is an integer power of *base*.""" - if not np.isfinite(x): - return False - if x == 0.0: - return True - lx = np.log(abs(x)) / np.log(base) - if rtol is None: - return np.isclose(lx, np.round(lx)) - else: - return np.isclose(lx, np.round(lx), rtol=rtol) - - -def _decade_less_equal(x, base): - """ - Return the largest integer power of *base* that's less or equal to *x*. - - If *x* is negative, the exponent will be *greater*. - """ - return (x if x == 0 else - -_decade_greater_equal(-x, base) if x < 0 else - base ** np.floor(np.log(x) / np.log(base))) - - -def _decade_greater_equal(x, base): - """ - Return the smallest integer power of *base* that's greater or equal to *x*. 
- - If *x* is negative, the exponent will be *smaller*. - """ - return (x if x == 0 else - -_decade_less_equal(-x, base) if x < 0 else - base ** np.ceil(np.log(x) / np.log(base))) - - -def _decade_less(x, base): - """ - Return the largest integer power of *base* that's less than *x*. - - If *x* is negative, the exponent will be *greater*. - """ - if x < 0: - return -_decade_greater(-x, base) - less = _decade_less_equal(x, base) - if less == x: - less /= base - return less - - -def _decade_greater(x, base): - """ - Return the smallest integer power of *base* that's greater than *x*. - - If *x* is negative, the exponent will be *smaller*. - """ - if x < 0: - return -_decade_less(-x, base) - greater = _decade_greater_equal(x, base) - if greater == x: - greater *= base - return greater - - -def _is_close_to_int(x): - return math.isclose(x, round(x)) - - -class LogLocator(Locator): - """ - - Determine the tick locations for log axes. - - Place ticks on the locations : ``subs[j] * base**i`` - - Parameters - ---------- - base : float, default: 10.0 - The base of the log used, so major ticks are placed at - ``base**n``, where ``n`` is an integer. - subs : None or {'auto', 'all'} or sequence of float, default: (1.0,) - Gives the multiples of integer powers of the base at which - to place ticks. The default of ``(1.0, )`` places ticks only at - integer powers of the base. - Permitted string values are ``'auto'`` and ``'all'``. - Both of these use an algorithm based on the axis view - limits to determine whether and how to put ticks between - integer powers of the base. With ``'auto'``, ticks are - placed only between integer powers; with ``'all'``, the - integer powers are included. A value of None is - equivalent to ``'auto'``. - numticks : None or int, default: None - The maximum number of ticks to allow on a given axis. The default - of ``None`` will try to choose intelligently as long as this - Locator has already been assigned to an axis using - `~.axis.Axis.get_tick_space`, but otherwise falls back to 9. - - """ - - @_api.delete_parameter("3.8", "numdecs") - def __init__(self, base=10.0, subs=(1.0,), numdecs=4, numticks=None): - """Place ticks on the locations : subs[j] * base**i.""" - if numticks is None: - if mpl.rcParams['_internal.classic_mode']: - numticks = 15 - else: - numticks = 'auto' - self._base = float(base) - self._set_subs(subs) - self._numdecs = numdecs - self.numticks = numticks - - @_api.delete_parameter("3.8", "numdecs") - def set_params(self, base=None, subs=None, numdecs=None, numticks=None): - """Set parameters within this locator.""" - if base is not None: - self._base = float(base) - if subs is not None: - self._set_subs(subs) - if numdecs is not None: - self._numdecs = numdecs - if numticks is not None: - self.numticks = numticks - - numdecs = _api.deprecate_privatize_attribute( - "3.8", addendum="This attribute has no effect.") - - def _set_subs(self, subs): - """ - Set the minor ticks for the log scaling every ``base**i*subs[j]``. 
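- 
-         For example, ``subs=(1.0, 2.0, 5.0)`` with the default base 10
-         requests ticks at 1, 2 and 5 times each power of ten, i.e.
-         ..., 0.5, 1, 2, 5, 10, 20, 50, ...; which of these are actually
-         returned depends on the view limits and *numticks*.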
- """ - if subs is None: # consistency with previous bad API - self._subs = 'auto' - elif isinstance(subs, str): - _api.check_in_list(('all', 'auto'), subs=subs) - self._subs = subs - else: - try: - self._subs = np.asarray(subs, dtype=float) - except ValueError as e: - raise ValueError("subs must be None, 'all', 'auto' or " - "a sequence of floats, not " - f"{subs}.") from e - if self._subs.ndim != 1: - raise ValueError("A sequence passed to subs must be " - "1-dimensional, not " - f"{self._subs.ndim}-dimensional.") - - def __call__(self): - """Return the locations of the ticks.""" - vmin, vmax = self.axis.get_view_interval() - return self.tick_values(vmin, vmax) - - def tick_values(self, vmin, vmax): - if self.numticks == 'auto': - if self.axis is not None: - numticks = np.clip(self.axis.get_tick_space(), 2, 9) - else: - numticks = 9 - else: - numticks = self.numticks - - b = self._base - if vmin <= 0.0: - if self.axis is not None: - vmin = self.axis.get_minpos() - - if vmin <= 0.0 or not np.isfinite(vmin): - raise ValueError( - "Data has no positive values, and therefore cannot be log-scaled.") - - _log.debug('vmin %s vmax %s', vmin, vmax) - - if vmax < vmin: - vmin, vmax = vmax, vmin - log_vmin = math.log(vmin) / math.log(b) - log_vmax = math.log(vmax) / math.log(b) - - numdec = math.floor(log_vmax) - math.ceil(log_vmin) - - if isinstance(self._subs, str): - if numdec > 10 or b < 3: - if self._subs == 'auto': - return np.array([]) # no minor or major ticks - else: - subs = np.array([1.0]) # major ticks - else: - _first = 2.0 if self._subs == 'auto' else 1.0 - subs = np.arange(_first, b) - else: - subs = self._subs - - # Get decades between major ticks. - stride = (max(math.ceil(numdec / (numticks - 1)), 1) - if mpl.rcParams['_internal.classic_mode'] else - numdec // numticks + 1) - - # if we have decided that the stride is as big or bigger than - # the range, clip the stride back to the available range - 1 - # with a floor of 1. This prevents getting axis with only 1 tick - # visible. - if stride >= numdec: - stride = max(1, numdec - 1) - - # Does subs include anything other than 1? Essentially a hack to know - # whether we're a major or a minor locator. - have_subs = len(subs) > 1 or (len(subs) == 1 and subs[0] != 1.0) - - decades = np.arange(math.floor(log_vmin) - stride, - math.ceil(log_vmax) + 2 * stride, stride) - - if have_subs: - if stride == 1: - ticklocs = np.concatenate( - [subs * decade_start for decade_start in b ** decades]) - else: - ticklocs = np.array([]) - else: - ticklocs = b ** decades - - _log.debug('ticklocs %r', ticklocs) - if (len(subs) > 1 - and stride == 1 - and ((vmin <= ticklocs) & (ticklocs <= vmax)).sum() <= 1): - # If we're a minor locator *that expects at least two ticks per - # decade* and the major locator stride is 1 and there's no more - # than one minor tick, switch to AutoLocator. - return AutoLocator().tick_values(vmin, vmax) - else: - return self.raise_if_exceeds(ticklocs) - - def view_limits(self, vmin, vmax): - """Try to choose the view limits intelligently.""" - b = self._base - - vmin, vmax = self.nonsingular(vmin, vmax) - - if mpl.rcParams['axes.autolimit_mode'] == 'round_numbers': - vmin = _decade_less_equal(vmin, b) - vmax = _decade_greater_equal(vmax, b) - - return vmin, vmax - - def nonsingular(self, vmin, vmax): - if vmin > vmax: - vmin, vmax = vmax, vmin - if not np.isfinite(vmin) or not np.isfinite(vmax): - vmin, vmax = 1, 10 # Initial range, no data plotted yet. 
- elif vmax <= 0: - _api.warn_external( - "Data has no positive values, and therefore cannot be " - "log-scaled.") - vmin, vmax = 1, 10 - else: - # Consider shared axises - minpos = min(axis.get_minpos() for axis in self.axis._get_shared_axis()) - if not np.isfinite(minpos): - minpos = 1e-300 # This should never take effect. - if vmin <= 0: - vmin = minpos - if vmin == vmax: - vmin = _decade_less(vmin, self._base) - vmax = _decade_greater(vmax, self._base) - return vmin, vmax - - -class SymmetricalLogLocator(Locator): - """ - Determine the tick locations for symmetric log axes. - """ - - def __init__(self, transform=None, subs=None, linthresh=None, base=None): - """ - Parameters - ---------- - transform : `~.scale.SymmetricalLogTransform`, optional - If set, defines the *base* and *linthresh* of the symlog transform. - base, linthresh : float, optional - The *base* and *linthresh* of the symlog transform, as documented - for `.SymmetricalLogScale`. These parameters are only used if - *transform* is not set. - subs : sequence of float, default: [1] - The multiples of integer powers of the base where ticks are placed, - i.e., ticks are placed at - ``[sub * base**i for i in ... for sub in subs]``. - - Notes - ----- - Either *transform*, or both *base* and *linthresh*, must be given. - """ - if transform is not None: - self._base = transform.base - self._linthresh = transform.linthresh - elif linthresh is not None and base is not None: - self._base = base - self._linthresh = linthresh - else: - raise ValueError("Either transform, or both linthresh " - "and base, must be provided.") - if subs is None: - self._subs = [1.0] - else: - self._subs = subs - self.numticks = 15 - - def set_params(self, subs=None, numticks=None): - """Set parameters within this locator.""" - if numticks is not None: - self.numticks = numticks - if subs is not None: - self._subs = subs - - def __call__(self): - """Return the locations of the ticks.""" - # Note, these are untransformed coordinates - vmin, vmax = self.axis.get_view_interval() - return self.tick_values(vmin, vmax) - - def tick_values(self, vmin, vmax): - linthresh = self._linthresh - - if vmax < vmin: - vmin, vmax = vmax, vmin - - # The domain is divided into three sections, only some of - # which may actually be present. - # - # <======== -t ==0== t ========> - # aaaaaaaaa bbbbb ccccccccc - # - # a) and c) will have ticks at integral log positions. The - # number of ticks needs to be reduced if there are more - # than self.numticks of them. - # - # b) has a tick at 0 and only 0 (we assume t is a small - # number, and the linear segment is just an implementation - # detail and not interesting.) - # - # We could also add ticks at t, but that seems to usually be - # uninteresting. 
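- 
-         # For example, with base 10, linthresh 1 and a view interval of
-         # (-1000, 1000), all three sections are present and the default
-         # subs=[1] yields -1000, -100, -10, -1, 0, 1, 10, 100, 1000.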
- # - # "simple" mode is when the range falls entirely within [-t, t] - # -- it should just display (vmin, 0, vmax) - if -linthresh <= vmin < vmax <= linthresh: - # only the linear range is present - return sorted({vmin, 0, vmax}) - - # Lower log range is present - has_a = (vmin < -linthresh) - # Upper log range is present - has_c = (vmax > linthresh) - - # Check if linear range is present - has_b = (has_a and vmax > -linthresh) or (has_c and vmin < linthresh) - - base = self._base - - def get_log_range(lo, hi): - lo = np.floor(np.log(lo) / np.log(base)) - hi = np.ceil(np.log(hi) / np.log(base)) - return lo, hi - - # Calculate all the ranges, so we can determine striding - a_lo, a_hi = (0, 0) - if has_a: - a_upper_lim = min(-linthresh, vmax) - a_lo, a_hi = get_log_range(abs(a_upper_lim), abs(vmin) + 1) - - c_lo, c_hi = (0, 0) - if has_c: - c_lower_lim = max(linthresh, vmin) - c_lo, c_hi = get_log_range(c_lower_lim, vmax + 1) - - # Calculate the total number of integer exponents in a and c ranges - total_ticks = (a_hi - a_lo) + (c_hi - c_lo) - if has_b: - total_ticks += 1 - stride = max(total_ticks // (self.numticks - 1), 1) - - decades = [] - if has_a: - decades.extend(-1 * (base ** (np.arange(a_lo, a_hi, - stride)[::-1]))) - - if has_b: - decades.append(0.0) - - if has_c: - decades.extend(base ** (np.arange(c_lo, c_hi, stride))) - - subs = np.asarray(self._subs) - - if len(subs) > 1 or subs[0] != 1.0: - ticklocs = [] - for decade in decades: - if decade == 0: - ticklocs.append(decade) - else: - ticklocs.extend(subs * decade) - else: - ticklocs = decades - - return self.raise_if_exceeds(np.array(ticklocs)) - - def view_limits(self, vmin, vmax): - """Try to choose the view limits intelligently.""" - b = self._base - if vmax < vmin: - vmin, vmax = vmax, vmin - - if mpl.rcParams['axes.autolimit_mode'] == 'round_numbers': - vmin = _decade_less_equal(vmin, b) - vmax = _decade_greater_equal(vmax, b) - if vmin == vmax: - vmin = _decade_less(vmin, b) - vmax = _decade_greater(vmax, b) - - return mtransforms.nonsingular(vmin, vmax) - - -class AsinhLocator(Locator): - """ - An axis tick locator specialized for the inverse-sinh scale - - This is very unlikely to have any use beyond - the `~.scale.AsinhScale` class. - - .. note:: - - This API is provisional and may be revised in the future - based on early user feedback. - """ - def __init__(self, linear_width, numticks=11, symthresh=0.2, - base=10, subs=None): - """ - Parameters - ---------- - linear_width : float - The scale parameter defining the extent - of the quasi-linear region. - numticks : int, default: 11 - The approximate number of major ticks that will fit - along the entire axis - symthresh : float, default: 0.2 - The fractional threshold beneath which data which covers - a range that is approximately symmetric about zero - will have ticks that are exactly symmetric. - base : int, default: 10 - The number base used for rounding tick locations - on a logarithmic scale. If this is less than one, - then rounding is to the nearest integer multiple - of powers of ten. - subs : tuple, default: None - Multiples of the number base, typically used - for the minor ticks, e.g. (2, 5) when base=10. 
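- 
-         As a rough sketch of the behaviour: with ``linear_width=1``, the
-         default base 10 and no *subs*, symmetric view limits of about
-         (-50, 50) yield ticks at approximately -10, -1, 0, 1 and 10.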
- """ - super().__init__() - self.linear_width = linear_width - self.numticks = numticks - self.symthresh = symthresh - self.base = base - self.subs = subs - - def set_params(self, numticks=None, symthresh=None, - base=None, subs=None): - """Set parameters within this locator.""" - if numticks is not None: - self.numticks = numticks - if symthresh is not None: - self.symthresh = symthresh - if base is not None: - self.base = base - if subs is not None: - self.subs = subs if len(subs) > 0 else None - - def __call__(self): - vmin, vmax = self.axis.get_view_interval() - if (vmin * vmax) < 0 and abs(1 + vmax / vmin) < self.symthresh: - # Data-range appears to be almost symmetric, so round up: - bound = max(abs(vmin), abs(vmax)) - return self.tick_values(-bound, bound) - else: - return self.tick_values(vmin, vmax) - - def tick_values(self, vmin, vmax): - # Construct a set of "on-screen" locations - # that are uniformly spaced: - ymin, ymax = self.linear_width * np.arcsinh(np.array([vmin, vmax]) - / self.linear_width) - ys = np.linspace(ymin, ymax, self.numticks) - zero_dev = np.abs(ys / (ymax - ymin)) - if (ymin * ymax) < 0: - # Ensure that the zero tick-mark is included, - # if the axis straddles zero - ys = np.hstack([ys[(zero_dev > 0.5 / self.numticks)], 0.0]) - - # Transform the "on-screen" grid to the data space: - xs = self.linear_width * np.sinh(ys / self.linear_width) - zero_xs = (ys == 0) - - # Round the data-space values to be intuitive base-n numbers, - # keeping track of positive and negative values separately, - # but giving careful treatment to the zero value: - if self.base > 1: - log_base = math.log(self.base) - powers = ( - np.where(zero_xs, 0, np.sign(xs)) * - np.power(self.base, - np.where(zero_xs, 0.0, - np.floor(np.log(np.abs(xs) + zero_xs*1e-6) - / log_base))) - ) - if self.subs: - qs = np.outer(powers, self.subs).flatten() - else: - qs = powers - else: - powers = ( - np.where(xs >= 0, 1, -1) * - np.power(10, np.where(zero_xs, 0.0, - np.floor(np.log10(np.abs(xs) - + zero_xs*1e-6)))) - ) - qs = powers * np.round(xs / powers) - ticks = np.array(sorted(set(qs))) - - if len(ticks) >= 2: - return ticks - else: - return np.linspace(vmin, vmax, self.numticks) - - -class LogitLocator(MaxNLocator): - """ - Determine the tick locations for logit axes - """ - - def __init__(self, minor=False, *, nbins="auto"): - """ - Place ticks on the logit locations - - Parameters - ---------- - nbins : int or 'auto', optional - Number of ticks. Only used if minor is False. - minor : bool, default: False - Indicate if this locator is for minor ticks or not. - """ - - self._minor = minor - super().__init__(nbins=nbins, steps=[1, 2, 5, 10]) - - def set_params(self, minor=None, **kwargs): - """Set parameters within this locator.""" - if minor is not None: - self._minor = minor - super().set_params(**kwargs) - - @property - def minor(self): - return self._minor - - @minor.setter - def minor(self, value): - self.set_params(minor=value) - - def tick_values(self, vmin, vmax): - # dummy axis has no axes attribute - if hasattr(self.axis, "axes") and self.axis.axes.name == "polar": - raise NotImplementedError("Polar axis cannot be logit scaled yet") - - if self._nbins == "auto": - if self.axis is not None: - nbins = self.axis.get_tick_space() - if nbins < 2: - nbins = 2 - else: - nbins = 9 - else: - nbins = self._nbins - - # We define ideal ticks with their index: - # linscale: ... 1e-3 1e-2 1e-1 1/2 1-1e-1 1-1e-2 1-1e-3 ... - # b-scale : ... -3 -2 -1 0 1 2 3 ... 
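# e.g. b = -2 maps to 1e-2, b = 0 maps to 1/2, and b = 2 maps to
-         # 1 - 1e-2, as computed by ideal_ticks below.
- 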
-        def ideal_ticks(x):
-            return 10 ** x if x < 0 else 1 - (10 ** (-x)) if x > 0 else 0.5
-
-        vmin, vmax = self.nonsingular(vmin, vmax)
-        binf = int(
-            np.floor(np.log10(vmin))
-            if vmin < 0.5
-            else 0
-            if vmin < 0.9
-            else -np.ceil(np.log10(1 - vmin))
-        )
-        bsup = int(
-            np.ceil(np.log10(vmax))
-            if vmax <= 0.5
-            else 1
-            if vmax <= 0.9
-            else -np.floor(np.log10(1 - vmax))
-        )
-        numideal = bsup - binf - 1
-        if numideal >= 2:
-            # have 2 or more wanted ideal ticks, so use them as major ticks
-            if numideal > nbins:
-                # too many ideal ticks; subsample them for the major ticks
-                # and take the others for the minor ticks
-                subsampling_factor = math.ceil(numideal / nbins)
-                if self._minor:
-                    ticklocs = [
-                        ideal_ticks(b)
-                        for b in range(binf, bsup + 1)
-                        if (b % subsampling_factor) != 0
-                    ]
-                else:
-                    ticklocs = [
-                        ideal_ticks(b)
-                        for b in range(binf, bsup + 1)
-                        if (b % subsampling_factor) == 0
-                    ]
-                return self.raise_if_exceeds(np.array(ticklocs))
-            if self._minor:
-                ticklocs = []
-                for b in range(binf, bsup):
-                    if b < -1:
-                        ticklocs.extend(np.arange(2, 10) * 10 ** b)
-                    elif b == -1:
-                        ticklocs.extend(np.arange(2, 5) / 10)
-                    elif b == 0:
-                        ticklocs.extend(np.arange(6, 9) / 10)
-                    else:
-                        ticklocs.extend(
-                            1 - np.arange(2, 10)[::-1] * 10 ** (-b - 1)
-                        )
-                return self.raise_if_exceeds(np.array(ticklocs))
-            ticklocs = [ideal_ticks(b) for b in range(binf, bsup + 1)]
-            return self.raise_if_exceeds(np.array(ticklocs))
-        # the scale is zoomed so same ticks as linear scale can be used
-        if self._minor:
-            return []
-        return super().tick_values(vmin, vmax)
-
-    def nonsingular(self, vmin, vmax):
-        standard_minpos = 1e-7
-        initial_range = (standard_minpos, 1 - standard_minpos)
-        if vmin > vmax:
-            vmin, vmax = vmax, vmin
-        if not np.isfinite(vmin) or not np.isfinite(vmax):
-            vmin, vmax = initial_range  # Initial range, no data plotted yet.
-        elif vmax <= 0 or vmin >= 1:
-            # vmax <= 0 occurs when all values are negative
-            # vmin >= 1 occurs when all values are greater than one
-            _api.warn_external(
-                "Data has no values between 0 and 1, and therefore cannot be "
-                "logit-scaled."
-            )
-            vmin, vmax = initial_range
-        else:
-            minpos = (
-                self.axis.get_minpos()
-                if self.axis is not None
-                else standard_minpos
-            )
-            if not np.isfinite(minpos):
-                minpos = standard_minpos  # This should never take effect.
-            if vmin <= 0:
-                vmin = minpos
-            # NOTE: for vmax, we should query a property similar to get_minpos,
-            # but related to the maximal, less-than-one data point.
-            # Unfortunately, Bbox._minpos is defined very deep in the BBox and
-            # updated with data, so for now we use 1 - minpos as a substitute.
-            if vmax >= 1:
-                vmax = 1 - minpos
-        if vmin == vmax:
-            vmin, vmax = 0.1 * vmin, 1 - 0.1 * vmin
-
-        return vmin, vmax
-
-
-class AutoLocator(MaxNLocator):
-    """
-    Dynamically find major tick positions. This is actually a subclass
-    of `~matplotlib.ticker.MaxNLocator`, with parameters *nbins = 'auto'*
-    and *steps = [1, 2, 2.5, 5, 10]*.
-    """
-    def __init__(self):
-        """
-        To know the values of the non-public parameters, please have a
-        look at the defaults of `~matplotlib.ticker.MaxNLocator`.
-        """
-        if mpl.rcParams['_internal.classic_mode']:
-            nbins = 9
-            steps = [1, 2, 5, 10]
-        else:
-            nbins = 'auto'
-            steps = [1, 2, 2.5, 5, 10]
-        super().__init__(nbins=nbins, steps=steps)
-
-
-class AutoMinorLocator(Locator):
-    """
-    Dynamically find minor tick positions based on the positions of
-    major ticks. The scale must be linear with major ticks evenly spaced.
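-
-    For example, ``ax.xaxis.set_minor_locator(AutoMinorLocator(2))`` places
-    a single minor tick midway between each pair of major ticks, per the
-    *n* parameter described below.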
- """ - def __init__(self, n=None): - """ - *n* is the number of subdivisions of the interval between - major ticks; e.g., n=2 will place a single minor tick midway - between major ticks. - - If *n* is omitted or None, the value stored in rcParams will be used. - In case *n* is set to 'auto', it will be set to 4 or 5. If the distance - between the major ticks equals 1, 2.5, 5 or 10 it can be perfectly - divided in 5 equidistant sub-intervals with a length multiple of - 0.05. Otherwise it is divided in 4 sub-intervals. - """ - self.ndivs = n - - def __call__(self): - """Return the locations of the ticks.""" - if self.axis.get_scale() == 'log': - _api.warn_external('AutoMinorLocator does not work with ' - 'logarithmic scale') - return [] - - majorlocs = self.axis.get_majorticklocs() - try: - majorstep = majorlocs[1] - majorlocs[0] - except IndexError: - # Need at least two major ticks to find minor tick locations - # TODO: Figure out a way to still be able to display minor - # ticks without two major ticks visible. For now, just display - # no ticks at all. - return [] - - if self.ndivs is None: - - if self.axis.axis_name == 'y': - self.ndivs = mpl.rcParams['ytick.minor.ndivs'] - else: - # for x and z axis - self.ndivs = mpl.rcParams['xtick.minor.ndivs'] - - if self.ndivs == 'auto': - - majorstep_no_exponent = 10 ** (np.log10(majorstep) % 1) - - if np.isclose(majorstep_no_exponent, [1.0, 2.5, 5.0, 10.0]).any(): - ndivs = 5 - else: - ndivs = 4 - else: - ndivs = self.ndivs - - minorstep = majorstep / ndivs - - vmin, vmax = self.axis.get_view_interval() - if vmin > vmax: - vmin, vmax = vmax, vmin - - t0 = majorlocs[0] - tmin = round((vmin - t0) / minorstep) - tmax = round((vmax - t0) / minorstep) + 1 - locs = (np.arange(tmin, tmax) * minorstep) + t0 - - return self.raise_if_exceeds(locs) - - def tick_values(self, vmin, vmax): - raise NotImplementedError('Cannot get tick locations for a ' - '%s type.' 
% type(self)) diff --git a/spaces/profayle/TerrapinTalk/myenv/lib/python3.9/site-packages/numpy/f2py/tests/src/common/block.f b/spaces/profayle/TerrapinTalk/myenv/lib/python3.9/site-packages/numpy/f2py/tests/src/common/block.f deleted file mode 100644 index 7ea7968fe935182bd17a697b316569546937b715..0000000000000000000000000000000000000000 --- a/spaces/profayle/TerrapinTalk/myenv/lib/python3.9/site-packages/numpy/f2py/tests/src/common/block.f +++ /dev/null @@ -1,11 +0,0 @@ - SUBROUTINE INITCB - DOUBLE PRECISION LONG - CHARACTER STRING - INTEGER OK - - COMMON /BLOCK/ LONG, STRING, OK - LONG = 1.0 - STRING = '2' - OK = 3 - RETURN - END diff --git a/spaces/profayle/TerrapinTalk/myenv/lib/python3.9/site-packages/pandas/core/series.py b/spaces/profayle/TerrapinTalk/myenv/lib/python3.9/site-packages/pandas/core/series.py deleted file mode 100644 index 8ad049e173507083b346bb3fa4c07502ff4353f1..0000000000000000000000000000000000000000 --- a/spaces/profayle/TerrapinTalk/myenv/lib/python3.9/site-packages/pandas/core/series.py +++ /dev/null @@ -1,6303 +0,0 @@ -""" -Data structure for 1-dimensional cross-sectional and time series data -""" -from __future__ import annotations - -from collections.abc import ( - Hashable, - Iterable, - Mapping, - Sequence, -) -import operator -import sys -from textwrap import dedent -from typing import ( - IO, - TYPE_CHECKING, - Any, - Callable, - Literal, - cast, - overload, -) -import warnings -import weakref - -import numpy as np - -from pandas._config import ( - get_option, - using_copy_on_write, -) - -from pandas._libs import ( - lib, - properties, - reshape, -) -from pandas._libs.lib import is_range_indexer -from pandas.compat import PYPY -from pandas.compat._constants import REF_COUNT -from pandas.compat._optional import import_optional_dependency -from pandas.compat.numpy import function as nv -from pandas.errors import ( - ChainedAssignmentError, - InvalidIndexError, - _chained_assignment_method_msg, - _chained_assignment_msg, -) -from pandas.util._decorators import ( - Appender, - Substitution, - doc, -) -from pandas.util._exceptions import find_stack_level -from pandas.util._validators import ( - validate_ascending, - validate_bool_kwarg, - validate_percentile, -) - -from pandas.core.dtypes.astype import astype_is_view -from pandas.core.dtypes.cast import ( - LossySetitemError, - convert_dtypes, - maybe_box_native, - maybe_cast_pointwise_result, -) -from pandas.core.dtypes.common import ( - is_dict_like, - is_integer, - is_iterator, - is_list_like, - is_object_dtype, - is_scalar, - pandas_dtype, - validate_all_hashable, -) -from pandas.core.dtypes.dtypes import ( - ArrowDtype, - ExtensionDtype, -) -from pandas.core.dtypes.generic import ABCDataFrame -from pandas.core.dtypes.inference import is_hashable -from pandas.core.dtypes.missing import ( - isna, - na_value_for_dtype, - notna, - remove_na_arraylike, -) - -from pandas.core import ( - algorithms, - base, - common as com, - missing, - nanops, - ops, - roperator, -) -from pandas.core.accessor import CachedAccessor -from pandas.core.apply import SeriesApply -from pandas.core.arrays import ExtensionArray -from pandas.core.arrays.categorical import CategoricalAccessor -from pandas.core.arrays.sparse import SparseAccessor -from pandas.core.construction import ( - extract_array, - sanitize_array, -) -from pandas.core.generic import ( - NDFrame, - make_doc, -) -from pandas.core.indexers import ( - disallow_ndim_indexing, - unpack_1tuple, -) -from pandas.core.indexes.accessors import CombinedDatetimelikeProperties 
-from pandas.core.indexes.api import ( - DatetimeIndex, - Index, - MultiIndex, - PeriodIndex, - default_index, - ensure_index, -) -import pandas.core.indexes.base as ibase -from pandas.core.indexes.multi import maybe_droplevels -from pandas.core.indexing import ( - check_bool_indexer, - check_dict_or_set_indexers, -) -from pandas.core.internals import ( - SingleArrayManager, - SingleBlockManager, -) -from pandas.core.methods import selectn -from pandas.core.shared_docs import _shared_docs -from pandas.core.sorting import ( - ensure_key_mapped, - nargsort, -) -from pandas.core.strings.accessor import StringMethods -from pandas.core.tools.datetimes import to_datetime - -import pandas.io.formats.format as fmt -from pandas.io.formats.info import ( - INFO_DOCSTRING, - SeriesInfo, - series_sub_kwargs, -) -import pandas.plotting - -if TYPE_CHECKING: - from pandas._libs.internals import BlockValuesRefs - from pandas._typing import ( - AggFuncType, - AnyAll, - AnyArrayLike, - ArrayLike, - Axis, - AxisInt, - CorrelationMethod, - DropKeep, - Dtype, - DtypeBackend, - DtypeObj, - FilePath, - IgnoreRaise, - IndexKeyFunc, - IndexLabel, - Level, - NaPosition, - NumpySorter, - NumpyValueArrayLike, - QuantileInterpolation, - ReindexMethod, - Renamer, - Scalar, - Self, - SingleManager, - SortKind, - StorageOptions, - Suffixes, - ValueKeyFunc, - WriteBuffer, - npt, - ) - - from pandas.core.frame import DataFrame - from pandas.core.groupby.generic import SeriesGroupBy - -__all__ = ["Series"] - -_shared_doc_kwargs = { - "axes": "index", - "klass": "Series", - "axes_single_arg": "{0 or 'index'}", - "axis": """axis : {0 or 'index'} - Unused. Parameter needed for compatibility with DataFrame.""", - "inplace": """inplace : bool, default False - If True, performs operation inplace and returns None.""", - "unique": "np.ndarray", - "duplicated": "Series", - "optional_by": "", - "optional_reindex": """ -index : array-like, optional - New labels for the index. Preferably an Index object to avoid - duplicating data. -axis : int or str, optional - Unused.""", -} - - -def _coerce_method(converter): - """ - Install the scalar coercion methods. - """ - - def wrapper(self): - if len(self) == 1: - warnings.warn( - f"Calling {converter.__name__} on a single element Series is " - "deprecated and will raise a TypeError in the future. " - f"Use {converter.__name__}(ser.iloc[0]) instead", - FutureWarning, - stacklevel=find_stack_level(), - ) - return converter(self.iloc[0]) - raise TypeError(f"cannot convert the series to {converter}") - - wrapper.__name__ = f"__{converter.__name__}__" - return wrapper - - -# ---------------------------------------------------------------------- -# Series class - - -# error: Cannot override final attribute "ndim" (previously declared in base -# class "NDFrame") -# error: Cannot override final attribute "size" (previously declared in base -# class "NDFrame") -# definition in base class "NDFrame" -class Series(base.IndexOpsMixin, NDFrame): # type: ignore[misc] - """ - One-dimensional ndarray with axis labels (including time series). - - Labels need not be unique but must be a hashable type. The object - supports both integer- and label-based indexing and provides a host of - methods for performing operations involving the index. Statistical - methods from ndarray have been overridden to automatically exclude - missing data (currently represented as NaN). - - Operations between Series (+, -, /, \\*, \\*\\*) align values based on their - associated index values-- they need not be the same length. 
The result
-    index will be the sorted union of the two indexes.
-
-    Parameters
-    ----------
-    data : array-like, Iterable, dict, or scalar value
-        Contains data stored in Series. If data is a dict, argument order is
-        maintained.
-    index : array-like or Index (1d)
-        Values must be hashable and have the same length as `data`.
-        Non-unique index values are allowed. Will default to
-        RangeIndex (0, 1, 2, ..., n) if not provided. If data is dict-like
-        and index is None, then the keys in the data are used as the index. If the
-        index is not None, the resulting Series is reindexed with the index values.
-    dtype : str, numpy.dtype, or ExtensionDtype, optional
-        Data type for the output Series. If not specified, this will be
-        inferred from `data`.
-        See the :ref:`user guide ` for more usages.
-    name : Hashable, default None
-        The name to give to the Series.
-    copy : bool, default False
-        Copy input data. Only affects Series or 1d ndarray input. See examples.
-
-    Notes
-    -----
-    Please reference the :ref:`User Guide ` for more information.
-
-    Examples
-    --------
-    Constructing Series from a dictionary with an Index specified
-
-    >>> d = {'a': 1, 'b': 2, 'c': 3}
-    >>> ser = pd.Series(data=d, index=['a', 'b', 'c'])
-    >>> ser
-    a   1
-    b   2
-    c   3
-    dtype: int64
-
-    The keys of the dictionary match the Index values, hence the Index
-    values have no effect.
-
-    >>> d = {'a': 1, 'b': 2, 'c': 3}
-    >>> ser = pd.Series(data=d, index=['x', 'y', 'z'])
-    >>> ser
-    x   NaN
-    y   NaN
-    z   NaN
-    dtype: float64
-
-    Note that the Index is first built with the keys from the dictionary.
-    After this the Series is reindexed with the given Index values, hence we
-    get all NaN as a result.
-
-    Constructing Series from a list with `copy=False`.
-
-    >>> r = [1, 2]
-    >>> ser = pd.Series(r, copy=False)
-    >>> ser.iloc[0] = 999
-    >>> r
-    [1, 2]
-    >>> ser
-    0    999
-    1      2
-    dtype: int64
-
-    Because of the input data type, the Series has a `copy` of
-    the original data even though `copy=False`, so
-    the data is unchanged.
-
-    Constructing Series from a 1d ndarray with `copy=False`.
-
-    >>> r = np.array([1, 2])
-    >>> ser = pd.Series(r, copy=False)
-    >>> ser.iloc[0] = 999
-    >>> r
-    array([999,   2])
-    >>> ser
-    0    999
-    1      2
-    dtype: int64
-
-    Because of the input data type, the Series has a `view` on
-    the original data, so
-    the data is changed as well.
-    """
-
-    _typ = "series"
-    _HANDLED_TYPES = (Index, ExtensionArray, np.ndarray)
-
-    _name: Hashable
-    _metadata: list[str] = ["_name"]
-    _internal_names_set = {"index", "name"} | NDFrame._internal_names_set
-    _accessors = {"dt", "cat", "str", "sparse"}
-    _hidden_attrs = (
-        base.IndexOpsMixin._hidden_attrs | NDFrame._hidden_attrs | frozenset([])
-    )
-
-    # similar to __array_priority__, positions Series after DataFrame
-    # but before Index and ExtensionArray. Should NOT be overridden by subclasses.
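-    # For example, in ``ser + df`` the higher-priority DataFrame takes over
-    # the binary op, while in ``ser + idx`` the Series stays in control, per
-    # the ordering described above.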
- __pandas_priority__ = 3000 - - # Override cache_readonly bc Series is mutable - # error: Incompatible types in assignment (expression has type "property", - # base class "IndexOpsMixin" defined the type as "Callable[[IndexOpsMixin], bool]") - hasnans = property( # type: ignore[assignment] - # error: "Callable[[IndexOpsMixin], bool]" has no attribute "fget" - base.IndexOpsMixin.hasnans.fget, # type: ignore[attr-defined] - doc=base.IndexOpsMixin.hasnans.__doc__, - ) - _mgr: SingleManager - - # ---------------------------------------------------------------------- - # Constructors - - def __init__( - self, - data=None, - index=None, - dtype: Dtype | None = None, - name=None, - copy: bool | None = None, - fastpath: bool = False, - ) -> None: - if ( - isinstance(data, (SingleBlockManager, SingleArrayManager)) - and index is None - and dtype is None - and (copy is False or copy is None) - ): - if using_copy_on_write(): - data = data.copy(deep=False) - # GH#33357 called with just the SingleBlockManager - NDFrame.__init__(self, data) - if fastpath: - # e.g. from _box_col_values, skip validation of name - object.__setattr__(self, "_name", name) - else: - self.name = name - return - - if isinstance(data, (ExtensionArray, np.ndarray)): - if copy is not False and using_copy_on_write(): - if dtype is None or astype_is_view(data.dtype, pandas_dtype(dtype)): - data = data.copy() - if copy is None: - copy = False - - # we are called internally, so short-circuit - if fastpath: - # data is a ndarray, index is defined - if not isinstance(data, (SingleBlockManager, SingleArrayManager)): - manager = get_option("mode.data_manager") - if manager == "block": - data = SingleBlockManager.from_array(data, index) - elif manager == "array": - data = SingleArrayManager.from_array(data, index) - elif using_copy_on_write() and not copy: - data = data.copy(deep=False) - if copy: - data = data.copy() - # skips validation of the name - object.__setattr__(self, "_name", name) - NDFrame.__init__(self, data) - return - - if isinstance(data, SingleBlockManager) and using_copy_on_write() and not copy: - data = data.copy(deep=False) - - name = ibase.maybe_extract_name(name, data, type(self)) - - if index is not None: - index = ensure_index(index) - - if dtype is not None: - dtype = self._validate_dtype(dtype) - - if data is None: - index = index if index is not None else default_index(0) - if len(index) or dtype is not None: - data = na_value_for_dtype(pandas_dtype(dtype), compat=False) - else: - data = [] - - if isinstance(data, MultiIndex): - raise NotImplementedError( - "initializing a Series from a MultiIndex is not supported" - ) - - refs = None - if isinstance(data, Index): - if dtype is not None: - data = data.astype(dtype, copy=False) - - if using_copy_on_write(): - refs = data._references - data = data._values - else: - # GH#24096 we need to ensure the index remains immutable - data = data._values.copy() - copy = False - - elif isinstance(data, np.ndarray): - if len(data.dtype): - # GH#13296 we are dealing with a compound dtype, which - # should be treated as 2D - raise ValueError( - "Cannot construct a Series from an ndarray with " - "compound dtype. Use DataFrame instead." 
-                )
-        elif isinstance(data, Series):
-            if index is None:
-                index = data.index
-                data = data._mgr.copy(deep=False)
-            else:
-                data = data.reindex(index, copy=copy)
-                copy = False
-                data = data._mgr
-        elif is_dict_like(data):
-            data, index = self._init_dict(data, index, dtype)
-            dtype = None
-            copy = False
-        elif isinstance(data, (SingleBlockManager, SingleArrayManager)):
-            if index is None:
-                index = data.index
-            elif not data.index.equals(index) or copy:
-                # GH#19275 SingleBlockManager input should only be called
-                # internally
-                raise AssertionError(
-                    "Cannot pass both SingleBlockManager "
-                    "`data` argument and a different "
-                    "`index` argument. `copy` must be False."
-                )
-
-        elif isinstance(data, ExtensionArray):
-            pass
-        else:
-            data = com.maybe_iterable_to_list(data)
-            if is_list_like(data) and not len(data) and dtype is None:
-                # GH 29405: Pre-2.0, this defaulted to float.
-                dtype = np.dtype(object)
-
-        if index is None:
-            if not is_list_like(data):
-                data = [data]
-            index = default_index(len(data))
-        elif is_list_like(data):
-            com.require_length_match(data, index)
-
-        # create/copy the manager
-        if isinstance(data, (SingleBlockManager, SingleArrayManager)):
-            if dtype is not None:
-                data = data.astype(dtype=dtype, errors="ignore", copy=copy)
-            elif copy:
-                data = data.copy()
-        else:
-            data = sanitize_array(data, index, dtype, copy)
-
-            manager = get_option("mode.data_manager")
-            if manager == "block":
-                data = SingleBlockManager.from_array(data, index, refs=refs)
-            elif manager == "array":
-                data = SingleArrayManager.from_array(data, index)
-
-        NDFrame.__init__(self, data)
-        self.name = name
-        self._set_axis(0, index)
-
-    def _init_dict(
-        self, data, index: Index | None = None, dtype: DtypeObj | None = None
-    ):
-        """
-        Derive the "_mgr" and "index" attributes of a new Series from a
-        dictionary input.
-
-        Parameters
-        ----------
-        data : dict or dict-like
-            Data used to populate the new Series.
-        index : Index or None, default None
-            Index for the new Series: if None, use dict keys.
-        dtype : np.dtype, ExtensionDtype, or None, default None
-            The dtype for the new Series: if None, infer from data.
-
-        Returns
-        -------
-        _data : BlockManager for the new Series
-        index : index for the new Series
-        """
-        keys: Index | tuple
-
-        # Looking for NaN in dict doesn't work ({np.nan : 1}[float('nan')]
-        # raises KeyError), so we iterate the entire dict, and align
-        if data:
-            # GH#34717: the previous implementation extracted the keys and
-            # values with zip; going through generators hurt performance, so
-            # build the key tuple and the value list directly.
-
-            keys = tuple(data.keys())
-            values = list(data.values())  # generating a list of values is faster
-        elif index is not None:
-            # fastpath for Series(data=None). Just use broadcasting a scalar
-            # instead of reindexing.
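-            # e.g. (illustrative) Series({}, index=["a", "b"]) takes this
-            # branch and broadcasts the dtype's NA value over the index.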
- if len(index) or dtype is not None: - values = na_value_for_dtype(pandas_dtype(dtype), compat=False) - else: - values = [] - keys = index - else: - keys, values = default_index(0), [] - - # Input is now list-like, so rely on "standard" construction: - s = Series(values, index=keys, dtype=dtype) - - # Now we just make sure the order is respected, if any - if data and index is not None: - s = s.reindex(index, copy=False) - return s._mgr, s.index - - # ---------------------------------------------------------------------- - - @property - def _constructor(self) -> Callable[..., Series]: - return Series - - def _constructor_from_mgr(self, mgr, axes): - if self._constructor is Series: - # we are pandas.Series (or a subclass that doesn't override _constructor) - ser = self._from_mgr(mgr, axes=axes) - ser._name = None # caller is responsible for setting real name - return ser - else: - assert axes is mgr.axes - return self._constructor(mgr) - - @property - def _constructor_expanddim(self) -> Callable[..., DataFrame]: - """ - Used when a manipulation result has one higher dimension as the - original, such as Series.to_frame() - """ - from pandas.core.frame import DataFrame - - return DataFrame - - def _expanddim_from_mgr(self, mgr, axes) -> DataFrame: - # https://github.com/pandas-dev/pandas/pull/52132#issuecomment-1481491828 - # This is a short-term implementation that will be replaced - # with self._constructor_expanddim._constructor_from_mgr(...) - # once downstream packages (geopandas) have had a chance to implement - # their own overrides. - # error: "Callable[..., DataFrame]" has no attribute "_from_mgr" [attr-defined] - from pandas import DataFrame - - return DataFrame._from_mgr(mgr, axes=mgr.axes) - - def _constructor_expanddim_from_mgr(self, mgr, axes): - from pandas.core.frame import DataFrame - - if self._constructor_expanddim is DataFrame: - return self._expanddim_from_mgr(mgr, axes) - assert axes is mgr.axes - return self._constructor_expanddim(mgr) - - # types - @property - def _can_hold_na(self) -> bool: - return self._mgr._can_hold_na - - # ndarray compatibility - @property - def dtype(self) -> DtypeObj: - """ - Return the dtype object of the underlying data. - - Examples - -------- - >>> s = pd.Series([1, 2, 3]) - >>> s.dtype - dtype('int64') - """ - return self._mgr.dtype - - @property - def dtypes(self) -> DtypeObj: - """ - Return the dtype object of the underlying data. - - Examples - -------- - >>> s = pd.Series([1, 2, 3]) - >>> s.dtypes - dtype('int64') - """ - # DataFrame compatibility - return self.dtype - - @property - def name(self) -> Hashable: - """ - Return the name of the Series. - - The name of a Series becomes its index or column name if it is used - to form a DataFrame. It is also used whenever displaying the Series - using the interpreter. - - Returns - ------- - label (hashable object) - The name of the Series, also the column name if part of a DataFrame. - - See Also - -------- - Series.rename : Sets the Series name when given a scalar input. - Index.name : Corresponding Index property. - - Examples - -------- - The Series name can be set initially when calling the constructor. - - >>> s = pd.Series([1, 2, 3], dtype=np.int64, name='Numbers') - >>> s - 0 1 - 1 2 - 2 3 - Name: Numbers, dtype: int64 - >>> s.name = "Integers" - >>> s - 0 1 - 1 2 - 2 3 - Name: Integers, dtype: int64 - - The name of a Series within a DataFrame is its column name. - - >>> df = pd.DataFrame([[1, 2], [3, 4], [5, 6]], - ... 
columns=["Odd Numbers", "Even Numbers"]) - >>> df - Odd Numbers Even Numbers - 0 1 2 - 1 3 4 - 2 5 6 - >>> df["Even Numbers"].name - 'Even Numbers' - """ - return self._name - - @name.setter - def name(self, value: Hashable) -> None: - validate_all_hashable(value, error_name=f"{type(self).__name__}.name") - object.__setattr__(self, "_name", value) - - @property - def values(self): - """ - Return Series as ndarray or ndarray-like depending on the dtype. - - .. warning:: - - We recommend using :attr:`Series.array` or - :meth:`Series.to_numpy`, depending on whether you need - a reference to the underlying data or a NumPy array. - - Returns - ------- - numpy.ndarray or ndarray-like - - See Also - -------- - Series.array : Reference to the underlying data. - Series.to_numpy : A NumPy array representing the underlying data. - - Examples - -------- - >>> pd.Series([1, 2, 3]).values - array([1, 2, 3]) - - >>> pd.Series(list('aabc')).values - array(['a', 'a', 'b', 'c'], dtype=object) - - >>> pd.Series(list('aabc')).astype('category').values - ['a', 'a', 'b', 'c'] - Categories (3, object): ['a', 'b', 'c'] - - Timezone aware datetime data is converted to UTC: - - >>> pd.Series(pd.date_range('20130101', periods=3, - ... tz='US/Eastern')).values - array(['2013-01-01T05:00:00.000000000', - '2013-01-02T05:00:00.000000000', - '2013-01-03T05:00:00.000000000'], dtype='datetime64[ns]') - """ - return self._mgr.external_values() - - @property - def _values(self): - """ - Return the internal repr of this data (defined by Block.interval_values). - This are the values as stored in the Block (ndarray or ExtensionArray - depending on the Block class), with datetime64[ns] and timedelta64[ns] - wrapped in ExtensionArrays to match Index._values behavior. - - Differs from the public ``.values`` for certain data types, because of - historical backwards compatibility of the public attribute (e.g. period - returns object ndarray and datetimetz a datetime64[ns] ndarray for - ``.values`` while it returns an ExtensionArray for ``._values`` in those - cases). - - Differs from ``.array`` in that this still returns the numpy array if - the Block is backed by a numpy array (except for datetime64 and - timedelta64 dtypes), while ``.array`` ensures to always return an - ExtensionArray. - - Overview: - - dtype | values | _values | array | - ----------- | ------------- | ------------- | --------------------- | - Numeric | ndarray | ndarray | NumpyExtensionArray | - Category | Categorical | Categorical | Categorical | - dt64[ns] | ndarray[M8ns] | DatetimeArray | DatetimeArray | - dt64[ns tz] | ndarray[M8ns] | DatetimeArray | DatetimeArray | - td64[ns] | ndarray[m8ns] | TimedeltaArray| TimedeltaArray | - Period | ndarray[obj] | PeriodArray | PeriodArray | - Nullable | EA | EA | EA | - - """ - return self._mgr.internal_values() - - @property - def _references(self) -> BlockValuesRefs | None: - if isinstance(self._mgr, SingleArrayManager): - return None - return self._mgr._block.refs - - # error: Decorated property not supported - @Appender(base.IndexOpsMixin.array.__doc__) # type: ignore[misc] - @property - def array(self) -> ExtensionArray: - return self._mgr.array_values() - - # ops - def ravel(self, order: str = "C") -> ArrayLike: - """ - Return the flattened underlying data as an ndarray or ExtensionArray. - - Returns - ------- - numpy.ndarray or ExtensionArray - Flattened data of the Series. - - See Also - -------- - numpy.ndarray.ravel : Return a flattened array. 
- - Examples - -------- - >>> s = pd.Series([1, 2, 3]) - >>> s.ravel() - array([1, 2, 3]) - """ - arr = self._values.ravel(order=order) - if isinstance(arr, np.ndarray) and using_copy_on_write(): - arr.flags.writeable = False - return arr - - def __len__(self) -> int: - """ - Return the length of the Series. - """ - return len(self._mgr) - - def view(self, dtype: Dtype | None = None) -> Series: - """ - Create a new view of the Series. - - This function will return a new Series with a view of the same - underlying values in memory, optionally reinterpreted with a new data - type. The new data type must preserve the same size in bytes as to not - cause index misalignment. - - Parameters - ---------- - dtype : data type - Data type object or one of their string representations. - - Returns - ------- - Series - A new Series object as a view of the same data in memory. - - See Also - -------- - numpy.ndarray.view : Equivalent numpy function to create a new view of - the same data in memory. - - Notes - ----- - Series are instantiated with ``dtype=float64`` by default. While - ``numpy.ndarray.view()`` will return a view with the same data type as - the original array, ``Series.view()`` (without specified dtype) - will try using ``float64`` and may fail if the original data type size - in bytes is not the same. - - Examples - -------- - >>> s = pd.Series([-2, -1, 0, 1, 2], dtype='int8') - >>> s - 0 -2 - 1 -1 - 2 0 - 3 1 - 4 2 - dtype: int8 - - The 8 bit signed integer representation of `-1` is `0b11111111`, but - the same bytes represent 255 if read as an 8 bit unsigned integer: - - >>> us = s.view('uint8') - >>> us - 0 254 - 1 255 - 2 0 - 3 1 - 4 2 - dtype: uint8 - - The views share the same underlying values: - - >>> us[0] = 128 - >>> s - 0 -128 - 1 -1 - 2 0 - 3 1 - 4 2 - dtype: int8 - """ - # self.array instead of self._values so we piggyback on NumpyExtensionArray - # implementation - res_values = self.array.view(dtype) - res_ser = self._constructor(res_values, index=self.index, copy=False) - if isinstance(res_ser._mgr, SingleBlockManager): - blk = res_ser._mgr._block - blk.refs = cast("BlockValuesRefs", self._references) - blk.refs.add_reference(blk) # type: ignore[arg-type] - return res_ser.__finalize__(self, method="view") - - # ---------------------------------------------------------------------- - # NDArray Compat - def __array__(self, dtype: npt.DTypeLike | None = None) -> np.ndarray: - """ - Return the values as a NumPy array. - - Users should not call this directly. Rather, it is invoked by - :func:`numpy.array` and :func:`numpy.asarray`. - - Parameters - ---------- - dtype : str or numpy.dtype, optional - The dtype to use for the resulting NumPy array. By default, - the dtype is inferred from the data. - - Returns - ------- - numpy.ndarray - The values in the series converted to a :class:`numpy.ndarray` - with the specified `dtype`. - - See Also - -------- - array : Create a new array from data. - Series.array : Zero-copy view to the array backing the Series. - Series.to_numpy : Series method for similar behavior. 
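-
-        Notes
-        -----
-        Under copy-on-write, when the requested ``dtype`` can be served as
-        a zero-copy view of the underlying values, the returned array is
-        flagged read-only (see the ``astype_is_view`` check below).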
- - Examples - -------- - >>> ser = pd.Series([1, 2, 3]) - >>> np.asarray(ser) - array([1, 2, 3]) - - For timezone-aware data, the timezones may be retained with - ``dtype='object'`` - - >>> tzser = pd.Series(pd.date_range('2000', periods=2, tz="CET")) - >>> np.asarray(tzser, dtype="object") - array([Timestamp('2000-01-01 00:00:00+0100', tz='CET'), - Timestamp('2000-01-02 00:00:00+0100', tz='CET')], - dtype=object) - - Or the values may be localized to UTC and the tzinfo discarded with - ``dtype='datetime64[ns]'`` - - >>> np.asarray(tzser, dtype="datetime64[ns]") # doctest: +ELLIPSIS - array(['1999-12-31T23:00:00.000000000', ...], - dtype='datetime64[ns]') - """ - values = self._values - arr = np.asarray(values, dtype=dtype) - if using_copy_on_write() and astype_is_view(values.dtype, arr.dtype): - arr = arr.view() - arr.flags.writeable = False - return arr - - # ---------------------------------------------------------------------- - - def __column_consortium_standard__(self, *, api_version: str | None = None) -> Any: - """ - Provide entry point to the Consortium DataFrame Standard API. - - This is developed and maintained outside of pandas. - Please report any issues to https://github.com/data-apis/dataframe-api-compat. - """ - dataframe_api_compat = import_optional_dependency("dataframe_api_compat") - return ( - dataframe_api_compat.pandas_standard.convert_to_standard_compliant_column( - self, api_version=api_version - ) - ) - - # ---------------------------------------------------------------------- - # Unary Methods - - # coercion - __float__ = _coerce_method(float) - __int__ = _coerce_method(int) - - # ---------------------------------------------------------------------- - - # indexers - @property - def axes(self) -> list[Index]: - """ - Return a list of the row axis labels. - """ - return [self.index] - - # ---------------------------------------------------------------------- - # Indexing Methods - - def _ixs(self, i: int, axis: AxisInt = 0) -> Any: - """ - Return the i-th value or values in the Series by location. - - Parameters - ---------- - i : int - - Returns - ------- - scalar (int) or Series (slice, sequence) - """ - return self._values[i] - - def _slice(self, slobj: slice, axis: AxisInt = 0) -> Series: - # axis kwarg is retained for compat with NDFrame method - # _slice is *always* positional - mgr = self._mgr.get_slice(slobj, axis=axis) - out = self._constructor(mgr, fastpath=True) - return out.__finalize__(self) - - def __getitem__(self, key): - check_dict_or_set_indexers(key) - key = com.apply_if_callable(key, self) - - if key is Ellipsis: - return self - - key_is_scalar = is_scalar(key) - if isinstance(key, (list, tuple)): - key = unpack_1tuple(key) - - if is_integer(key) and self.index._should_fallback_to_positional: - warnings.warn( - # GH#50617 - "Series.__getitem__ treating keys as positions is deprecated. " - "In a future version, integer keys will always be treated " - "as labels (consistent with DataFrame behavior). 
To access " - "a value by position, use `ser.iloc[pos]`", - FutureWarning, - stacklevel=find_stack_level(), - ) - return self._values[key] - - elif key_is_scalar: - return self._get_value(key) - - # Convert generator to list before going through hashable part - # (We will iterate through the generator there to check for slices) - if is_iterator(key): - key = list(key) - - if is_hashable(key) and not isinstance(key, slice): - # Otherwise index.get_value will raise InvalidIndexError - try: - # For labels that don't resolve as scalars like tuples and frozensets - result = self._get_value(key) - - return result - - except (KeyError, TypeError, InvalidIndexError): - # InvalidIndexError for e.g. generator - # see test_series_getitem_corner_generator - if isinstance(key, tuple) and isinstance(self.index, MultiIndex): - # We still have the corner case where a tuple is a key - # in the first level of our MultiIndex - return self._get_values_tuple(key) - - if isinstance(key, slice): - # Do slice check before somewhat-costly is_bool_indexer - return self._getitem_slice(key) - - if com.is_bool_indexer(key): - key = check_bool_indexer(self.index, key) - key = np.asarray(key, dtype=bool) - return self._get_rows_with_mask(key) - - return self._get_with(key) - - def _get_with(self, key): - # other: fancy integer or otherwise - if isinstance(key, ABCDataFrame): - raise TypeError( - "Indexing a Series with DataFrame is not " - "supported, use the appropriate DataFrame column" - ) - elif isinstance(key, tuple): - return self._get_values_tuple(key) - - elif not is_list_like(key): - # e.g. scalars that aren't recognized by lib.is_scalar, GH#32684 - return self.loc[key] - - if not isinstance(key, (list, np.ndarray, ExtensionArray, Series, Index)): - key = list(key) - - key_type = lib.infer_dtype(key, skipna=False) - - # Note: The key_type == "boolean" case should be caught by the - # com.is_bool_indexer check in __getitem__ - if key_type == "integer": - # We need to decide whether to treat this as a positional indexer - # (i.e. self.iloc) or label-based (i.e. self.loc) - if not self.index._should_fallback_to_positional: - return self.loc[key] - else: - warnings.warn( - # GH#50617 - "Series.__getitem__ treating keys as positions is deprecated. " - "In a future version, integer keys will always be treated " - "as labels (consistent with DataFrame behavior). To access " - "a value by position, use `ser.iloc[pos]`", - FutureWarning, - stacklevel=find_stack_level(), - ) - return self.iloc[key] - - # handle the dup indexing case GH#4246 - return self.loc[key] - - def _get_values_tuple(self, key: tuple): - # mpl hackaround - if com.any_none(*key): - # mpl compat if we look up e.g. 
ser[:, np.newaxis]; - # see tests.series.timeseries.test_mpl_compat_hack - # the asarray is needed to avoid returning a 2D DatetimeArray - result = np.asarray(self._values[key]) - disallow_ndim_indexing(result) - return result - - if not isinstance(self.index, MultiIndex): - raise KeyError("key of type tuple not found and not a MultiIndex") - - # If key is contained, would have returned by now - indexer, new_index = self.index.get_loc_level(key) - new_ser = self._constructor(self._values[indexer], index=new_index, copy=False) - if using_copy_on_write() and isinstance(indexer, slice): - new_ser._mgr.add_references(self._mgr) # type: ignore[arg-type] - return new_ser.__finalize__(self) - - def _get_rows_with_mask(self, indexer: npt.NDArray[np.bool_]) -> Series: - new_mgr = self._mgr.get_rows_with_mask(indexer) - return self._constructor_from_mgr(new_mgr, axes=new_mgr.axes).__finalize__(self) - - def _get_value(self, label, takeable: bool = False): - """ - Quickly retrieve single value at passed index label. - - Parameters - ---------- - label : object - takeable : interpret the index as indexers, default False - - Returns - ------- - scalar value - """ - if takeable: - return self._values[label] - - # Similar to Index.get_value, but we do not fall back to positional - loc = self.index.get_loc(label) - - if is_integer(loc): - return self._values[loc] - - if isinstance(self.index, MultiIndex): - mi = self.index - new_values = self._values[loc] - if len(new_values) == 1 and mi.nlevels == 1: - # If more than one level left, we can not return a scalar - return new_values[0] - - new_index = mi[loc] - new_index = maybe_droplevels(new_index, label) - new_ser = self._constructor( - new_values, index=new_index, name=self.name, copy=False - ) - if using_copy_on_write() and isinstance(loc, slice): - new_ser._mgr.add_references(self._mgr) # type: ignore[arg-type] - return new_ser.__finalize__(self) - - else: - return self.iloc[loc] - - def __setitem__(self, key, value) -> None: - if not PYPY and using_copy_on_write(): - if sys.getrefcount(self) <= 3: - warnings.warn( - _chained_assignment_msg, ChainedAssignmentError, stacklevel=2 - ) - - check_dict_or_set_indexers(key) - key = com.apply_if_callable(key, self) - cacher_needs_updating = self._check_is_chained_assignment_possible() - - if key is Ellipsis: - key = slice(None) - - if isinstance(key, slice): - indexer = self.index._convert_slice_indexer(key, kind="getitem") - return self._set_values(indexer, value) - - try: - self._set_with_engine(key, value) - except KeyError: - # We have a scalar (or for MultiIndex or object-dtype, scalar-like) - # key that is not present in self.index. - if is_integer(key): - if not self.index._should_fallback_to_positional: - # GH#33469 - self.loc[key] = value - else: - # positional setter - # can't use _mgr.setitem_inplace yet bc could have *both* - # KeyError and then ValueError, xref GH#45070 - warnings.warn( - # GH#50617 - "Series.__setitem__ treating keys as positions is deprecated. " - "In a future version, integer keys will always be treated " - "as labels (consistent with DataFrame behavior). 
To set " - "a value by position, use `ser.iloc[pos] = value`", - FutureWarning, - stacklevel=find_stack_level(), - ) - self._set_values(key, value) - else: - # GH#12862 adding a new key to the Series - self.loc[key] = value - - except (TypeError, ValueError, LossySetitemError): - # The key was OK, but we cannot set the value losslessly - indexer = self.index.get_loc(key) - self._set_values(indexer, value) - - except InvalidIndexError as err: - if isinstance(key, tuple) and not isinstance(self.index, MultiIndex): - # cases with MultiIndex don't get here bc they raise KeyError - # e.g. test_basic_getitem_setitem_corner - raise KeyError( - "key of type tuple not found and not a MultiIndex" - ) from err - - if com.is_bool_indexer(key): - key = check_bool_indexer(self.index, key) - key = np.asarray(key, dtype=bool) - - if ( - is_list_like(value) - and len(value) != len(self) - and not isinstance(value, Series) - and not is_object_dtype(self.dtype) - ): - # Series will be reindexed to have matching length inside - # _where call below - # GH#44265 - indexer = key.nonzero()[0] - self._set_values(indexer, value) - return - - # otherwise with listlike other we interpret series[mask] = other - # as series[mask] = other[mask] - try: - self._where(~key, value, inplace=True) - except InvalidIndexError: - # test_where_dups - self.iloc[key] = value - return - - else: - self._set_with(key, value) - - if cacher_needs_updating: - self._maybe_update_cacher(inplace=True) - - def _set_with_engine(self, key, value) -> None: - loc = self.index.get_loc(key) - - # this is equivalent to self._values[key] = value - self._mgr.setitem_inplace(loc, value) - - def _set_with(self, key, value) -> None: - # We got here via exception-handling off of InvalidIndexError, so - # key should always be listlike at this point. - assert not isinstance(key, tuple) - - if is_iterator(key): - # Without this, the call to infer_dtype will consume the generator - key = list(key) - - if not self.index._should_fallback_to_positional: - # Regardless of the key type, we're treating it as labels - self._set_labels(key, value) - - else: - # Note: key_type == "boolean" should not occur because that - # should be caught by the is_bool_indexer check in __setitem__ - key_type = lib.infer_dtype(key, skipna=False) - - if key_type == "integer": - warnings.warn( - # GH#50617 - "Series.__setitem__ treating keys as positions is deprecated. " - "In a future version, integer keys will always be treated " - "as labels (consistent with DataFrame behavior). To set " - "a value by position, use `ser.iloc[pos] = value`", - FutureWarning, - stacklevel=find_stack_level(), - ) - self._set_values(key, value) - else: - self._set_labels(key, value) - - def _set_labels(self, key, value) -> None: - key = com.asarray_tuplesafe(key) - indexer: np.ndarray = self.index.get_indexer(key) - mask = indexer == -1 - if mask.any(): - raise KeyError(f"{key[mask]} not in index") - self._set_values(indexer, value) - - def _set_values(self, key, value) -> None: - if isinstance(key, (Index, Series)): - key = key._values - - self._mgr = self._mgr.setitem(indexer=key, value=value) - self._maybe_update_cacher() - - def _set_value(self, label, value, takeable: bool = False) -> None: - """ - Quickly set single value at passed label. - - If label is not contained, a new object is created with the label - placed at the end of the result index. - - Parameters - ---------- - label : object - Partial indexing with MultiIndex not allowed. - value : object - Scalar value. 
- takeable : interpret the index as indexers, default False - """ - if not takeable: - try: - loc = self.index.get_loc(label) - except KeyError: - # set using a non-recursive method - self.loc[label] = value - return - else: - loc = label - - self._set_values(loc, value) - - # ---------------------------------------------------------------------- - # Lookup Caching - - @property - def _is_cached(self) -> bool: - """Return boolean indicating if self is cached or not.""" - return getattr(self, "_cacher", None) is not None - - def _get_cacher(self): - """return my cacher or None""" - cacher = getattr(self, "_cacher", None) - if cacher is not None: - cacher = cacher[1]() - return cacher - - def _reset_cacher(self) -> None: - """ - Reset the cacher. - """ - if hasattr(self, "_cacher"): - del self._cacher - - def _set_as_cached(self, item, cacher) -> None: - """ - Set the _cacher attribute on the calling object with a weakref to - cacher. - """ - if using_copy_on_write(): - return - self._cacher = (item, weakref.ref(cacher)) - - def _clear_item_cache(self) -> None: - # no-op for Series - pass - - def _check_is_chained_assignment_possible(self) -> bool: - """ - See NDFrame._check_is_chained_assignment_possible.__doc__ - """ - if self._is_view and self._is_cached: - ref = self._get_cacher() - if ref is not None and ref._is_mixed_type: - self._check_setitem_copy(t="referent", force=True) - return True - return super()._check_is_chained_assignment_possible() - - def _maybe_update_cacher( - self, clear: bool = False, verify_is_copy: bool = True, inplace: bool = False - ) -> None: - """ - See NDFrame._maybe_update_cacher.__doc__ - """ - # for CoW, we never want to update the parent DataFrame cache - # if the Series changed, but don't keep track of any cacher - if using_copy_on_write(): - return - cacher = getattr(self, "_cacher", None) - if cacher is not None: - ref: DataFrame = cacher[1]() - - # we are trying to reference a dead referent, hence - # a copy - if ref is None: - del self._cacher - elif len(self) == len(ref) and self.name in ref.columns: - # GH#42530 self.name must be in ref.columns - # to ensure column still in dataframe - # otherwise, either self or ref has swapped in new arrays - ref._maybe_cache_changed(cacher[0], self, inplace=inplace) - else: - # GH#33675 we have swapped in a new array, so parent - # reference to self is now invalid - ref._item_cache.pop(cacher[0], None) - - super()._maybe_update_cacher( - clear=clear, verify_is_copy=verify_is_copy, inplace=inplace - ) - - # ---------------------------------------------------------------------- - # Unsorted - - def repeat(self, repeats: int | Sequence[int], axis: None = None) -> Series: - """ - Repeat elements of a Series. - - Returns a new Series where each element of the current Series - is repeated consecutively a given number of times. - - Parameters - ---------- - repeats : int or array of ints - The number of repetitions for each element. This should be a - non-negative integer. Repeating 0 times will return an empty - Series. - axis : None - Unused. Parameter needed for compatibility with DataFrame. - - Returns - ------- - Series - Newly created Series with repeated elements. - - See Also - -------- - Index.repeat : Equivalent function for Index. - numpy.repeat : Similar method for :class:`numpy.ndarray`. 
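-
-        Notes
-        -----
-        Both the index and the values are repeated element-wise (via
-        ``Index.repeat`` and ``ndarray.repeat`` below), so passing a list
-        makes the result length equal to the sum of `repeats`.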
- - Examples - -------- - >>> s = pd.Series(['a', 'b', 'c']) - >>> s - 0 a - 1 b - 2 c - dtype: object - >>> s.repeat(2) - 0 a - 0 a - 1 b - 1 b - 2 c - 2 c - dtype: object - >>> s.repeat([1, 2, 3]) - 0 a - 1 b - 1 b - 2 c - 2 c - 2 c - dtype: object - """ - nv.validate_repeat((), {"axis": axis}) - new_index = self.index.repeat(repeats) - new_values = self._values.repeat(repeats) - return self._constructor(new_values, index=new_index, copy=False).__finalize__( - self, method="repeat" - ) - - @overload - def reset_index( - self, - level: IndexLabel = ..., - *, - drop: Literal[False] = ..., - name: Level = ..., - inplace: Literal[False] = ..., - allow_duplicates: bool = ..., - ) -> DataFrame: - ... - - @overload - def reset_index( - self, - level: IndexLabel = ..., - *, - drop: Literal[True], - name: Level = ..., - inplace: Literal[False] = ..., - allow_duplicates: bool = ..., - ) -> Series: - ... - - @overload - def reset_index( - self, - level: IndexLabel = ..., - *, - drop: bool = ..., - name: Level = ..., - inplace: Literal[True], - allow_duplicates: bool = ..., - ) -> None: - ... - - def reset_index( - self, - level: IndexLabel | None = None, - *, - drop: bool = False, - name: Level = lib.no_default, - inplace: bool = False, - allow_duplicates: bool = False, - ) -> DataFrame | Series | None: - """ - Generate a new DataFrame or Series with the index reset. - - This is useful when the index needs to be treated as a column, or - when the index is meaningless and needs to be reset to the default - before another operation. - - Parameters - ---------- - level : int, str, tuple, or list, default optional - For a Series with a MultiIndex, only remove the specified levels - from the index. Removes all levels by default. - drop : bool, default False - Just reset the index, without inserting it as a column in - the new DataFrame. - name : object, optional - The name to use for the column containing the original Series - values. Uses ``self.name`` by default. This argument is ignored - when `drop` is True. - inplace : bool, default False - Modify the Series in place (do not create a new object). - allow_duplicates : bool, default False - Allow duplicate column labels to be created. - - .. versionadded:: 1.5.0 - - Returns - ------- - Series or DataFrame or None - When `drop` is False (the default), a DataFrame is returned. - The newly created columns will come first in the DataFrame, - followed by the original Series values. - When `drop` is True, a `Series` is returned. - In either case, if ``inplace=True``, no value is returned. - - See Also - -------- - DataFrame.reset_index: Analogous function for DataFrame. - - Examples - -------- - >>> s = pd.Series([1, 2, 3, 4], name='foo', - ... index=pd.Index(['a', 'b', 'c', 'd'], name='idx')) - - Generate a DataFrame with default index. - - >>> s.reset_index() - idx foo - 0 a 1 - 1 b 2 - 2 c 3 - 3 d 4 - - To specify the name of the new column use `name`. - - >>> s.reset_index(name='values') - idx values - 0 a 1 - 1 b 2 - 2 c 3 - 3 d 4 - - To generate a new Series with the default set `drop` to True. - - >>> s.reset_index(drop=True) - 0 1 - 1 2 - 2 3 - 3 4 - Name: foo, dtype: int64 - - The `level` parameter is interesting for Series with a multi-level - index. - - >>> arrays = [np.array(['bar', 'bar', 'baz', 'baz']), - ... np.array(['one', 'two', 'one', 'two'])] - >>> s2 = pd.Series( - ... range(4), name='foo', - ... index=pd.MultiIndex.from_arrays(arrays, - ... names=['a', 'b'])) - - To remove a specific level from the Index, use `level`. 
- - >>> s2.reset_index(level='a') - a foo - b - one bar 0 - two bar 1 - one baz 2 - two baz 3 - - If `level` is not set, all levels are removed from the Index. - - >>> s2.reset_index() - a b foo - 0 bar one 0 - 1 bar two 1 - 2 baz one 2 - 3 baz two 3 - """ - inplace = validate_bool_kwarg(inplace, "inplace") - if drop: - new_index = default_index(len(self)) - if level is not None: - level_list: Sequence[Hashable] - if not isinstance(level, (tuple, list)): - level_list = [level] - else: - level_list = level - level_list = [self.index._get_level_number(lev) for lev in level_list] - if len(level_list) < self.index.nlevels: - new_index = self.index.droplevel(level_list) - - if inplace: - self.index = new_index - elif using_copy_on_write(): - new_ser = self.copy(deep=False) - new_ser.index = new_index - return new_ser.__finalize__(self, method="reset_index") - else: - return self._constructor( - self._values.copy(), index=new_index, copy=False - ).__finalize__(self, method="reset_index") - elif inplace: - raise TypeError( - "Cannot reset_index inplace on a Series to create a DataFrame" - ) - else: - if name is lib.no_default: - # For backwards compatibility, keep columns as [0] instead of - # [None] when self.name is None - if self.name is None: - name = 0 - else: - name = self.name - - df = self.to_frame(name) - return df.reset_index( - level=level, drop=drop, allow_duplicates=allow_duplicates - ) - return None - - # ---------------------------------------------------------------------- - # Rendering Methods - - def __repr__(self) -> str: - """ - Return a string representation for a particular Series. - """ - # pylint: disable=invalid-repr-returned - repr_params = fmt.get_series_repr_params() - return self.to_string(**repr_params) - - @overload - def to_string( - self, - buf: None = ..., - na_rep: str = ..., - float_format: str | None = ..., - header: bool = ..., - index: bool = ..., - length: bool = ..., - dtype=..., - name=..., - max_rows: int | None = ..., - min_rows: int | None = ..., - ) -> str: - ... - - @overload - def to_string( - self, - buf: FilePath | WriteBuffer[str], - na_rep: str = ..., - float_format: str | None = ..., - header: bool = ..., - index: bool = ..., - length: bool = ..., - dtype=..., - name=..., - max_rows: int | None = ..., - min_rows: int | None = ..., - ) -> None: - ... - - def to_string( - self, - buf: FilePath | WriteBuffer[str] | None = None, - na_rep: str = "NaN", - float_format: str | None = None, - header: bool = True, - index: bool = True, - length: bool = False, - dtype: bool = False, - name: bool = False, - max_rows: int | None = None, - min_rows: int | None = None, - ) -> str | None: - """ - Render a string representation of the Series. - - Parameters - ---------- - buf : StringIO-like, optional - Buffer to write to. - na_rep : str, optional - String representation of NaN to use, default 'NaN'. - float_format : one-parameter function, optional - Formatter function to apply to columns' elements if they are - floats, default None. - header : bool, default True - Add the Series header (index name). - index : bool, optional - Add index (row) labels, default True. - length : bool, default False - Add the Series length. - dtype : bool, default False - Add the Series dtype. - name : bool, default False - Add the Series name if not None. - max_rows : int, optional - Maximum number of rows to show before truncating. If None, show - all. - min_rows : int, optional - The number of rows to display in a truncated repr (when number - of rows is above `max_rows`). 
- - Returns - ------- - str or None - String representation of Series if ``buf=None``, otherwise None. - - Examples - -------- - >>> ser = pd.Series([1, 2, 3]).to_string() - >>> ser - '0 1\\n1 2\\n2 3' - """ - formatter = fmt.SeriesFormatter( - self, - name=name, - length=length, - header=header, - index=index, - dtype=dtype, - na_rep=na_rep, - float_format=float_format, - min_rows=min_rows, - max_rows=max_rows, - ) - result = formatter.to_string() - - # catch contract violations - if not isinstance(result, str): - raise AssertionError( - "result must be of type str, type " - f"of result is {repr(type(result).__name__)}" - ) - - if buf is None: - return result - else: - if hasattr(buf, "write"): - buf.write(result) - else: - with open(buf, "w", encoding="utf-8") as f: - f.write(result) - return None - - @doc( - klass=_shared_doc_kwargs["klass"], - storage_options=_shared_docs["storage_options"], - examples=dedent( - """Examples - -------- - >>> s = pd.Series(["elk", "pig", "dog", "quetzal"], name="animal") - >>> print(s.to_markdown()) - | | animal | - |---:|:---------| - | 0 | elk | - | 1 | pig | - | 2 | dog | - | 3 | quetzal | - - Output markdown with a tabulate option. - - >>> print(s.to_markdown(tablefmt="grid")) - +----+----------+ - | | animal | - +====+==========+ - | 0 | elk | - +----+----------+ - | 1 | pig | - +----+----------+ - | 2 | dog | - +----+----------+ - | 3 | quetzal | - +----+----------+""" - ), - ) - def to_markdown( - self, - buf: IO[str] | None = None, - mode: str = "wt", - index: bool = True, - storage_options: StorageOptions | None = None, - **kwargs, - ) -> str | None: - """ - Print {klass} in Markdown-friendly format. - - Parameters - ---------- - buf : str, Path or StringIO-like, optional, default None - Buffer to write to. If None, the output is returned as a string. - mode : str, optional - Mode in which file is opened, "wt" by default. - index : bool, optional, default True - Add index (row) labels. - - {storage_options} - - .. versionadded:: 1.2.0 - - **kwargs - These parameters will be passed to `tabulate \ - `_. - - Returns - ------- - str - {klass} in Markdown-friendly format. - - Notes - ----- - Requires the `tabulate `_ package. - - {examples} - """ - return self.to_frame().to_markdown( - buf, mode, index, storage_options=storage_options, **kwargs - ) - - # ---------------------------------------------------------------------- - - def items(self) -> Iterable[tuple[Hashable, Any]]: - """ - Lazily iterate over (index, value) tuples. - - This method returns an iterable tuple (index, value). This is - convenient if you want to create a lazy iterator. - - Returns - ------- - iterable - Iterable of tuples containing the (index, value) pairs from a - Series. - - See Also - -------- - DataFrame.items : Iterate over (column name, Series) pairs. - DataFrame.iterrows : Iterate over DataFrame rows as (index, Series) pairs. - - Examples - -------- - >>> s = pd.Series(['A', 'B', 'C']) - >>> for index, value in s.items(): - ... print(f"Index : {index}, Value : {value}") - Index : 0, Value : A - Index : 1, Value : B - Index : 2, Value : C - """ - return zip(iter(self.index), iter(self)) - - # ---------------------------------------------------------------------- - # Misc public methods - - def keys(self) -> Index: - """ - Return alias for index. - - Returns - ------- - Index - Index of the Series. 
- - Examples - -------- - >>> s = pd.Series([1, 2, 3], index=[0, 1, 2]) - >>> s.keys() - Index([0, 1, 2], dtype='int64') - """ - return self.index - - def to_dict(self, into: type[dict] = dict) -> dict: - """ - Convert Series to {label -> value} dict or dict-like object. - - Parameters - ---------- - into : class, default dict - The collections.abc.Mapping subclass to use as the return - object. Can be the actual class or an empty - instance of the mapping type you want. If you want a - collections.defaultdict, you must pass it initialized. - - Returns - ------- - collections.abc.Mapping - Key-value representation of Series. - - Examples - -------- - >>> s = pd.Series([1, 2, 3, 4]) - >>> s.to_dict() - {0: 1, 1: 2, 2: 3, 3: 4} - >>> from collections import OrderedDict, defaultdict - >>> s.to_dict(OrderedDict) - OrderedDict([(0, 1), (1, 2), (2, 3), (3, 4)]) - >>> dd = defaultdict(list) - >>> s.to_dict(dd) - defaultdict(<class 'list'>, {0: 1, 1: 2, 2: 3, 3: 4}) - """ - # GH16122 - into_c = com.standardize_mapping(into) - - if is_object_dtype(self.dtype) or isinstance(self.dtype, ExtensionDtype): - return into_c((k, maybe_box_native(v)) for k, v in self.items()) - else: - # Not an object dtype => all types will be the same so let the default - # indexer return native python type - return into_c(self.items()) - - def to_frame(self, name: Hashable = lib.no_default) -> DataFrame: - """ - Convert Series to DataFrame. - - Parameters - ---------- - name : object, optional - The passed name should substitute for the series name (if it has - one). - - Returns - ------- - DataFrame - DataFrame representation of Series. - - Examples - -------- - >>> s = pd.Series(["a", "b", "c"], - ... name="vals") - >>> s.to_frame() - vals - 0 a - 1 b - 2 c - """ - columns: Index - if name is lib.no_default: - name = self.name - if name is None: - # default to [0], same as we would get with DataFrame(self) - columns = default_index(1) - else: - columns = Index([name]) - else: - columns = Index([name]) - - mgr = self._mgr.to_2d_mgr(columns) - df = self._constructor_expanddim_from_mgr(mgr, axes=mgr.axes) - return df.__finalize__(self, method="to_frame") - - def _set_name( - self, name, inplace: bool = False, deep: bool | None = None - ) -> Series: - """ - Set the Series name. - - Parameters - ---------- - name : str - inplace : bool - Whether to modify `self` directly or return a copy. - deep : bool|None, default None - Whether to do a deep copy, a shallow copy, or Copy on Write(None) - """ - inplace = validate_bool_kwarg(inplace, "inplace") - ser = self if inplace else self.copy(deep and not using_copy_on_write()) - ser.name = name - return ser - - @Appender( - dedent( - """ - Examples - -------- - >>> ser = pd.Series([390., 350., 30., 20.], - ... index=['Falcon', 'Falcon', 'Parrot', 'Parrot'], - ... name="Max Speed") - >>> ser - Falcon 390.0 - Falcon 350.0 - Parrot 30.0 - Parrot 20.0 - Name: Max Speed, dtype: float64 - >>> ser.groupby(["a", "b", "a", "b"]).mean() - a 210.0 - b 185.0 - Name: Max Speed, dtype: float64 - >>> ser.groupby(level=0).mean() - Falcon 370.0 - Parrot 25.0 - Name: Max Speed, dtype: float64 - >>> ser.groupby(ser > 100).mean() - Max Speed - False 25.0 - True 370.0 - Name: Max Speed, dtype: float64 - - **Grouping by Indexes** - - We can groupby different levels of a hierarchical index - using the `level` parameter: - - >>> arrays = [['Falcon', 'Falcon', 'Parrot', 'Parrot'], - ... 
['Captive', 'Wild', 'Captive', 'Wild']] - >>> index = pd.MultiIndex.from_arrays(arrays, names=('Animal', 'Type')) - >>> ser = pd.Series([390., 350., 30., 20.], index=index, name="Max Speed") - >>> ser - Animal Type - Falcon Captive 390.0 - Wild 350.0 - Parrot Captive 30.0 - Wild 20.0 - Name: Max Speed, dtype: float64 - >>> ser.groupby(level=0).mean() - Animal - Falcon 370.0 - Parrot 25.0 - Name: Max Speed, dtype: float64 - >>> ser.groupby(level="Type").mean() - Type - Captive 210.0 - Wild 185.0 - Name: Max Speed, dtype: float64 - - We can also choose to include `NA` in group keys or not by defining - `dropna` parameter, the default setting is `True`. - - >>> ser = pd.Series([1, 2, 3, 3], index=["a", 'a', 'b', np.nan]) - >>> ser.groupby(level=0).sum() - a 3 - b 3 - dtype: int64 - - >>> ser.groupby(level=0, dropna=False).sum() - a 3 - b 3 - NaN 3 - dtype: int64 - - >>> arrays = ['Falcon', 'Falcon', 'Parrot', 'Parrot'] - >>> ser = pd.Series([390., 350., 30., 20.], index=arrays, name="Max Speed") - >>> ser.groupby(["a", "b", "a", np.nan]).mean() - a 210.0 - b 350.0 - Name: Max Speed, dtype: float64 - - >>> ser.groupby(["a", "b", "a", np.nan], dropna=False).mean() - a 210.0 - b 350.0 - NaN 20.0 - Name: Max Speed, dtype: float64 - """ - ) - ) - @Appender(_shared_docs["groupby"] % _shared_doc_kwargs) - def groupby( - self, - by=None, - axis: Axis = 0, - level: IndexLabel | None = None, - as_index: bool = True, - sort: bool = True, - group_keys: bool = True, - observed: bool | lib.NoDefault = lib.no_default, - dropna: bool = True, - ) -> SeriesGroupBy: - from pandas.core.groupby.generic import SeriesGroupBy - - if level is None and by is None: - raise TypeError("You have to supply one of 'by' and 'level'") - if not as_index: - raise TypeError("as_index=False only valid with DataFrame") - axis = self._get_axis_number(axis) - - return SeriesGroupBy( - obj=self, - keys=by, - axis=axis, - level=level, - as_index=as_index, - sort=sort, - group_keys=group_keys, - observed=observed, - dropna=dropna, - ) - - # ---------------------------------------------------------------------- - # Statistics, overridden ndarray methods - - # TODO: integrate bottleneck - def count(self): - """ - Return number of non-NA/null observations in the Series. - - Returns - ------- - int or Series (if level specified) - Number of non-null values in the Series. - - See Also - -------- - DataFrame.count : Count non-NA cells for each column or row. - - Examples - -------- - >>> s = pd.Series([0.0, 1.0, np.nan]) - >>> s.count() - 2 - """ - return notna(self._values).sum().astype("int64") - - def mode(self, dropna: bool = True) -> Series: - """ - Return the mode(s) of the Series. - - The mode is the value that appears most often. There can be multiple modes. - - Always returns Series even if only one value is returned. - - Parameters - ---------- - dropna : bool, default True - Don't consider counts of NaN/NaT. - - Returns - ------- - Series - Modes of the Series in sorted order. 
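# Editor's illustration (not from the original source) of the Series-specific
# validation in ``groupby`` above: one of ``by`` or ``level`` must be given,
# and ``as_index=False`` is rejected because a Series has no columns to
# reset the group keys into.
import pandas as pd

ser = pd.Series([390.0, 350.0, 30.0, 20.0],
                index=["Falcon", "Falcon", "Parrot", "Parrot"])
print(ser.groupby(level=0).mean())   # Falcon 370.0, Parrot 25.0
try:
    ser.groupby(level=0, as_index=False)
except TypeError as err:
    print(err)                       # as_index=False only valid with DataFrame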
- - Examples - -------- - >>> s = pd.Series([2, 4, 2, 2, 4, None]) - >>> s.mode() - 0 2.0 - dtype: float64 - - More than one mode: - - >>> s = pd.Series([2, 4, 8, 2, 4, None]) - >>> s.mode() - 0 2.0 - 1 4.0 - dtype: float64 - - With and without considering null value: - - >>> s = pd.Series([2, 4, None, None, 4, None]) - >>> s.mode(dropna=False) - 0 NaN - dtype: float64 - >>> s = pd.Series([2, 4, None, None, 4, None]) - >>> s.mode() - 0 4.0 - dtype: float64 - """ - # TODO: Add option for bins like value_counts() - values = self._values - if isinstance(values, np.ndarray): - res_values = algorithms.mode(values, dropna=dropna) - else: - res_values = values._mode(dropna=dropna) - - # Ensure index is type stable (should always use int index) - return self._constructor( - res_values, index=range(len(res_values)), name=self.name, copy=False - ).__finalize__(self, method="mode") - - def unique(self) -> ArrayLike: # pylint: disable=useless-parent-delegation - """ - Return unique values of Series object. - - Uniques are returned in order of appearance. Hash table-based unique, - therefore does NOT sort. - - Returns - ------- - ndarray or ExtensionArray - The unique values returned as a NumPy array. See Notes. - - See Also - -------- - Series.drop_duplicates : Return Series with duplicate values removed. - unique : Top-level unique method for any 1-d array-like object. - Index.unique : Return Index with unique values from an Index object. - - Notes - ----- - Returns the unique values as a NumPy array. In case of an - extension-array backed Series, a new - :class:`~api.extensions.ExtensionArray` of that type with just - the unique values is returned. This includes - - * Categorical - * Period - * Datetime with Timezone - * Datetime without Timezone - * Timedelta - * Interval - * Sparse - * IntegerNA - - See Examples section. - - Examples - -------- - >>> pd.Series([2, 1, 3, 3], name='A').unique() - array([2, 1, 3]) - - >>> pd.Series([pd.Timestamp('2016-01-01') for _ in range(3)]).unique() - <DatetimeArray> - ['2016-01-01 00:00:00'] - Length: 1, dtype: datetime64[ns] - - >>> pd.Series([pd.Timestamp('2016-01-01', tz='US/Eastern') - ... for _ in range(3)]).unique() - <DatetimeArray> - ['2016-01-01 00:00:00-05:00'] - Length: 1, dtype: datetime64[ns, US/Eastern] - - A Categorical will return categories in the order of - appearance and with the same dtype. - - >>> pd.Series(pd.Categorical(list('baabc'))).unique() - ['b', 'a', 'c'] - Categories (3, object): ['a', 'b', 'c'] - >>> pd.Series(pd.Categorical(list('baabc'), categories=list('abc'), - ... ordered=True)).unique() - ['b', 'a', 'c'] - Categories (3, object): ['a' < 'b' < 'c'] - """ - return super().unique() - - @overload - def drop_duplicates( - self, - *, - keep: DropKeep = ..., - inplace: Literal[False] = ..., - ignore_index: bool = ..., - ) -> Series: - ... - - @overload - def drop_duplicates( - self, *, keep: DropKeep = ..., inplace: Literal[True], ignore_index: bool = ... - ) -> None: - ... - - @overload - def drop_duplicates( - self, *, keep: DropKeep = ..., inplace: bool = ..., ignore_index: bool = ... - ) -> Series | None: - ... - - def drop_duplicates( - self, - *, - keep: DropKeep = "first", - inplace: bool = False, - ignore_index: bool = False, - ) -> Series | None: - """ - Return Series with duplicate values removed. - - Parameters - ---------- - keep : {'first', 'last', ``False``}, default 'first' - Method to handle dropping duplicates: - - - 'first' : Drop duplicates except for the first occurrence. - - 'last' : Drop duplicates except for the last occurrence. 
- - ``False`` : Drop all duplicates. - - inplace : bool, default ``False`` - If ``True``, performs operation inplace and returns None. - - ignore_index : bool, default ``False`` - If ``True``, the resulting axis will be labeled 0, 1, …, n - 1. - - .. versionadded:: 2.0.0 - - Returns - ------- - Series or None - Series with duplicates dropped or None if ``inplace=True``. - - See Also - -------- - Index.drop_duplicates : Equivalent method on Index. - DataFrame.drop_duplicates : Equivalent method on DataFrame. - Series.duplicated : Related method on Series, indicating duplicate - Series values. - Series.unique : Return unique values as an array. - - Examples - -------- - Generate a Series with duplicated entries. - - >>> s = pd.Series(['llama', 'cow', 'llama', 'beetle', 'llama', 'hippo'], - ... name='animal') - >>> s - 0 llama - 1 cow - 2 llama - 3 beetle - 4 llama - 5 hippo - Name: animal, dtype: object - - With the 'keep' parameter, the selection behaviour of duplicated values - can be changed. The value 'first' keeps the first occurrence for each - set of duplicated entries. The default value of keep is 'first'. - - >>> s.drop_duplicates() - 0 llama - 1 cow - 3 beetle - 5 hippo - Name: animal, dtype: object - - The value 'last' for parameter 'keep' keeps the last occurrence for - each set of duplicated entries. - - >>> s.drop_duplicates(keep='last') - 1 cow - 3 beetle - 4 llama - 5 hippo - Name: animal, dtype: object - - The value ``False`` for parameter 'keep' discards all sets of - duplicated entries. - - >>> s.drop_duplicates(keep=False) - 1 cow - 3 beetle - 5 hippo - Name: animal, dtype: object - """ - inplace = validate_bool_kwarg(inplace, "inplace") - result = super().drop_duplicates(keep=keep) - - if ignore_index: - result.index = default_index(len(result)) - - if inplace: - self._update_inplace(result) - return None - else: - return result - - def duplicated(self, keep: DropKeep = "first") -> Series: - """ - Indicate duplicate Series values. - - Duplicated values are indicated as ``True`` values in the resulting - Series. Either all duplicates, all except the first or all except the - last occurrence of duplicates can be indicated. - - Parameters - ---------- - keep : {'first', 'last', False}, default 'first' - Method to handle dropping duplicates: - - - 'first' : Mark duplicates as ``True`` except for the first - occurrence. - - 'last' : Mark duplicates as ``True`` except for the last - occurrence. - - ``False`` : Mark all duplicates as ``True``. - - Returns - ------- - Series[bool] - Series indicating whether each value has occurred in the - preceding values. - - See Also - -------- - Index.duplicated : Equivalent method on pandas.Index. - DataFrame.duplicated : Equivalent method on pandas.DataFrame. - Series.drop_duplicates : Remove duplicate values from Series. 
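# Editor's sketch (added for illustration; ``ignore_index`` here needs
# pandas >= 2.0) of the ``ignore_index`` branch in ``drop_duplicates`` above:
# surviving rows are relabeled with a fresh RangeIndex instead of keeping
# their original labels.
import pandas as pd

s = pd.Series(["llama", "cow", "llama", "beetle"])
print(s.drop_duplicates().index.tolist())                   # [0, 1, 3]
print(s.drop_duplicates(ignore_index=True).index.tolist())  # [0, 1, 2]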
- - Examples - -------- - By default, for each set of duplicated values, the first occurrence is - set on False and all others on True: - - >>> animals = pd.Series(['llama', 'cow', 'llama', 'beetle', 'llama']) - >>> animals.duplicated() - 0 False - 1 False - 2 True - 3 False - 4 True - dtype: bool - - which is equivalent to - - >>> animals.duplicated(keep='first') - 0 False - 1 False - 2 True - 3 False - 4 True - dtype: bool - - By using 'last', the last occurrence of each set of duplicated values - is set on False and all others on True: - - >>> animals.duplicated(keep='last') - 0 True - 1 False - 2 True - 3 False - 4 False - dtype: bool - - By setting keep on ``False``, all duplicates are True: - - >>> animals.duplicated(keep=False) - 0 True - 1 False - 2 True - 3 False - 4 True - dtype: bool - """ - res = self._duplicated(keep=keep) - result = self._constructor(res, index=self.index, copy=False) - return result.__finalize__(self, method="duplicated") - - def idxmin(self, axis: Axis = 0, skipna: bool = True, *args, **kwargs) -> Hashable: - """ - Return the row label of the minimum value. - - If multiple values equal the minimum, the first row label with that - value is returned. - - Parameters - ---------- - axis : {0 or 'index'} - Unused. Parameter needed for compatibility with DataFrame. - skipna : bool, default True - Exclude NA/null values. If the entire Series is NA, the result - will be NA. - *args, **kwargs - Additional arguments and keywords have no effect but might be - accepted for compatibility with NumPy. - - Returns - ------- - Index - Label of the minimum value. - - Raises - ------ - ValueError - If the Series is empty. - - See Also - -------- - numpy.argmin : Return indices of the minimum values - along the given axis. - DataFrame.idxmin : Return index of first occurrence of minimum - over requested axis. - Series.idxmax : Return index *label* of the first occurrence - of maximum of values. - - Notes - ----- - This method is the Series version of ``ndarray.argmin``. This method - returns the label of the minimum, while ``ndarray.argmin`` returns - the position. To get the position, use ``series.values.argmin()``. - - Examples - -------- - >>> s = pd.Series(data=[1, None, 4, 1], - ... index=['A', 'B', 'C', 'D']) - >>> s - A 1.0 - B NaN - C 4.0 - D 1.0 - dtype: float64 - - >>> s.idxmin() - 'A' - - If `skipna` is False and there is an NA value in the data, - the function returns ``nan``. - - >>> s.idxmin(skipna=False) - nan - """ - axis = self._get_axis_number(axis) - with warnings.catch_warnings(): - # TODO(3.0): this catching/filtering can be removed - # ignore warning produced by argmin since we will issue a different - # warning for idxmin - warnings.simplefilter("ignore") - i = self.argmin(axis, skipna, *args, **kwargs) - - if i == -1: - # GH#43587 give correct NA value for Index. - warnings.warn( - f"The behavior of {type(self).__name__}.idxmin with all-NA " - "values, or any-NA and skipna=False, is deprecated. In a future " - "version this will raise ValueError", - FutureWarning, - stacklevel=find_stack_level(), - ) - return self.index._na_value - return self.index[i] - - def idxmax(self, axis: Axis = 0, skipna: bool = True, *args, **kwargs) -> Hashable: - """ - Return the row label of the maximum value. - - If multiple values equal the maximum, the first row label with that - value is returned. - - Parameters - ---------- - axis : {0 or 'index'} - Unused. Parameter needed for compatibility with DataFrame. - skipna : bool, default True - Exclude NA/null values. 
If the entire Series is NA, the result - will be NA. - *args, **kwargs - Additional arguments and keywords have no effect but might be - accepted for compatibility with NumPy. - - Returns - ------- - Index - Label of the maximum value. - - Raises - ------ - ValueError - If the Series is empty. - - See Also - -------- - numpy.argmax : Return indices of the maximum values - along the given axis. - DataFrame.idxmax : Return index of first occurrence of maximum - over requested axis. - Series.idxmin : Return index *label* of the first occurrence - of minimum of values. - - Notes - ----- - This method is the Series version of ``ndarray.argmax``. This method - returns the label of the maximum, while ``ndarray.argmax`` returns - the position. To get the position, use ``series.values.argmax()``. - - Examples - -------- - >>> s = pd.Series(data=[1, None, 4, 3, 4], - ... index=['A', 'B', 'C', 'D', 'E']) - >>> s - A 1.0 - B NaN - C 4.0 - D 3.0 - E 4.0 - dtype: float64 - - >>> s.idxmax() - 'C' - - If `skipna` is False and there is an NA value in the data, - the function returns ``nan``. - - >>> s.idxmax(skipna=False) - nan - """ - axis = self._get_axis_number(axis) - with warnings.catch_warnings(): - # TODO(3.0): this catching/filtering can be removed - # ignore warning produced by argmax since we will issue a different - # warning for argmax - warnings.simplefilter("ignore") - i = self.argmax(axis, skipna, *args, **kwargs) - - if i == -1: - # GH#43587 give correct NA value for Index. - warnings.warn( - f"The behavior of {type(self).__name__}.idxmax with all-NA " - "values, or any-NA and skipna=False, is deprecated. In a future " - "version this will raise ValueError", - FutureWarning, - stacklevel=find_stack_level(), - ) - return self.index._na_value - return self.index[i] - - def round(self, decimals: int = 0, *args, **kwargs) -> Series: - """ - Round each value in a Series to the given number of decimals. - - Parameters - ---------- - decimals : int, default 0 - Number of decimal places to round to. If decimals is negative, - it specifies the number of positions to the left of the decimal point. - *args, **kwargs - Additional arguments and keywords have no effect but might be - accepted for compatibility with NumPy. - - Returns - ------- - Series - Rounded values of the Series. - - See Also - -------- - numpy.around : Round values of an np.array. - DataFrame.round : Round values of a DataFrame. - - Examples - -------- - >>> s = pd.Series([0.1, 1.3, 2.7]) - >>> s.round() - 0 0.0 - 1 1.0 - 2 3.0 - dtype: float64 - """ - nv.validate_round(args, kwargs) - result = self._values.round(decimals) - result = self._constructor(result, index=self.index, copy=False).__finalize__( - self, method="round" - ) - - return result - - @overload - def quantile( - self, q: float = ..., interpolation: QuantileInterpolation = ... - ) -> float: - ... - - @overload - def quantile( - self, - q: Sequence[float] | AnyArrayLike, - interpolation: QuantileInterpolation = ..., - ) -> Series: - ... - - @overload - def quantile( - self, - q: float | Sequence[float] | AnyArrayLike = ..., - interpolation: QuantileInterpolation = ..., - ) -> float | Series: - ... - - def quantile( - self, - q: float | Sequence[float] | AnyArrayLike = 0.5, - interpolation: QuantileInterpolation = "linear", - ) -> float | Series: - """ - Return value at the given quantile. - - Parameters - ---------- - q : float or array-like, default 0.5 (50% quantile) - The quantile(s) to compute, which can lie in range: 0 <= q <= 1. 
- interpolation : {'linear', 'lower', 'higher', 'midpoint', 'nearest'} - This optional parameter specifies the interpolation method to use, - when the desired quantile lies between two data points `i` and `j`: - - * linear: `i + (j - i) * fraction`, where `fraction` is the - fractional part of the index surrounded by `i` and `j`. - * lower: `i`. - * higher: `j`. - * nearest: `i` or `j` whichever is nearest. - * midpoint: (`i` + `j`) / 2. - - Returns - ------- - float or Series - If ``q`` is an array, a Series will be returned where the - index is ``q`` and the values are the quantiles, otherwise - a float will be returned. - - See Also - -------- - core.window.Rolling.quantile : Calculate the rolling quantile. - numpy.percentile : Returns the q-th percentile(s) of the array elements. - - Examples - -------- - >>> s = pd.Series([1, 2, 3, 4]) - >>> s.quantile(.5) - 2.5 - >>> s.quantile([.25, .5, .75]) - 0.25 1.75 - 0.50 2.50 - 0.75 3.25 - dtype: float64 - """ - validate_percentile(q) - - # We dispatch to DataFrame so that core.internals only has to worry - # about 2D cases. - df = self.to_frame() - - result = df.quantile(q=q, interpolation=interpolation, numeric_only=False) - if result.ndim == 2: - result = result.iloc[:, 0] - - if is_list_like(q): - result.name = self.name - idx = Index(q, dtype=np.float64) - return self._constructor(result, index=idx, name=self.name) - else: - # scalar - return result.iloc[0] - - def corr( - self, - other: Series, - method: CorrelationMethod = "pearson", - min_periods: int | None = None, - ) -> float: - """ - Compute correlation with `other` Series, excluding missing values. - - The two `Series` objects are not required to be the same length and will be - aligned internally before the correlation function is applied. - - Parameters - ---------- - other : Series - Series with which to compute the correlation. - method : {'pearson', 'kendall', 'spearman'} or callable - Method used to compute correlation: - - - pearson : Standard correlation coefficient - - kendall : Kendall Tau correlation coefficient - - spearman : Spearman rank correlation - - callable: Callable with input two 1d ndarrays and returning a float. - - .. warning:: - Note that the returned matrix from corr will have 1 along the - diagonals and will be symmetric regardless of the callable's - behavior. - min_periods : int, optional - Minimum number of observations needed to have a valid result. - - Returns - ------- - float - Correlation with other. - - See Also - -------- - DataFrame.corr : Compute pairwise correlation between columns. - DataFrame.corrwith : Compute pairwise correlation with another - DataFrame or Series. - - Notes - ----- - Pearson, Kendall and Spearman correlation are currently computed using pairwise complete observations. - - * `Pearson correlation coefficient `_ - * `Kendall rank correlation coefficient `_ - * `Spearman's rank correlation coefficient `_ - - Automatic data alignment: as with all pandas operations, automatic data alignment is performed for this method. - ``corr()`` automatically considers values with matching indices. - - Examples - -------- - >>> def histogram_intersection(a, b): - ... v = np.minimum(a, b).sum().round(decimals=1) - ... 
return v - >>> s1 = pd.Series([.2, .0, .6, .2]) - >>> s2 = pd.Series([.3, .6, .0, .1]) - >>> s1.corr(s2, method=histogram_intersection) - 0.3 - - Pandas auto-aligns the values with matching indices - - >>> s1 = pd.Series([1, 2, 3], index=[0, 1, 2]) - >>> s2 = pd.Series([1, 2, 3], index=[2, 1, 0]) - >>> s1.corr(s2) - -1.0 - """ # noqa: E501 - this, other = self.align(other, join="inner", copy=False) - if len(this) == 0: - return np.nan - - this_values = this.to_numpy(dtype=float, na_value=np.nan, copy=False) - other_values = other.to_numpy(dtype=float, na_value=np.nan, copy=False) - - if method in ["pearson", "spearman", "kendall"] or callable(method): - return nanops.nancorr( - this_values, other_values, method=method, min_periods=min_periods - ) - - raise ValueError( - "method must be either 'pearson', " - "'spearman', 'kendall', or a callable, " - f"'{method}' was supplied" - ) - - def cov( - self, - other: Series, - min_periods: int | None = None, - ddof: int | None = 1, - ) -> float: - """ - Compute covariance with Series, excluding missing values. - - The two `Series` objects are not required to be the same length and - will be aligned internally before the covariance is calculated. - - Parameters - ---------- - other : Series - Series with which to compute the covariance. - min_periods : int, optional - Minimum number of observations needed to have a valid result. - ddof : int, default 1 - Delta degrees of freedom. The divisor used in calculations - is ``N - ddof``, where ``N`` represents the number of elements. - - Returns - ------- - float - Covariance between Series and other normalized by N-1 - (unbiased estimator). - - See Also - -------- - DataFrame.cov : Compute pairwise covariance of columns. - - Examples - -------- - >>> s1 = pd.Series([0.90010907, 0.13484424, 0.62036035]) - >>> s2 = pd.Series([0.12528585, 0.26962463, 0.51111198]) - >>> s1.cov(s2) - -0.01685762652715874 - """ - this, other = self.align(other, join="inner", copy=False) - if len(this) == 0: - return np.nan - this_values = this.to_numpy(dtype=float, na_value=np.nan, copy=False) - other_values = other.to_numpy(dtype=float, na_value=np.nan, copy=False) - return nanops.nancov( - this_values, other_values, min_periods=min_periods, ddof=ddof - ) - - @doc( - klass="Series", - extra_params="", - other_klass="DataFrame", - examples=dedent( - """ - Difference with previous row - - >>> s = pd.Series([1, 1, 2, 3, 5, 8]) - >>> s.diff() - 0 NaN - 1 0.0 - 2 1.0 - 3 1.0 - 4 2.0 - 5 3.0 - dtype: float64 - - Difference with 3rd previous row - - >>> s.diff(periods=3) - 0 NaN - 1 NaN - 2 NaN - 3 2.0 - 4 4.0 - 5 6.0 - dtype: float64 - - Difference with following row - - >>> s.diff(periods=-1) - 0 0.0 - 1 -1.0 - 2 -1.0 - 3 -2.0 - 4 -3.0 - 5 NaN - dtype: float64 - - Overflow in input dtype - - >>> s = pd.Series([1, 0], dtype=np.uint8) - >>> s.diff() - 0 NaN - 1 255.0 - dtype: float64""" - ), - ) - def diff(self, periods: int = 1) -> Series: - """ - First discrete difference of element. - - Calculates the difference of a {klass} element compared with another - element in the {klass} (default is element in previous row). - - Parameters - ---------- - periods : int, default 1 - Periods to shift for calculating difference, accepts negative - values. - {extra_params} - Returns - ------- - {klass} - First differences of the Series. - - See Also - -------- - {klass}.pct_change: Percent change over given number of periods. - {klass}.shift: Shift index by desired number of periods with an - optional time freq. 
- {other_klass}.diff: First discrete difference of object. - - Notes - ----- - For boolean dtypes, this uses :meth:`operator.xor` rather than - :meth:`operator.sub`. - The result is calculated according to current dtype in {klass}, - however dtype of the result is always float64. - - Examples - -------- - {examples} - """ - result = algorithms.diff(self._values, periods) - return self._constructor(result, index=self.index, copy=False).__finalize__( - self, method="diff" - ) - - def autocorr(self, lag: int = 1) -> float: - """ - Compute the lag-N autocorrelation. - - This method computes the Pearson correlation between - the Series and its shifted self. - - Parameters - ---------- - lag : int, default 1 - Number of lags to apply before performing autocorrelation. - - Returns - ------- - float - The Pearson correlation between self and self.shift(lag). - - See Also - -------- - Series.corr : Compute the correlation between two Series. - Series.shift : Shift index by desired number of periods. - DataFrame.corr : Compute pairwise correlation of columns. - DataFrame.corrwith : Compute pairwise correlation between rows or - columns of two DataFrame objects. - - Notes - ----- - If the Pearson correlation is not well defined return 'NaN'. - - Examples - -------- - >>> s = pd.Series([0.25, 0.5, 0.2, -0.05]) - >>> s.autocorr() # doctest: +ELLIPSIS - 0.10355... - >>> s.autocorr(lag=2) # doctest: +ELLIPSIS - -0.99999... - - If the Pearson correlation is not well defined, then 'NaN' is returned. - - >>> s = pd.Series([1, 0, 0, 0]) - >>> s.autocorr() - nan - """ - return self.corr(cast(Series, self.shift(lag))) - - def dot(self, other: AnyArrayLike) -> Series | np.ndarray: - """ - Compute the dot product between the Series and the columns of other. - - This method computes the dot product between the Series and another - one, or the Series and each columns of a DataFrame, or the Series and - each columns of an array. - - It can also be called using `self @ other`. - - Parameters - ---------- - other : Series, DataFrame or array-like - The other object to compute the dot product with its columns. - - Returns - ------- - scalar, Series or numpy.ndarray - Return the dot product of the Series and other if other is a - Series, the Series of the dot product of Series and each rows of - other if other is a DataFrame or a numpy.ndarray between the Series - and each columns of the numpy array. - - See Also - -------- - DataFrame.dot: Compute the matrix product with the DataFrame. - Series.mul: Multiplication of series and other, element-wise. - - Notes - ----- - The Series and other has to share the same index if other is a Series - or a DataFrame. 
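# Editor's sketch (not part of the original diff) of the alignment check in
# ``dot`` above: Series operands must share the same index, otherwise the
# union-length comparison raises ValueError("matrices are not aligned").
import pandas as pd

s = pd.Series([1, 2, 3], index=["a", "b", "c"])
aligned = pd.Series([4, 5, 6], index=["a", "b", "c"])
print(s @ aligned)       # 32; __matmul__ delegates to Series.dot

misaligned = pd.Series([4, 5, 6], index=["a", "b", "z"])
try:
    s.dot(misaligned)
except ValueError as err:
    print(err)           # matrices are not aligned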
- - Examples - -------- - >>> s = pd.Series([0, 1, 2, 3]) - >>> other = pd.Series([-1, 2, -3, 4]) - >>> s.dot(other) - 8 - >>> s @ other - 8 - >>> df = pd.DataFrame([[0, 1], [-2, 3], [4, -5], [6, 7]]) - >>> s.dot(df) - 0 24 - 1 14 - dtype: int64 - >>> arr = np.array([[0, 1], [-2, 3], [4, -5], [6, 7]]) - >>> s.dot(arr) - array([24, 14]) - """ - if isinstance(other, (Series, ABCDataFrame)): - common = self.index.union(other.index) - if len(common) > len(self.index) or len(common) > len(other.index): - raise ValueError("matrices are not aligned") - - left = self.reindex(index=common, copy=False) - right = other.reindex(index=common, copy=False) - lvals = left.values - rvals = right.values - else: - lvals = self.values - rvals = np.asarray(other) - if lvals.shape[0] != rvals.shape[0]: - raise Exception( - f"Dot product shape mismatch, {lvals.shape} vs {rvals.shape}" - ) - - if isinstance(other, ABCDataFrame): - return self._constructor( - np.dot(lvals, rvals), index=other.columns, copy=False - ).__finalize__(self, method="dot") - elif isinstance(other, Series): - return np.dot(lvals, rvals) - elif isinstance(rvals, np.ndarray): - return np.dot(lvals, rvals) - else: # pragma: no cover - raise TypeError(f"unsupported type: {type(other)}") - - def __matmul__(self, other): - """ - Matrix multiplication using binary `@` operator. - """ - return self.dot(other) - - def __rmatmul__(self, other): - """ - Matrix multiplication using binary `@` operator. - """ - return self.dot(np.transpose(other)) - - @doc(base.IndexOpsMixin.searchsorted, klass="Series") - # Signature of "searchsorted" incompatible with supertype "IndexOpsMixin" - def searchsorted( # type: ignore[override] - self, - value: NumpyValueArrayLike | ExtensionArray, - side: Literal["left", "right"] = "left", - sorter: NumpySorter | None = None, - ) -> npt.NDArray[np.intp] | np.intp: - return base.IndexOpsMixin.searchsorted(self, value, side=side, sorter=sorter) - - # ------------------------------------------------------------------- - # Combination - - def _append( - self, to_append, ignore_index: bool = False, verify_integrity: bool = False - ): - from pandas.core.reshape.concat import concat - - if isinstance(to_append, (list, tuple)): - to_concat = [self] - to_concat.extend(to_append) - else: - to_concat = [self, to_append] - if any(isinstance(x, (ABCDataFrame,)) for x in to_concat[1:]): - msg = "to_append should be a Series or list/tuple of Series, got DataFrame" - raise TypeError(msg) - return concat( - to_concat, ignore_index=ignore_index, verify_integrity=verify_integrity - ) - - @doc( - _shared_docs["compare"], - dedent( - """ - Returns - ------- - Series or DataFrame - If axis is 0 or 'index' the result will be a Series. - The resulting index will be a MultiIndex with 'self' and 'other' - stacked alternately at the inner level. - - If axis is 1 or 'columns' the result will be a DataFrame. - It will have two columns namely 'self' and 'other'. - - See Also - -------- - DataFrame.compare : Compare with another DataFrame and show differences. - - Notes - ----- - Matching NaNs will not appear as a difference. 
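# Editor's illustration (not in the original file) of the note above:
# positions where both Series hold NaN are treated as equal by ``compare``
# and therefore do not appear in the result.
import numpy as np
import pandas as pd

s1 = pd.Series([1.0, np.nan, 3.0])
s2 = pd.Series([1.0, np.nan, 4.0])
print(s1.compare(s2))    # only row 2 is reported; the matching NaNs are not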
- - Examples - -------- - >>> s1 = pd.Series(["a", "b", "c", "d", "e"]) - >>> s2 = pd.Series(["a", "a", "c", "b", "e"]) - - Align the differences on columns - - >>> s1.compare(s2) - self other - 1 b a - 3 d b - - Stack the differences on indices - - >>> s1.compare(s2, align_axis=0) - 1 self b - other a - 3 self d - other b - dtype: object - - Keep all original rows - - >>> s1.compare(s2, keep_shape=True) - self other - 0 NaN NaN - 1 b a - 2 NaN NaN - 3 d b - 4 NaN NaN - - Keep all original rows and also all original values - - >>> s1.compare(s2, keep_shape=True, keep_equal=True) - self other - 0 a a - 1 b a - 2 c c - 3 d b - 4 e e - """ - ), - klass=_shared_doc_kwargs["klass"], - ) - def compare( - self, - other: Series, - align_axis: Axis = 1, - keep_shape: bool = False, - keep_equal: bool = False, - result_names: Suffixes = ("self", "other"), - ) -> DataFrame | Series: - return super().compare( - other=other, - align_axis=align_axis, - keep_shape=keep_shape, - keep_equal=keep_equal, - result_names=result_names, - ) - - def combine( - self, - other: Series | Hashable, - func: Callable[[Hashable, Hashable], Hashable], - fill_value: Hashable | None = None, - ) -> Series: - """ - Combine the Series with a Series or scalar according to `func`. - - Combine the Series and `other` using `func` to perform elementwise - selection for combined Series. - `fill_value` is assumed when value is missing at some index - from one of the two objects being combined. - - Parameters - ---------- - other : Series or scalar - The value(s) to be combined with the `Series`. - func : function - Function that takes two scalars as inputs and returns an element. - fill_value : scalar, optional - The value to assume when an index is missing from - one Series or the other. The default specifies to use the - appropriate NaN value for the underlying dtype of the Series. - - Returns - ------- - Series - The result of combining the Series with the other object. - - See Also - -------- - Series.combine_first : Combine Series values, choosing the calling - Series' values first. - - Examples - -------- - Consider 2 Datasets ``s1`` and ``s2`` containing - highest clocked speeds of different birds. - - >>> s1 = pd.Series({'falcon': 330.0, 'eagle': 160.0}) - >>> s1 - falcon 330.0 - eagle 160.0 - dtype: float64 - >>> s2 = pd.Series({'falcon': 345.0, 'eagle': 200.0, 'duck': 30.0}) - >>> s2 - falcon 345.0 - eagle 200.0 - duck 30.0 - dtype: float64 - - Now, to combine the two datasets and view the highest speeds - of the birds across the two datasets - - >>> s1.combine(s2, max) - duck NaN - eagle 200.0 - falcon 345.0 - dtype: float64 - - In the previous example, the resulting value for duck is missing, - because the maximum of a NaN and a float is a NaN. - So, in the example, we set ``fill_value=0``, - so the maximum value returned will be the value from some dataset. 
- - >>> s1.combine(s2, max, fill_value=0) - duck 30.0 - eagle 200.0 - falcon 345.0 - dtype: float64 - """ - if fill_value is None: - fill_value = na_value_for_dtype(self.dtype, compat=False) - - if isinstance(other, Series): - # If other is a Series, result is based on union of Series, - # so do this element by element - new_index = self.index.union(other.index) - new_name = ops.get_op_result_name(self, other) - new_values = np.empty(len(new_index), dtype=object) - with np.errstate(all="ignore"): - for i, idx in enumerate(new_index): - lv = self.get(idx, fill_value) - rv = other.get(idx, fill_value) - new_values[i] = func(lv, rv) - else: - # Assume that other is a scalar, so apply the function for - # each element in the Series - new_index = self.index - new_values = np.empty(len(new_index), dtype=object) - with np.errstate(all="ignore"): - new_values[:] = [func(lv, other) for lv in self._values] - new_name = self.name - - # try_float=False is to match agg_series - npvalues = lib.maybe_convert_objects(new_values, try_float=False) - res_values = maybe_cast_pointwise_result(npvalues, self.dtype, same_dtype=False) - return self._constructor(res_values, index=new_index, name=new_name, copy=False) - - def combine_first(self, other) -> Series: - """ - Update null elements with value in the same location in 'other'. - - Combine two Series objects by filling null values in one Series with - non-null values from the other Series. Result index will be the union - of the two indexes. - - Parameters - ---------- - other : Series - The value(s) to be used for filling null values. - - Returns - ------- - Series - The result of combining the provided Series with the other object. - - See Also - -------- - Series.combine : Perform element-wise operation on two Series - using a given function. - - Examples - -------- - >>> s1 = pd.Series([1, np.nan]) - >>> s2 = pd.Series([3, 4, 5]) - >>> s1.combine_first(s2) - 0 1.0 - 1 4.0 - 2 5.0 - dtype: float64 - - Null values still persist if the location of that null value - does not exist in `other` - - >>> s1 = pd.Series({'falcon': np.nan, 'eagle': 160.0}) - >>> s2 = pd.Series({'eagle': 200.0, 'duck': 30.0}) - >>> s1.combine_first(s2) - duck 30.0 - eagle 160.0 - falcon NaN - dtype: float64 - """ - from pandas.core.reshape.concat import concat - - new_index = self.index.union(other.index) - - this = self - # identify the index subset to keep for each series - keep_other = other.index.difference(this.index[notna(this)]) - keep_this = this.index.difference(keep_other) - - this = this.reindex(keep_this, copy=False) - other = other.reindex(keep_other, copy=False) - - if this.dtype.kind == "M" and other.dtype.kind != "M": - other = to_datetime(other) - combined = concat([this, other]) - combined = combined.reindex(new_index, copy=False) - return combined.__finalize__(self, method="combine_first") - - def update(self, other: Series | Sequence | Mapping) -> None: - """ - Modify Series in place using values from passed Series. - - Uses non-NA values from passed Series to make updates. Aligns - on index. 
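# Editor's sketch (not from the original source) of the datetime branch in
# ``combine_first`` above: when the calling Series is datetime64 and the
# other is not, the other operand is coerced with ``to_datetime`` before the
# two pieces are concatenated and reindexed to the union of both indexes.
import pandas as pd

s1 = pd.Series([pd.Timestamp("2023-01-01"), pd.NaT])
s2 = pd.Series(["2000-01-01", "2023-01-02"])
print(s1.combine_first(s2))   # both rows datetime64[ns]; the NaT is filled from s2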
- - Parameters - ---------- - other : Series, or object coercible into Series - - Examples - -------- - >>> s = pd.Series([1, 2, 3]) - >>> s.update(pd.Series([4, 5, 6])) - >>> s - 0 4 - 1 5 - 2 6 - dtype: int64 - - >>> s = pd.Series(['a', 'b', 'c']) - >>> s.update(pd.Series(['d', 'e'], index=[0, 2])) - >>> s - 0 d - 1 b - 2 e - dtype: object - - >>> s = pd.Series([1, 2, 3]) - >>> s.update(pd.Series([4, 5, 6, 7, 8])) - >>> s - 0 4 - 1 5 - 2 6 - dtype: int64 - - If ``other`` contains NaNs the corresponding values are not updated - in the original Series. - - >>> s = pd.Series([1, 2, 3]) - >>> s.update(pd.Series([4, np.nan, 6])) - >>> s - 0 4 - 1 2 - 2 6 - dtype: int64 - - ``other`` can also be a non-Series object type - that is coercible into a Series - - >>> s = pd.Series([1, 2, 3]) - >>> s.update([4, np.nan, 6]) - >>> s - 0 4 - 1 2 - 2 6 - dtype: int64 - - >>> s = pd.Series([1, 2, 3]) - >>> s.update({1: 9}) - >>> s - 0 1 - 1 9 - 2 3 - dtype: int64 - """ - if not PYPY and using_copy_on_write(): - if sys.getrefcount(self) <= REF_COUNT: - warnings.warn( - _chained_assignment_method_msg, - ChainedAssignmentError, - stacklevel=2, - ) - - if not isinstance(other, Series): - other = Series(other) - - other = other.reindex_like(self) - mask = notna(other) - - self._mgr = self._mgr.putmask(mask=mask, new=other) - self._maybe_update_cacher() - - # ---------------------------------------------------------------------- - # Reindexing, sorting - - @overload - def sort_values( - self, - *, - axis: Axis = ..., - ascending: bool | Sequence[bool] = ..., - inplace: Literal[False] = ..., - kind: SortKind = ..., - na_position: NaPosition = ..., - ignore_index: bool = ..., - key: ValueKeyFunc = ..., - ) -> Series: - ... - - @overload - def sort_values( - self, - *, - axis: Axis = ..., - ascending: bool | Sequence[bool] = ..., - inplace: Literal[True], - kind: SortKind = ..., - na_position: NaPosition = ..., - ignore_index: bool = ..., - key: ValueKeyFunc = ..., - ) -> None: - ... - - @overload - def sort_values( - self, - *, - axis: Axis = ..., - ascending: bool | Sequence[bool] = ..., - inplace: bool = ..., - kind: SortKind = ..., - na_position: NaPosition = ..., - ignore_index: bool = ..., - key: ValueKeyFunc = ..., - ) -> Series | None: - ... - - def sort_values( - self, - *, - axis: Axis = 0, - ascending: bool | Sequence[bool] = True, - inplace: bool = False, - kind: SortKind = "quicksort", - na_position: NaPosition = "last", - ignore_index: bool = False, - key: ValueKeyFunc | None = None, - ) -> Series | None: - """ - Sort by the values. - - Sort a Series in ascending or descending order by some - criterion. - - Parameters - ---------- - axis : {0 or 'index'} - Unused. Parameter needed for compatibility with DataFrame. - ascending : bool or list of bools, default True - If True, sort values in ascending order, otherwise descending. - inplace : bool, default False - If True, perform operation in-place. - kind : {'quicksort', 'mergesort', 'heapsort', 'stable'}, default 'quicksort' - Choice of sorting algorithm. See also :func:`numpy.sort` for more - information. 'mergesort' and 'stable' are the only stable algorithms. - na_position : {'first' or 'last'}, default 'last' - Argument 'first' puts NaNs at the beginning, 'last' puts NaNs at - the end. - ignore_index : bool, default False - If True, the resulting axis will be labeled 0, 1, …, n - 1. - key : callable, optional - If not None, apply the key function to the series values - before sorting. 
This is similar to the `key` argument in the - builtin :meth:`sorted` function, with the notable difference that - this `key` function should be *vectorized*. It should expect a - ``Series`` and return an array-like. - - Returns - ------- - Series or None - Series ordered by values or None if ``inplace=True``. - - See Also - -------- - Series.sort_index : Sort by the Series indices. - DataFrame.sort_values : Sort DataFrame by the values along either axis. - DataFrame.sort_index : Sort DataFrame by indices. - - Examples - -------- - >>> s = pd.Series([np.nan, 1, 3, 10, 5]) - >>> s - 0 NaN - 1 1.0 - 2 3.0 - 3 10.0 - 4 5.0 - dtype: float64 - - Sort values ascending order (default behaviour) - - >>> s.sort_values(ascending=True) - 1 1.0 - 2 3.0 - 4 5.0 - 3 10.0 - 0 NaN - dtype: float64 - - Sort values descending order - - >>> s.sort_values(ascending=False) - 3 10.0 - 4 5.0 - 2 3.0 - 1 1.0 - 0 NaN - dtype: float64 - - Sort values putting NAs first - - >>> s.sort_values(na_position='first') - 0 NaN - 1 1.0 - 2 3.0 - 4 5.0 - 3 10.0 - dtype: float64 - - Sort a series of strings - - >>> s = pd.Series(['z', 'b', 'd', 'a', 'c']) - >>> s - 0 z - 1 b - 2 d - 3 a - 4 c - dtype: object - - >>> s.sort_values() - 3 a - 1 b - 4 c - 2 d - 0 z - dtype: object - - Sort using a key function. Your `key` function will be - given the ``Series`` of values and should return an array-like. - - >>> s = pd.Series(['a', 'B', 'c', 'D', 'e']) - >>> s.sort_values() - 1 B - 3 D - 0 a - 2 c - 4 e - dtype: object - >>> s.sort_values(key=lambda x: x.str.lower()) - 0 a - 1 B - 2 c - 3 D - 4 e - dtype: object - - NumPy ufuncs work well here. For example, we can - sort by the ``sin`` of the value - - >>> s = pd.Series([-4, -2, 0, 2, 4]) - >>> s.sort_values(key=np.sin) - 1 -2 - 4 4 - 2 0 - 0 -4 - 3 2 - dtype: int64 - - More complicated user-defined functions can be used, - as long as they expect a Series and return an array-like - - >>> s.sort_values(key=lambda x: (np.tan(x.cumsum()))) - 0 -4 - 3 2 - 4 4 - 1 -2 - 2 0 - dtype: int64 - """ - inplace = validate_bool_kwarg(inplace, "inplace") - # Validate the axis parameter - self._get_axis_number(axis) - - # GH 5856/5853 - if inplace and self._is_cached: - raise ValueError( - "This Series is a view of some other array, to " - "sort in-place you must create a copy" - ) - - if is_list_like(ascending): - ascending = cast(Sequence[bool], ascending) - if len(ascending) != 1: - raise ValueError( - f"Length of ascending ({len(ascending)}) must be 1 for Series" - ) - ascending = ascending[0] - - ascending = validate_ascending(ascending) - - if na_position not in ["first", "last"]: - raise ValueError(f"invalid na_position: {na_position}") - - # GH 35922. 
Make sorting stable by leveraging nargsort - if key: - values_to_sort = cast(Series, ensure_key_mapped(self, key))._values - else: - values_to_sort = self._values - sorted_index = nargsort(values_to_sort, kind, bool(ascending), na_position) - - if is_range_indexer(sorted_index, len(sorted_index)): - if inplace: - return self._update_inplace(self) - return self.copy(deep=None) - - result = self._constructor( - self._values[sorted_index], index=self.index[sorted_index], copy=False - ) - - if ignore_index: - result.index = default_index(len(sorted_index)) - - if not inplace: - return result.__finalize__(self, method="sort_values") - self._update_inplace(result) - return None - - @overload - def sort_index( - self, - *, - axis: Axis = ..., - level: IndexLabel = ..., - ascending: bool | Sequence[bool] = ..., - inplace: Literal[True], - kind: SortKind = ..., - na_position: NaPosition = ..., - sort_remaining: bool = ..., - ignore_index: bool = ..., - key: IndexKeyFunc = ..., - ) -> None: - ... - - @overload - def sort_index( - self, - *, - axis: Axis = ..., - level: IndexLabel = ..., - ascending: bool | Sequence[bool] = ..., - inplace: Literal[False] = ..., - kind: SortKind = ..., - na_position: NaPosition = ..., - sort_remaining: bool = ..., - ignore_index: bool = ..., - key: IndexKeyFunc = ..., - ) -> Series: - ... - - @overload - def sort_index( - self, - *, - axis: Axis = ..., - level: IndexLabel = ..., - ascending: bool | Sequence[bool] = ..., - inplace: bool = ..., - kind: SortKind = ..., - na_position: NaPosition = ..., - sort_remaining: bool = ..., - ignore_index: bool = ..., - key: IndexKeyFunc = ..., - ) -> Series | None: - ... - - def sort_index( - self, - *, - axis: Axis = 0, - level: IndexLabel | None = None, - ascending: bool | Sequence[bool] = True, - inplace: bool = False, - kind: SortKind = "quicksort", - na_position: NaPosition = "last", - sort_remaining: bool = True, - ignore_index: bool = False, - key: IndexKeyFunc | None = None, - ) -> Series | None: - """ - Sort Series by index labels. - - Returns a new Series sorted by label if `inplace` argument is - ``False``, otherwise updates the original series and returns None. - - Parameters - ---------- - axis : {0 or 'index'} - Unused. Parameter needed for compatibility with DataFrame. - level : int, optional - If not None, sort on values in specified index level(s). - ascending : bool or list-like of bools, default True - Sort ascending vs. descending. When the index is a MultiIndex the - sort direction can be controlled for each level individually. - inplace : bool, default False - If True, perform operation in-place. - kind : {'quicksort', 'mergesort', 'heapsort', 'stable'}, default 'quicksort' - Choice of sorting algorithm. See also :func:`numpy.sort` for more - information. 'mergesort' and 'stable' are the only stable algorithms. For - DataFrames, this option is only applied when sorting on a single - column or label. - na_position : {'first', 'last'}, default 'last' - If 'first' puts NaNs at the beginning, 'last' puts NaNs at the end. - Not implemented for MultiIndex. - sort_remaining : bool, default True - If True and sorting by level and index is multilevel, sort by other - levels too (in order) after sorting by specified level. - ignore_index : bool, default False - If True, the resulting axis will be labeled 0, 1, …, n - 1. - key : callable, optional - If not None, apply the key function to the index values - before sorting. 
This is similar to the `key` argument in the - builtin :meth:`sorted` function, with the notable difference that - this `key` function should be *vectorized*. It should expect an - ``Index`` and return an ``Index`` of the same shape. - - Returns - ------- - Series or None - The original Series sorted by the labels or None if ``inplace=True``. - - See Also - -------- - DataFrame.sort_index: Sort DataFrame by the index. - DataFrame.sort_values: Sort DataFrame by the value. - Series.sort_values : Sort Series by the value. - - Examples - -------- - >>> s = pd.Series(['a', 'b', 'c', 'd'], index=[3, 2, 1, 4]) - >>> s.sort_index() - 1 c - 2 b - 3 a - 4 d - dtype: object - - Sort Descending - - >>> s.sort_index(ascending=False) - 4 d - 3 a - 2 b - 1 c - dtype: object - - By default NaNs are put at the end, but use `na_position` to place - them at the beginning - - >>> s = pd.Series(['a', 'b', 'c', 'd'], index=[3, 2, 1, np.nan]) - >>> s.sort_index(na_position='first') - NaN d - 1.0 c - 2.0 b - 3.0 a - dtype: object - - Specify index level to sort - - >>> arrays = [np.array(['qux', 'qux', 'foo', 'foo', - ... 'baz', 'baz', 'bar', 'bar']), - ... np.array(['two', 'one', 'two', 'one', - ... 'two', 'one', 'two', 'one'])] - >>> s = pd.Series([1, 2, 3, 4, 5, 6, 7, 8], index=arrays) - >>> s.sort_index(level=1) - bar one 8 - baz one 6 - foo one 4 - qux one 2 - bar two 7 - baz two 5 - foo two 3 - qux two 1 - dtype: int64 - - Does not sort by remaining levels when sorting by levels - - >>> s.sort_index(level=1, sort_remaining=False) - qux one 2 - foo one 4 - baz one 6 - bar one 8 - qux two 1 - foo two 3 - baz two 5 - bar two 7 - dtype: int64 - - Apply a key function before sorting - - >>> s = pd.Series([1, 2, 3, 4], index=['A', 'b', 'C', 'd']) - >>> s.sort_index(key=lambda x : x.str.lower()) - A 1 - b 2 - C 3 - d 4 - dtype: int64 - """ - - return super().sort_index( - axis=axis, - level=level, - ascending=ascending, - inplace=inplace, - kind=kind, - na_position=na_position, - sort_remaining=sort_remaining, - ignore_index=ignore_index, - key=key, - ) - - def argsort( - self, - axis: Axis = 0, - kind: SortKind = "quicksort", - order: None = None, - ) -> Series: - """ - Return the integer indices that would sort the Series values. - - Override ndarray.argsort. Argsorts the value, omitting NA/null values, - and places the result in the same locations as the non-NA values. - - Parameters - ---------- - axis : {0 or 'index'} - Unused. Parameter needed for compatibility with DataFrame. - kind : {'mergesort', 'quicksort', 'heapsort', 'stable'}, default 'quicksort' - Choice of sorting algorithm. See :func:`numpy.sort` for more - information. 'mergesort' and 'stable' are the only stable algorithms. - order : None - Has no effect but is accepted for compatibility with numpy. - - Returns - ------- - Series[np.intp] - Positions of values within the sort order with -1 indicating - nan values. - - See Also - -------- - numpy.ndarray.argsort : Returns the indices that would sort this array. - - Examples - -------- - >>> s = pd.Series([3, 2, 1]) - >>> s.argsort() - 0 2 - 1 1 - 2 0 - dtype: int64 - """ - if axis != -1: - # GH#54257 We allow -1 here so that np.argsort(series) works - self._get_axis_number(axis) - - values = self._values - mask = isna(values) - - if mask.any(): - warnings.warn( - "The behavior of Series.argsort in the presence of NA values is " - "deprecated. 
In a future version, NA values will be ordered " - "last instead of set to -1.", - FutureWarning, - stacklevel=find_stack_level(), - ) - result = np.full(len(self), -1, dtype=np.intp) - notmask = ~mask - result[notmask] = np.argsort(values[notmask], kind=kind) - else: - result = np.argsort(values, kind=kind) - - res = self._constructor( - result, index=self.index, name=self.name, dtype=np.intp, copy=False - ) - return res.__finalize__(self, method="argsort") - - def nlargest( - self, n: int = 5, keep: Literal["first", "last", "all"] = "first" - ) -> Series: - """ - Return the largest `n` elements. - - Parameters - ---------- - n : int, default 5 - Return this many descending sorted values. - keep : {'first', 'last', 'all'}, default 'first' - When there are duplicate values that cannot all fit in a - Series of `n` elements: - - - ``first`` : return the first `n` occurrences in order - of appearance. - - ``last`` : return the last `n` occurrences in reverse - order of appearance. - - ``all`` : keep all occurrences. This can result in a Series of - size larger than `n`. - - Returns - ------- - Series - The `n` largest values in the Series, sorted in decreasing order. - - See Also - -------- - Series.nsmallest: Get the `n` smallest elements. - Series.sort_values: Sort Series by values. - Series.head: Return the first `n` rows. - - Notes - ----- - Faster than ``.sort_values(ascending=False).head(n)`` for small `n` - relative to the size of the ``Series`` object. - - Examples - -------- - >>> countries_population = {"Italy": 59000000, "France": 65000000, - ... "Malta": 434000, "Maldives": 434000, - ... "Brunei": 434000, "Iceland": 337000, - ... "Nauru": 11300, "Tuvalu": 11300, - ... "Anguilla": 11300, "Montserrat": 5200} - >>> s = pd.Series(countries_population) - >>> s - Italy 59000000 - France 65000000 - Malta 434000 - Maldives 434000 - Brunei 434000 - Iceland 337000 - Nauru 11300 - Tuvalu 11300 - Anguilla 11300 - Montserrat 5200 - dtype: int64 - - The `n` largest elements where ``n=5`` by default. - - >>> s.nlargest() - France 65000000 - Italy 59000000 - Malta 434000 - Maldives 434000 - Brunei 434000 - dtype: int64 - - The `n` largest elements where ``n=3``. Default `keep` value is 'first' - so Malta will be kept. - - >>> s.nlargest(3) - France 65000000 - Italy 59000000 - Malta 434000 - dtype: int64 - - The `n` largest elements where ``n=3`` and keeping the last duplicates. - Brunei will be kept since it is the last with value 434000 based on - the index order. - - >>> s.nlargest(3, keep='last') - France 65000000 - Italy 59000000 - Brunei 434000 - dtype: int64 - - The `n` largest elements where ``n=3`` with all duplicates kept. Note - that the returned Series has five elements due to the three duplicates. - - >>> s.nlargest(3, keep='all') - France 65000000 - Italy 59000000 - Malta 434000 - Maldives 434000 - Brunei 434000 - dtype: int64 - """ - return selectn.SelectNSeries(self, n=n, keep=keep).nlargest() - - def nsmallest( - self, n: int = 5, keep: Literal["first", "last", "all"] = "first" - ) -> Series: - """ - Return the smallest `n` elements. - - Parameters - ---------- - n : int, default 5 - Return this many ascending sorted values. - keep : {'first', 'last', 'all'}, default 'first' - When there are duplicate values that cannot all fit in a - Series of `n` elements: - - - ``first`` : return the first `n` occurrences in order - of appearance. - - ``last`` : return the last `n` occurrences in reverse - order of appearance. - - ``all`` : keep all occurrences. 
This can result in a Series of - size larger than `n`. - - Returns - ------- - Series - The `n` smallest values in the Series, sorted in increasing order. - - See Also - -------- - Series.nlargest: Get the `n` largest elements. - Series.sort_values: Sort Series by values. - Series.head: Return the first `n` rows. - - Notes - ----- - Faster than ``.sort_values().head(n)`` for small `n` relative to - the size of the ``Series`` object. - - Examples - -------- - >>> countries_population = {"Italy": 59000000, "France": 65000000, - ... "Brunei": 434000, "Malta": 434000, - ... "Maldives": 434000, "Iceland": 337000, - ... "Nauru": 11300, "Tuvalu": 11300, - ... "Anguilla": 11300, "Montserrat": 5200} - >>> s = pd.Series(countries_population) - >>> s - Italy 59000000 - France 65000000 - Brunei 434000 - Malta 434000 - Maldives 434000 - Iceland 337000 - Nauru 11300 - Tuvalu 11300 - Anguilla 11300 - Montserrat 5200 - dtype: int64 - - The `n` smallest elements where ``n=5`` by default. - - >>> s.nsmallest() - Montserrat 5200 - Nauru 11300 - Tuvalu 11300 - Anguilla 11300 - Iceland 337000 - dtype: int64 - - The `n` smallest elements where ``n=3``. Default `keep` value is - 'first' so Nauru and Tuvalu will be kept. - - >>> s.nsmallest(3) - Montserrat 5200 - Nauru 11300 - Tuvalu 11300 - dtype: int64 - - The `n` smallest elements where ``n=3`` and keeping the last - duplicates. Anguilla and Tuvalu will be kept since they are the last - with value 11300 based on the index order. - - >>> s.nsmallest(3, keep='last') - Montserrat 5200 - Anguilla 11300 - Tuvalu 11300 - dtype: int64 - - The `n` smallest elements where ``n=3`` with all duplicates kept. Note - that the returned Series has four elements due to the three duplicates. - - >>> s.nsmallest(3, keep='all') - Montserrat 5200 - Nauru 11300 - Tuvalu 11300 - Anguilla 11300 - dtype: int64 - """ - return selectn.SelectNSeries(self, n=n, keep=keep).nsmallest() - - @doc( - klass=_shared_doc_kwargs["klass"], - extra_params=dedent( - """copy : bool, default True - Whether to copy underlying data.""" - ), - examples=dedent( - """\ - Examples - -------- - >>> s = pd.Series( - ... ["A", "B", "A", "C"], - ... index=[ - ... ["Final exam", "Final exam", "Coursework", "Coursework"], - ... ["History", "Geography", "History", "Geography"], - ... ["January", "February", "March", "April"], - ... ], - ... ) - >>> s - Final exam History January A - Geography February B - Coursework History March A - Geography April C - dtype: object - - In the following example, we will swap the levels of the indices. - Here, we will swap the levels column-wise, but levels can be swapped row-wise - in a similar manner. Note that column-wise is the default behaviour. - By not supplying any arguments for i and j, we swap the last and second to - last indices. - - >>> s.swaplevel() - Final exam January History A - February Geography B - Coursework March History A - April Geography C - dtype: object - - By supplying one argument, we can choose which index to swap the last - index with. We can for example swap the first index with the last one as - follows. - - >>> s.swaplevel(0) - January History Final exam A - February Geography Final exam B - March History Coursework A - April Geography Coursework C - dtype: object - - We can also define explicitly which indices we want to swap by supplying values - for both i and j. Here, we for example swap the first and second indices. 
- - >>> s.swaplevel(0, 1) - History Final exam January A - Geography Final exam February B - History Coursework March A - Geography Coursework April C - dtype: object""" - ), - ) - def swaplevel( - self, i: Level = -2, j: Level = -1, copy: bool | None = None - ) -> Series: - """ - Swap levels i and j in a :class:`MultiIndex`. - - Default is to swap the two innermost levels of the index. - - Parameters - ---------- - i, j : int or str - Levels of the indices to be swapped. Can pass level name as string. - {extra_params} - - Returns - ------- - {klass} - {klass} with levels swapped in MultiIndex. - - {examples} - """ - assert isinstance(self.index, MultiIndex) - result = self.copy(deep=copy and not using_copy_on_write()) - result.index = self.index.swaplevel(i, j) - return result - - def reorder_levels(self, order: Sequence[Level]) -> Series: - """ - Rearrange index levels using input order. - - May not drop or duplicate levels. - - Parameters - ---------- - order : list of int representing new level order - Reference level by number or key. - - Returns - ------- - type of caller (new object) - - Examples - -------- - >>> arrays = [np.array(["dog", "dog", "cat", "cat", "bird", "bird"]), - ... np.array(["white", "black", "white", "black", "white", "black"])] - >>> s = pd.Series([1, 2, 3, 3, 5, 2], index=arrays) - >>> s - dog white 1 - black 2 - cat white 3 - black 3 - bird white 5 - black 2 - dtype: int64 - >>> s.reorder_levels([1, 0]) - white dog 1 - black dog 2 - white cat 3 - black cat 3 - white bird 5 - black bird 2 - dtype: int64 - """ - if not isinstance(self.index, MultiIndex): # pragma: no cover - raise Exception("Can only reorder levels on a hierarchical axis.") - - result = self.copy(deep=None) - assert isinstance(result.index, MultiIndex) - result.index = result.index.reorder_levels(order) - return result - - def explode(self, ignore_index: bool = False) -> Series: - """ - Transform each element of a list-like to a row. - - Parameters - ---------- - ignore_index : bool, default False - If True, the resulting index will be labeled 0, 1, …, n - 1. - - Returns - ------- - Series - Exploded lists to rows; index will be duplicated for these rows. - - See Also - -------- - Series.str.split : Split string values on specified separator. - Series.unstack : Unstack, a.k.a. pivot, Series with MultiIndex - to produce DataFrame. - DataFrame.melt : Unpivot a DataFrame from wide format to long format. - DataFrame.explode : Explode a DataFrame from list-like - columns to long format. - - Notes - ----- - This routine will explode list-likes including lists, tuples, sets, - Series, and np.ndarray. The result dtype of the subset rows will - be object. Scalars will be returned unchanged, and empty list-likes will - result in a np.nan for that row. In addition, the ordering of elements in - the output will be non-deterministic when exploding sets. - - Reference :ref:`the user guide ` for more examples. 
- - Examples - -------- - >>> s = pd.Series([[1, 2, 3], 'foo', [], [3, 4]]) - >>> s - 0 [1, 2, 3] - 1 foo - 2 [] - 3 [3, 4] - dtype: object - - >>> s.explode() - 0 1 - 0 2 - 0 3 - 1 foo - 2 NaN - 3 3 - 3 4 - dtype: object - """ - if isinstance(self.dtype, ArrowDtype) and self.dtype.type == list: - values, counts = self._values._explode() - elif len(self) and is_object_dtype(self.dtype): - values, counts = reshape.explode(np.asarray(self._values)) - else: - result = self.copy() - return result.reset_index(drop=True) if ignore_index else result - - if ignore_index: - index = default_index(len(values)) - else: - index = self.index.repeat(counts) - - return self._constructor(values, index=index, name=self.name, copy=False) - - def unstack( - self, - level: IndexLabel = -1, - fill_value: Hashable | None = None, - sort: bool = True, - ) -> DataFrame: - """ - Unstack, also known as pivot, Series with MultiIndex to produce DataFrame. - - Parameters - ---------- - level : int, str, or list of these, default last level - Level(s) to unstack, can pass level name. - fill_value : scalar value, default None - Value to use when replacing NaN values. - sort : bool, default True - Sort the level(s) in the resulting MultiIndex columns. - - Returns - ------- - DataFrame - Unstacked Series. - - Notes - ----- - Reference :ref:`the user guide ` for more examples. - - Examples - -------- - >>> s = pd.Series([1, 2, 3, 4], - ... index=pd.MultiIndex.from_product([['one', 'two'], - ... ['a', 'b']])) - >>> s - one a 1 - b 2 - two a 3 - b 4 - dtype: int64 - - >>> s.unstack(level=-1) - a b - one 1 2 - two 3 4 - - >>> s.unstack(level=0) - one two - a 1 3 - b 2 4 - """ - from pandas.core.reshape.reshape import unstack - - return unstack(self, level, fill_value, sort) - - # ---------------------------------------------------------------------- - # function application - - def map( - self, - arg: Callable | Mapping | Series, - na_action: Literal["ignore"] | None = None, - ) -> Series: - """ - Map values of Series according to an input mapping or function. - - Used for substituting each value in a Series with another value, - that may be derived from a function, a ``dict`` or - a :class:`Series`. - - Parameters - ---------- - arg : function, collections.abc.Mapping subclass or Series - Mapping correspondence. - na_action : {None, 'ignore'}, default None - If 'ignore', propagate NaN values, without passing them to the - mapping correspondence. - - Returns - ------- - Series - Same index as caller. - - See Also - -------- - Series.apply : For applying more complex functions on a Series. - Series.replace: Replace values given in `to_replace` with `value`. - DataFrame.apply : Apply a function row-/column-wise. - DataFrame.map : Apply a function elementwise on a whole DataFrame. - - Notes - ----- - When ``arg`` is a dictionary, values in Series that are not in the - dictionary (as keys) are converted to ``NaN``. However, if the - dictionary is a ``dict`` subclass that defines ``__missing__`` (i.e. - provides a method for default values), then this default is used - rather than ``NaN``. - - Examples - -------- - >>> s = pd.Series(['cat', 'dog', np.nan, 'rabbit']) - >>> s - 0 cat - 1 dog - 2 NaN - 3 rabbit - dtype: object - - ``map`` accepts a ``dict`` or a ``Series``. Values that are not found - in the ``dict`` are converted to ``NaN``, unless the dict has a default - value (e.g. 
``defaultdict``): - - >>> s.map({'cat': 'kitten', 'dog': 'puppy'}) - 0 kitten - 1 puppy - 2 NaN - 3 NaN - dtype: object - - It also accepts a function: - - >>> s.map('I am a {}'.format) - 0 I am a cat - 1 I am a dog - 2 I am a nan - 3 I am a rabbit - dtype: object - - To avoid applying the function to missing values (and keep them as - ``NaN``) ``na_action='ignore'`` can be used: - - >>> s.map('I am a {}'.format, na_action='ignore') - 0 I am a cat - 1 I am a dog - 2 NaN - 3 I am a rabbit - dtype: object - """ - new_values = self._map_values(arg, na_action=na_action) - return self._constructor(new_values, index=self.index, copy=False).__finalize__( - self, method="map" - ) - - def _gotitem(self, key, ndim, subset=None) -> Self: - """ - Sub-classes to define. Return a sliced object. - - Parameters - ---------- - key : string / list of selections - ndim : {1, 2} - Requested ndim of result. - subset : object, default None - Subset to act on. - """ - return self - - _agg_see_also_doc = dedent( - """ - See Also - -------- - Series.apply : Invoke function on a Series. - Series.transform : Transform function producing a Series with like indexes. - """ - ) - - _agg_examples_doc = dedent( - """ - Examples - -------- - >>> s = pd.Series([1, 2, 3, 4]) - >>> s - 0 1 - 1 2 - 2 3 - 3 4 - dtype: int64 - - >>> s.agg('min') - 1 - - >>> s.agg(['min', 'max']) - min 1 - max 4 - dtype: int64 - """ - ) - - @doc( - _shared_docs["aggregate"], - klass=_shared_doc_kwargs["klass"], - axis=_shared_doc_kwargs["axis"], - see_also=_agg_see_also_doc, - examples=_agg_examples_doc, - ) - def aggregate(self, func=None, axis: Axis = 0, *args, **kwargs): - # Validate the axis parameter - self._get_axis_number(axis) - - # if func is None, will switch to user-provided "named aggregation" kwargs - if func is None: - func = dict(kwargs.items()) - - op = SeriesApply(self, func, args=args, kwargs=kwargs) - result = op.agg() - return result - - agg = aggregate - - @doc( - _shared_docs["transform"], - klass=_shared_doc_kwargs["klass"], - axis=_shared_doc_kwargs["axis"], - ) - def transform( - self, func: AggFuncType, axis: Axis = 0, *args, **kwargs - ) -> DataFrame | Series: - # Validate axis argument - self._get_axis_number(axis) - ser = self.copy(deep=False) if using_copy_on_write() else self - result = SeriesApply(ser, func=func, args=args, kwargs=kwargs).transform() - return result - - def apply( - self, - func: AggFuncType, - convert_dtype: bool | lib.NoDefault = lib.no_default, - args: tuple[Any, ...] = (), - *, - by_row: Literal[False, "compat"] = "compat", - **kwargs, - ) -> DataFrame | Series: - """ - Invoke function on values of Series. - - Can be ufunc (a NumPy function that applies to the entire Series) - or a Python function that only works on single values. - - Parameters - ---------- - func : function - Python function or NumPy ufunc to apply. - convert_dtype : bool, default True - Try to find better dtype for elementwise function results. If - False, leave as dtype=object. Note that the dtype is always - preserved for some extension array dtypes, such as Categorical. - - .. deprecated:: 2.1.0 - ``convert_dtype`` has been deprecated. Do ``ser.astype(object).apply()`` - instead if you want ``convert_dtype=False``. - args : tuple - Positional arguments passed to func after the series value. - by_row : False or "compat", default "compat" - If ``"compat"`` and func is a callable, func will be passed each element of - the Series, like ``Series.map``. 
If func is a list or dict of - callables, will first try to translate each func into pandas methods. If - that doesn't work, will try call to apply again with ``by_row="compat"`` - and if that fails, will call apply again with ``by_row=False`` - (backward compatible). - If False, the func will be passed the whole Series at once. - - ``by_row`` has no effect when ``func`` is a string. - - .. versionadded:: 2.1.0 - **kwargs - Additional keyword arguments passed to func. - - Returns - ------- - Series or DataFrame - If func returns a Series object the result will be a DataFrame. - - See Also - -------- - Series.map: For element-wise operations. - Series.agg: Only perform aggregating type operations. - Series.transform: Only perform transforming type operations. - - Notes - ----- - Functions that mutate the passed object can produce unexpected - behavior or errors and are not supported. See :ref:`gotchas.udf-mutation` - for more details. - - Examples - -------- - Create a series with typical summer temperatures for each city. - - >>> s = pd.Series([20, 21, 12], - ... index=['London', 'New York', 'Helsinki']) - >>> s - London 20 - New York 21 - Helsinki 12 - dtype: int64 - - Square the values by defining a function and passing it as an - argument to ``apply()``. - - >>> def square(x): - ... return x ** 2 - >>> s.apply(square) - London 400 - New York 441 - Helsinki 144 - dtype: int64 - - Square the values by passing an anonymous function as an - argument to ``apply()``. - - >>> s.apply(lambda x: x ** 2) - London 400 - New York 441 - Helsinki 144 - dtype: int64 - - Define a custom function that needs additional positional - arguments and pass these additional arguments using the - ``args`` keyword. - - >>> def subtract_custom_value(x, custom_value): - ... return x - custom_value - - >>> s.apply(subtract_custom_value, args=(5,)) - London 15 - New York 16 - Helsinki 7 - dtype: int64 - - Define a custom function that takes keyword arguments - and pass these arguments to ``apply``. - - >>> def add_custom_values(x, **kwargs): - ... for month in kwargs: - ... x += kwargs[month] - ... return x - - >>> s.apply(add_custom_values, june=30, july=20, august=25) - London 95 - New York 96 - Helsinki 87 - dtype: int64 - - Use a function from the Numpy library. - - >>> s.apply(np.log) - London 2.995732 - New York 3.044522 - Helsinki 2.484907 - dtype: float64 - """ - return SeriesApply( - self, - func, - convert_dtype=convert_dtype, - by_row=by_row, - args=args, - kwargs=kwargs, - ).apply() - - def _reindex_indexer( - self, - new_index: Index | None, - indexer: npt.NDArray[np.intp] | None, - copy: bool | None, - ) -> Series: - # Note: new_index is None iff indexer is None - # if not None, indexer is np.intp - if indexer is None and ( - new_index is None or new_index.names == self.index.names - ): - if using_copy_on_write(): - return self.copy(deep=copy) - if copy or copy is None: - return self.copy(deep=copy) - return self - - new_values = algorithms.take_nd( - self._values, indexer, allow_fill=True, fill_value=None - ) - return self._constructor(new_values, index=new_index, copy=False) - - def _needs_reindex_multi(self, axes, method, level) -> bool: - """ - Check if we do need a multi reindex; this is for compat with - higher dims. - """ - return False - - @overload - def rename( - self, - index: Renamer | Hashable | None = ..., - *, - axis: Axis | None = ..., - copy: bool = ..., - inplace: Literal[True], - level: Level | None = ..., - errors: IgnoreRaise = ..., - ) -> None: - ... 
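# The @overload stubs above and below encode the inplace contract for type
# checkers: inplace=True narrows the return type to None, inplace=False to
# Series. A minimal sketch of what that means at runtime, using only the
# public Series.rename API documented further down:
import pandas as pd

s = pd.Series([1, 2, 3])
out = s.rename("my_name")            # returns a new Series; s is untouched
assert out.name == "my_name" and s.name is None
assert s.rename("my_name", inplace=True) is None   # mutates s instead
assert s.name == "my_name"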
- - @overload - def rename( - self, - index: Renamer | Hashable | None = ..., - *, - axis: Axis | None = ..., - copy: bool = ..., - inplace: Literal[False] = ..., - level: Level | None = ..., - errors: IgnoreRaise = ..., - ) -> Series: - ... - - @overload - def rename( - self, - index: Renamer | Hashable | None = ..., - *, - axis: Axis | None = ..., - copy: bool = ..., - inplace: bool = ..., - level: Level | None = ..., - errors: IgnoreRaise = ..., - ) -> Series | None: - ... - - def rename( - self, - index: Renamer | Hashable | None = None, - *, - axis: Axis | None = None, - copy: bool | None = None, - inplace: bool = False, - level: Level | None = None, - errors: IgnoreRaise = "ignore", - ) -> Series | None: - """ - Alter Series index labels or name. - - Function / dict values must be unique (1-to-1). Labels not contained in - a dict / Series will be left as-is. Extra labels listed don't throw an - error. - - Alternatively, change ``Series.name`` with a scalar value. - - See the :ref:`user guide ` for more. - - Parameters - ---------- - index : scalar, hashable sequence, dict-like or function optional - Functions or dict-like are transformations to apply to - the index. - Scalar or hashable sequence-like will alter the ``Series.name`` - attribute. - axis : {0 or 'index'} - Unused. Parameter needed for compatibility with DataFrame. - copy : bool, default True - Also copy underlying data. - inplace : bool, default False - Whether to return a new Series. If True the value of copy is ignored. - level : int or level name, default None - In case of MultiIndex, only rename labels in the specified level. - errors : {'ignore', 'raise'}, default 'ignore' - If 'raise', raise `KeyError` when a `dict-like mapper` or - `index` contains labels that are not present in the index being transformed. - If 'ignore', existing keys will be renamed and extra keys will be ignored. - - Returns - ------- - Series or None - Series with index labels or name altered or None if ``inplace=True``. - - See Also - -------- - DataFrame.rename : Corresponding DataFrame method. - Series.rename_axis : Set the name of the axis. - - Examples - -------- - >>> s = pd.Series([1, 2, 3]) - >>> s - 0 1 - 1 2 - 2 3 - dtype: int64 - >>> s.rename("my_name") # scalar, changes Series.name - 0 1 - 1 2 - 2 3 - Name: my_name, dtype: int64 - >>> s.rename(lambda x: x ** 2) # function, changes labels - 0 1 - 1 2 - 4 3 - dtype: int64 - >>> s.rename({1: 3, 2: 5}) # mapping, changes labels - 0 1 - 3 2 - 5 3 - dtype: int64 - """ - if axis is not None: - # Make sure we raise if an invalid 'axis' is passed. 
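# (For a Series the only valid values are 0 and "index"; _get_axis_number
#  raises on anything else, which is why the docstring marks `axis` as
#  unused and kept only for DataFrame compatibility.)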
- axis = self._get_axis_number(axis) - - if callable(index) or is_dict_like(index): - # error: Argument 1 to "_rename" of "NDFrame" has incompatible - # type "Union[Union[Mapping[Any, Hashable], Callable[[Any], - # Hashable]], Hashable, None]"; expected "Union[Mapping[Any, - # Hashable], Callable[[Any], Hashable], None]" - return super()._rename( - index, # type: ignore[arg-type] - copy=copy, - inplace=inplace, - level=level, - errors=errors, - ) - else: - return self._set_name(index, inplace=inplace, deep=copy) - - @Appender( - """ - Examples - -------- - >>> s = pd.Series([1, 2, 3]) - >>> s - 0 1 - 1 2 - 2 3 - dtype: int64 - - >>> s.set_axis(['a', 'b', 'c'], axis=0) - a 1 - b 2 - c 3 - dtype: int64 - """ - ) - @Substitution( - klass=_shared_doc_kwargs["klass"], - axes_single_arg=_shared_doc_kwargs["axes_single_arg"], - extended_summary_sub="", - axis_description_sub="", - see_also_sub="", - ) - @Appender(NDFrame.set_axis.__doc__) - def set_axis( - self, - labels, - *, - axis: Axis = 0, - copy: bool | None = None, - ) -> Series: - return super().set_axis(labels, axis=axis, copy=copy) - - # error: Cannot determine type of 'reindex' - @doc( - NDFrame.reindex, # type: ignore[has-type] - klass=_shared_doc_kwargs["klass"], - optional_reindex=_shared_doc_kwargs["optional_reindex"], - ) - def reindex( # type: ignore[override] - self, - index=None, - *, - axis: Axis | None = None, - method: ReindexMethod | None = None, - copy: bool | None = None, - level: Level | None = None, - fill_value: Scalar | None = None, - limit: int | None = None, - tolerance=None, - ) -> Series: - return super().reindex( - index=index, - method=method, - copy=copy, - level=level, - fill_value=fill_value, - limit=limit, - tolerance=tolerance, - ) - - @doc(NDFrame.rename_axis) - def rename_axis( # type: ignore[override] - self, - mapper: IndexLabel | lib.NoDefault = lib.no_default, - *, - index=lib.no_default, - axis: Axis = 0, - copy: bool = True, - inplace: bool = False, - ) -> Self | None: - return super().rename_axis( - mapper=mapper, - index=index, - axis=axis, - copy=copy, - inplace=inplace, - ) - - @overload - def drop( - self, - labels: IndexLabel = ..., - *, - axis: Axis = ..., - index: IndexLabel = ..., - columns: IndexLabel = ..., - level: Level | None = ..., - inplace: Literal[True], - errors: IgnoreRaise = ..., - ) -> None: - ... - - @overload - def drop( - self, - labels: IndexLabel = ..., - *, - axis: Axis = ..., - index: IndexLabel = ..., - columns: IndexLabel = ..., - level: Level | None = ..., - inplace: Literal[False] = ..., - errors: IgnoreRaise = ..., - ) -> Series: - ... - - @overload - def drop( - self, - labels: IndexLabel = ..., - *, - axis: Axis = ..., - index: IndexLabel = ..., - columns: IndexLabel = ..., - level: Level | None = ..., - inplace: bool = ..., - errors: IgnoreRaise = ..., - ) -> Series | None: - ... - - def drop( - self, - labels: IndexLabel | None = None, - *, - axis: Axis = 0, - index: IndexLabel | None = None, - columns: IndexLabel | None = None, - level: Level | None = None, - inplace: bool = False, - errors: IgnoreRaise = "raise", - ) -> Series | None: - """ - Return Series with specified index labels removed. - - Remove elements of a Series based on specifying the index labels. - When using a multi-index, labels on different levels can be removed - by specifying the level. - - Parameters - ---------- - labels : single label or list-like - Index labels to drop. - axis : {0 or 'index'} - Unused. Parameter needed for compatibility with DataFrame. 
- index : single label or list-like - Redundant for application on Series, but 'index' can be used instead - of 'labels'. - columns : single label or list-like - No change is made to the Series; use 'index' or 'labels' instead. - level : int or level name, optional - For MultiIndex, level for which the labels will be removed. - inplace : bool, default False - If True, do operation inplace and return None. - errors : {'ignore', 'raise'}, default 'raise' - If 'ignore', suppress error and only existing labels are dropped. - - Returns - ------- - Series or None - Series with specified index labels removed or None if ``inplace=True``. - - Raises - ------ - KeyError - If none of the labels are found in the index. - - See Also - -------- - Series.reindex : Return only specified index labels of Series. - Series.dropna : Return series without null values. - Series.drop_duplicates : Return Series with duplicate values removed. - DataFrame.drop : Drop specified labels from rows or columns. - - Examples - -------- - >>> s = pd.Series(data=np.arange(3), index=['A', 'B', 'C']) - >>> s - A 0 - B 1 - C 2 - dtype: int64 - - Drop labels B en C - - >>> s.drop(labels=['B', 'C']) - A 0 - dtype: int64 - - Drop 2nd level label in MultiIndex Series - - >>> midx = pd.MultiIndex(levels=[['llama', 'cow', 'falcon'], - ... ['speed', 'weight', 'length']], - ... codes=[[0, 0, 0, 1, 1, 1, 2, 2, 2], - ... [0, 1, 2, 0, 1, 2, 0, 1, 2]]) - >>> s = pd.Series([45, 200, 1.2, 30, 250, 1.5, 320, 1, 0.3], - ... index=midx) - >>> s - llama speed 45.0 - weight 200.0 - length 1.2 - cow speed 30.0 - weight 250.0 - length 1.5 - falcon speed 320.0 - weight 1.0 - length 0.3 - dtype: float64 - - >>> s.drop(labels='weight', level=1) - llama speed 45.0 - length 1.2 - cow speed 30.0 - length 1.5 - falcon speed 320.0 - length 0.3 - dtype: float64 - """ - return super().drop( - labels=labels, - axis=axis, - index=index, - columns=columns, - level=level, - inplace=inplace, - errors=errors, - ) - - def pop(self, item: Hashable) -> Any: - """ - Return item and drops from series. Raise KeyError if not found. - - Parameters - ---------- - item : label - Index of the element that needs to be removed. - - Returns - ------- - Value that is popped from series. - - Examples - -------- - >>> ser = pd.Series([1,2,3]) - - >>> ser.pop(0) - 1 - - >>> ser - 1 2 - 2 3 - dtype: int64 - """ - return super().pop(item=item) - - @doc(INFO_DOCSTRING, **series_sub_kwargs) - def info( - self, - verbose: bool | None = None, - buf: IO[str] | None = None, - max_cols: int | None = None, - memory_usage: bool | str | None = None, - show_counts: bool = True, - ) -> None: - return SeriesInfo(self, memory_usage).render( - buf=buf, - max_cols=max_cols, - verbose=verbose, - show_counts=show_counts, - ) - - def _replace_single(self, to_replace, method: str, inplace: bool, limit): - """ - Replaces values in a Series using the fill method specified when no - replacement value is given in the replace method - """ - - result = self if inplace else self.copy() - - values = result._values - mask = missing.mask_missing(values, to_replace) - - if isinstance(values, ExtensionArray): - # dispatch to the EA's _pad_mask_inplace method - values._fill_mask_inplace(method, limit, mask) - else: - fill_f = missing.get_fill_func(method) - fill_f(values, limit=limit, mask=mask) - - if inplace: - return - return result - - def memory_usage(self, index: bool = True, deep: bool = False) -> int: - """ - Return the memory usage of the Series. 
- - The memory usage can optionally include the contribution of - the index and of elements of `object` dtype. - - Parameters - ---------- - index : bool, default True - Specifies whether to include the memory usage of the Series index. - deep : bool, default False - If True, introspect the data deeply by interrogating - `object` dtypes for system-level memory consumption, and include - it in the returned value. - - Returns - ------- - int - Bytes of memory consumed. - - See Also - -------- - numpy.ndarray.nbytes : Total bytes consumed by the elements of the - array. - DataFrame.memory_usage : Bytes consumed by a DataFrame. - - Examples - -------- - >>> s = pd.Series(range(3)) - >>> s.memory_usage() - 152 - - Not including the index gives the size of the rest of the data, which - is necessarily smaller: - - >>> s.memory_usage(index=False) - 24 - - The memory footprint of `object` values is ignored by default: - - >>> s = pd.Series(["a", "b"]) - >>> s.values - array(['a', 'b'], dtype=object) - >>> s.memory_usage() - 144 - >>> s.memory_usage(deep=True) - 244 - """ - v = self._memory_usage(deep=deep) - if index: - v += self.index.memory_usage(deep=deep) - return v - - def isin(self, values) -> Series: - """ - Whether elements in Series are contained in `values`. - - Return a boolean Series showing whether each element in the Series - matches an element in the passed sequence of `values` exactly. - - Parameters - ---------- - values : set or list-like - The sequence of values to test. Passing in a single string will - raise a ``TypeError``. Instead, turn a single string into a - list of one element. - - Returns - ------- - Series - Series of booleans indicating if each element is in values. - - Raises - ------ - TypeError - * If `values` is a string - - See Also - -------- - DataFrame.isin : Equivalent method on DataFrame. - - Examples - -------- - >>> s = pd.Series(['llama', 'cow', 'llama', 'beetle', 'llama', - ... 'hippo'], name='animal') - >>> s.isin(['cow', 'llama']) - 0 True - 1 True - 2 True - 3 False - 4 True - 5 False - Name: animal, dtype: bool - - To invert the boolean values, use the ``~`` operator: - - >>> ~s.isin(['cow', 'llama']) - 0 False - 1 False - 2 False - 3 True - 4 False - 5 True - Name: animal, dtype: bool - - Passing a single string as ``s.isin('llama')`` will raise an error. Use - a list of one element instead: - - >>> s.isin(['llama']) - 0 True - 1 False - 2 True - 3 False - 4 True - 5 False - Name: animal, dtype: bool - - Strings and integers are distinct and are therefore not comparable: - - >>> pd.Series([1]).isin(['1']) - 0 False - dtype: bool - >>> pd.Series([1.1]).isin(['1.1']) - 0 False - dtype: bool - """ - result = algorithms.isin(self._values, values) - return self._constructor(result, index=self.index, copy=False).__finalize__( - self, method="isin" - ) - - def between( - self, - left, - right, - inclusive: Literal["both", "neither", "left", "right"] = "both", - ) -> Series: - """ - Return boolean Series equivalent to left <= series <= right. - - This function returns a boolean vector containing `True` wherever the - corresponding Series element is between the boundary values `left` and - `right`. NA values are treated as `False`. - - Parameters - ---------- - left : scalar or list-like - Left boundary. - right : scalar or list-like - Right boundary. - inclusive : {"both", "neither", "left", "right"} - Include boundaries. Whether to set each bound as closed or open. - - .. 
versionchanged:: 1.3.0 - - Returns - ------- - Series - Series representing whether each element is between left and - right (inclusive). - - See Also - -------- - Series.gt : Greater than of series and other. - Series.lt : Less than of series and other. - - Notes - ----- - This function is equivalent to ``(left <= ser) & (ser <= right)`` - - Examples - -------- - >>> s = pd.Series([2, 0, 4, 8, np.nan]) - - Boundary values are included by default: - - >>> s.between(1, 4) - 0 True - 1 False - 2 True - 3 False - 4 False - dtype: bool - - With `inclusive` set to ``"neither"`` boundary values are excluded: - - >>> s.between(1, 4, inclusive="neither") - 0 True - 1 False - 2 False - 3 False - 4 False - dtype: bool - - `left` and `right` can be any scalar value: - - >>> s = pd.Series(['Alice', 'Bob', 'Carol', 'Eve']) - >>> s.between('Anna', 'Daniel') - 0 False - 1 True - 2 True - 3 False - dtype: bool - """ - if inclusive == "both": - lmask = self >= left - rmask = self <= right - elif inclusive == "left": - lmask = self >= left - rmask = self < right - elif inclusive == "right": - lmask = self > left - rmask = self <= right - elif inclusive == "neither": - lmask = self > left - rmask = self < right - else: - raise ValueError( - "Inclusive has to be either string of 'both'," - "'left', 'right', or 'neither'." - ) - - return lmask & rmask - - # ---------------------------------------------------------------------- - # Convert to types that support pd.NA - - def _convert_dtypes( - self, - infer_objects: bool = True, - convert_string: bool = True, - convert_integer: bool = True, - convert_boolean: bool = True, - convert_floating: bool = True, - dtype_backend: DtypeBackend = "numpy_nullable", - ) -> Series: - input_series = self - if infer_objects: - input_series = input_series.infer_objects() - if is_object_dtype(input_series.dtype): - input_series = input_series.copy(deep=None) - - if convert_string or convert_integer or convert_boolean or convert_floating: - inferred_dtype = convert_dtypes( - input_series._values, - convert_string, - convert_integer, - convert_boolean, - convert_floating, - infer_objects, - dtype_backend, - ) - result = input_series.astype(inferred_dtype) - else: - result = input_series.copy(deep=None) - return result - - # error: Cannot determine type of 'isna' - @doc(NDFrame.isna, klass=_shared_doc_kwargs["klass"]) # type: ignore[has-type] - def isna(self) -> Series: - return NDFrame.isna(self) - - # error: Cannot determine type of 'isna' - @doc(NDFrame.isna, klass=_shared_doc_kwargs["klass"]) # type: ignore[has-type] - def isnull(self) -> Series: - """ - Series.isnull is an alias for Series.isna. - """ - return super().isnull() - - # error: Cannot determine type of 'notna' - @doc(NDFrame.notna, klass=_shared_doc_kwargs["klass"]) # type: ignore[has-type] - def notna(self) -> Series: - return super().notna() - - # error: Cannot determine type of 'notna' - @doc(NDFrame.notna, klass=_shared_doc_kwargs["klass"]) # type: ignore[has-type] - def notnull(self) -> Series: - """ - Series.notnull is an alias for Series.notna. - """ - return super().notnull() - - @overload - def dropna( - self, - *, - axis: Axis = ..., - inplace: Literal[False] = ..., - how: AnyAll | None = ..., - ignore_index: bool = ..., - ) -> Series: - ... - - @overload - def dropna( - self, - *, - axis: Axis = ..., - inplace: Literal[True], - how: AnyAll | None = ..., - ignore_index: bool = ..., - ) -> None: - ... 
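# The `between` implementation above is literally the chained-comparison
# masks from its Notes section; a quick check of two of the inclusive
# modes (standard public API, nothing assumed):
import numpy as np
import pandas as pd

s = pd.Series([2, 0, 4, 8, np.nan])
assert s.between(1, 4).equals((1 <= s) & (s <= 4))                  # "both"
assert s.between(1, 4, inclusive="neither").equals((1 < s) & (s < 4))
# NaN compares False on both sides, so it is treated as not-between.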
- - def dropna( - self, - *, - axis: Axis = 0, - inplace: bool = False, - how: AnyAll | None = None, - ignore_index: bool = False, - ) -> Series | None: - """ - Return a new Series with missing values removed. - - See the :ref:`User Guide ` for more on which values are - considered missing, and how to work with missing data. - - Parameters - ---------- - axis : {0 or 'index'} - Unused. Parameter needed for compatibility with DataFrame. - inplace : bool, default False - If True, do operation inplace and return None. - how : str, optional - Not in use. Kept for compatibility. - ignore_index : bool, default ``False`` - If ``True``, the resulting axis will be labeled 0, 1, …, n - 1. - - .. versionadded:: 2.0.0 - - Returns - ------- - Series or None - Series with NA entries dropped from it or None if ``inplace=True``. - - See Also - -------- - Series.isna: Indicate missing values. - Series.notna : Indicate existing (non-missing) values. - Series.fillna : Replace missing values. - DataFrame.dropna : Drop rows or columns which contain NA values. - Index.dropna : Drop missing indices. - - Examples - -------- - >>> ser = pd.Series([1., 2., np.nan]) - >>> ser - 0 1.0 - 1 2.0 - 2 NaN - dtype: float64 - - Drop NA values from a Series. - - >>> ser.dropna() - 0 1.0 - 1 2.0 - dtype: float64 - - Empty strings are not considered NA values. ``None`` is considered an - NA value. - - >>> ser = pd.Series([np.nan, 2, pd.NaT, '', None, 'I stay']) - >>> ser - 0 NaN - 1 2 - 2 NaT - 3 - 4 None - 5 I stay - dtype: object - >>> ser.dropna() - 1 2 - 3 - 5 I stay - dtype: object - """ - inplace = validate_bool_kwarg(inplace, "inplace") - ignore_index = validate_bool_kwarg(ignore_index, "ignore_index") - # Validate the axis parameter - self._get_axis_number(axis or 0) - - if self._can_hold_na: - result = remove_na_arraylike(self) - else: - if not inplace: - result = self.copy(deep=None) - else: - result = self - - if ignore_index: - result.index = default_index(len(result)) - - if inplace: - return self._update_inplace(result) - else: - return result - - # ---------------------------------------------------------------------- - # Time series-oriented methods - - def to_timestamp( - self, - freq=None, - how: Literal["s", "e", "start", "end"] = "start", - copy: bool | None = None, - ) -> Series: - """ - Cast to DatetimeIndex of Timestamps, at *beginning* of period. - - Parameters - ---------- - freq : str, default frequency of PeriodIndex - Desired frequency. - how : {'s', 'e', 'start', 'end'} - Convention for converting period to timestamp; start of period - vs. end. - copy : bool, default True - Whether or not to return a copy. 
- - Returns - ------- - Series with DatetimeIndex - - Examples - -------- - >>> idx = pd.PeriodIndex(['2023', '2024', '2025'], freq='Y') - >>> s1 = pd.Series([1, 2, 3], index=idx) - >>> s1 - 2023 1 - 2024 2 - 2025 3 - Freq: A-DEC, dtype: int64 - - The resulting frequency of the Timestamps is `YearBegin` - - >>> s1 = s1.to_timestamp() - >>> s1 - 2023-01-01 1 - 2024-01-01 2 - 2025-01-01 3 - Freq: AS-JAN, dtype: int64 - - Using `freq` which is the offset that the Timestamps will have - - >>> s2 = pd.Series([1, 2, 3], index=idx) - >>> s2 = s2.to_timestamp(freq='M') - >>> s2 - 2023-01-31 1 - 2024-01-31 2 - 2025-01-31 3 - Freq: A-JAN, dtype: int64 - """ - if not isinstance(self.index, PeriodIndex): - raise TypeError(f"unsupported Type {type(self.index).__name__}") - - new_obj = self.copy(deep=copy and not using_copy_on_write()) - new_index = self.index.to_timestamp(freq=freq, how=how) - setattr(new_obj, "index", new_index) - return new_obj - - def to_period(self, freq: str | None = None, copy: bool | None = None) -> Series: - """ - Convert Series from DatetimeIndex to PeriodIndex. - - Parameters - ---------- - freq : str, default None - Frequency associated with the PeriodIndex. - copy : bool, default True - Whether or not to return a copy. - - Returns - ------- - Series - Series with index converted to PeriodIndex. - - Examples - -------- - >>> idx = pd.DatetimeIndex(['2023', '2024', '2025']) - >>> s = pd.Series([1, 2, 3], index=idx) - >>> s = s.to_period() - >>> s - 2023 1 - 2024 2 - 2025 3 - Freq: A-DEC, dtype: int64 - - Viewing the index - - >>> s.index - PeriodIndex(['2023', '2024', '2025'], dtype='period[A-DEC]') - """ - if not isinstance(self.index, DatetimeIndex): - raise TypeError(f"unsupported Type {type(self.index).__name__}") - - new_obj = self.copy(deep=copy and not using_copy_on_write()) - new_index = self.index.to_period(freq=freq) - setattr(new_obj, "index", new_index) - return new_obj - - # ---------------------------------------------------------------------- - # Add index - _AXIS_ORDERS: list[Literal["index", "columns"]] = ["index"] - _AXIS_LEN = len(_AXIS_ORDERS) - _info_axis_number: Literal[0] = 0 - _info_axis_name: Literal["index"] = "index" - - index = properties.AxisProperty( - axis=0, - doc=""" - The index (axis labels) of the Series. - - The index of a Series is used to label and identify each element of the - underlying data. The index can be thought of as an immutable ordered set - (technically a multi-set, as it may contain duplicate labels), and is - used to index and align data in pandas. - - Returns - ------- - Index - The index labels of the Series. - - See Also - -------- - Series.reindex : Conform Series to new index. - Series.set_index : Set Series as DataFrame index. - Index : The base pandas index type. - - Notes - ----- - For more information on pandas indexing, see the `indexing user guide - `__. 
- - Examples - -------- - To create a Series with a custom index and view the index labels: - - >>> cities = ['Kolkata', 'Chicago', 'Toronto', 'Lisbon'] - >>> populations = [14.85, 2.71, 2.93, 0.51] - >>> city_series = pd.Series(populations, index=cities) - >>> city_series.index - Index(['Kolkata', 'Chicago', 'Toronto', 'Lisbon'], dtype='object') - - To change the index labels of an existing Series: - - >>> city_series.index = ['KOL', 'CHI', 'TOR', 'LIS'] - >>> city_series.index - Index(['KOL', 'CHI', 'TOR', 'LIS'], dtype='object') - """, - ) - - # ---------------------------------------------------------------------- - # Accessor Methods - # ---------------------------------------------------------------------- - str = CachedAccessor("str", StringMethods) - dt = CachedAccessor("dt", CombinedDatetimelikeProperties) - cat = CachedAccessor("cat", CategoricalAccessor) - plot = CachedAccessor("plot", pandas.plotting.PlotAccessor) - sparse = CachedAccessor("sparse", SparseAccessor) - - # ---------------------------------------------------------------------- - # Add plotting methods to Series - hist = pandas.plotting.hist_series - - # ---------------------------------------------------------------------- - # Template-Based Arithmetic/Comparison Methods - - def _cmp_method(self, other, op): - res_name = ops.get_op_result_name(self, other) - - if isinstance(other, Series) and not self._indexed_same(other): - raise ValueError("Can only compare identically-labeled Series objects") - - lvalues = self._values - rvalues = extract_array(other, extract_numpy=True, extract_range=True) - - res_values = ops.comparison_op(lvalues, rvalues, op) - - return self._construct_result(res_values, name=res_name) - - def _logical_method(self, other, op): - res_name = ops.get_op_result_name(self, other) - self, other = self._align_for_op(other, align_asobject=True) - - lvalues = self._values - rvalues = extract_array(other, extract_numpy=True, extract_range=True) - - res_values = ops.logical_op(lvalues, rvalues, op) - return self._construct_result(res_values, name=res_name) - - def _arith_method(self, other, op): - self, other = self._align_for_op(other) - return base.IndexOpsMixin._arith_method(self, other, op) - - def _align_for_op(self, right, align_asobject: bool = False): - """align lhs and rhs Series""" - # TODO: Different from DataFrame._align_for_op, list, tuple and ndarray - # are not coerced here - # because Series has inconsistencies described in GH#13637 - left = self - - if isinstance(right, Series): - # avoid repeated alignment - if not left.index.equals(right.index): - if align_asobject: - if left.dtype not in (object, np.bool_) or right.dtype not in ( - object, - np.bool_, - ): - warnings.warn( - "Operation between non boolean Series with different " - "indexes will no longer return a boolean result in " - "a future version. Cast both Series to object type " - "to maintain the prior behavior.", - FutureWarning, - stacklevel=find_stack_level(), - ) - # to keep original value's dtype for bool ops - left = left.astype(object) - right = right.astype(object) - - left, right = left.align(right, copy=False) - - return left, right - - def _binop(self, other: Series, func, level=None, fill_value=None) -> Series: - """ - Perform generic binary operation with optional fill value. - - Parameters - ---------- - other : Series - func : binary operator - fill_value : float or object - Value to substitute for NA/null values. 
If both Series are NA in a - location, the result will be NA regardless of the passed fill value. - level : int or level name, default None - Broadcast across a level, matching Index values on the - passed MultiIndex level. - - Returns - ------- - Series - """ - this = self - - if not self.index.equals(other.index): - this, other = self.align(other, level=level, join="outer", copy=False) - - this_vals, other_vals = ops.fill_binop(this._values, other._values, fill_value) - - with np.errstate(all="ignore"): - result = func(this_vals, other_vals) - - name = ops.get_op_result_name(self, other) - out = this._construct_result(result, name) - return cast(Series, out) - - def _construct_result( - self, result: ArrayLike | tuple[ArrayLike, ArrayLike], name: Hashable - ) -> Series | tuple[Series, Series]: - """ - Construct an appropriately-labelled Series from the result of an op. - - Parameters - ---------- - result : ndarray or ExtensionArray - name : Label - - Returns - ------- - Series - In the case of __divmod__ or __rdivmod__, a 2-tuple of Series. - """ - if isinstance(result, tuple): - # produced by divmod or rdivmod - - res1 = self._construct_result(result[0], name=name) - res2 = self._construct_result(result[1], name=name) - - # GH#33427 assertions to keep mypy happy - assert isinstance(res1, Series) - assert isinstance(res2, Series) - return (res1, res2) - - # TODO: result should always be ArrayLike, but this fails for some - # JSONArray tests - dtype = getattr(result, "dtype", None) - out = self._constructor(result, index=self.index, dtype=dtype, copy=False) - out = out.__finalize__(self) - - # Set the result's name after __finalize__ is called because __finalize__ - # would set it back to self.name - out.name = name - return out - - def _flex_method(self, other, op, *, level=None, fill_value=None, axis: Axis = 0): - if axis is not None: - self._get_axis_number(axis) - - res_name = ops.get_op_result_name(self, other) - - if isinstance(other, Series): - return self._binop(other, op, level=level, fill_value=fill_value) - elif isinstance(other, (np.ndarray, list, tuple)): - if len(other) != len(self): - raise ValueError("Lengths must be equal") - other = self._constructor(other, self.index, copy=False) - result = self._binop(other, op, level=level, fill_value=fill_value) - result._name = res_name - return result - else: - if fill_value is not None: - self = self.fillna(fill_value) - - return op(self, other) - - @Appender(ops.make_flex_doc("eq", "series")) - def eq(self, other, level=None, fill_value=None, axis: Axis = 0): - return self._flex_method( - other, operator.eq, level=level, fill_value=fill_value, axis=axis - ) - - @Appender(ops.make_flex_doc("ne", "series")) - def ne(self, other, level=None, fill_value=None, axis: Axis = 0): - return self._flex_method( - other, operator.ne, level=level, fill_value=fill_value, axis=axis - ) - - @Appender(ops.make_flex_doc("le", "series")) - def le(self, other, level=None, fill_value=None, axis: Axis = 0): - return self._flex_method( - other, operator.le, level=level, fill_value=fill_value, axis=axis - ) - - @Appender(ops.make_flex_doc("lt", "series")) - def lt(self, other, level=None, fill_value=None, axis: Axis = 0): - return self._flex_method( - other, operator.lt, level=level, fill_value=fill_value, axis=axis - ) - - @Appender(ops.make_flex_doc("ge", "series")) - def ge(self, other, level=None, fill_value=None, axis: Axis = 0): - return self._flex_method( - other, operator.ge, level=level, fill_value=fill_value, axis=axis - ) - - 
@Appender(ops.make_flex_doc("gt", "series")) - def gt(self, other, level=None, fill_value=None, axis: Axis = 0): - return self._flex_method( - other, operator.gt, level=level, fill_value=fill_value, axis=axis - ) - - @Appender(ops.make_flex_doc("add", "series")) - def add(self, other, level=None, fill_value=None, axis: Axis = 0): - return self._flex_method( - other, operator.add, level=level, fill_value=fill_value, axis=axis - ) - - @Appender(ops.make_flex_doc("radd", "series")) - def radd(self, other, level=None, fill_value=None, axis: Axis = 0): - return self._flex_method( - other, roperator.radd, level=level, fill_value=fill_value, axis=axis - ) - - @Appender(ops.make_flex_doc("sub", "series")) - def sub(self, other, level=None, fill_value=None, axis: Axis = 0): - return self._flex_method( - other, operator.sub, level=level, fill_value=fill_value, axis=axis - ) - - subtract = sub - - @Appender(ops.make_flex_doc("rsub", "series")) - def rsub(self, other, level=None, fill_value=None, axis: Axis = 0): - return self._flex_method( - other, roperator.rsub, level=level, fill_value=fill_value, axis=axis - ) - - @Appender(ops.make_flex_doc("mul", "series")) - def mul( - self, - other, - level: Level | None = None, - fill_value: float | None = None, - axis: Axis = 0, - ): - return self._flex_method( - other, operator.mul, level=level, fill_value=fill_value, axis=axis - ) - - multiply = mul - - @Appender(ops.make_flex_doc("rmul", "series")) - def rmul(self, other, level=None, fill_value=None, axis: Axis = 0): - return self._flex_method( - other, roperator.rmul, level=level, fill_value=fill_value, axis=axis - ) - - @Appender(ops.make_flex_doc("truediv", "series")) - def truediv(self, other, level=None, fill_value=None, axis: Axis = 0): - return self._flex_method( - other, operator.truediv, level=level, fill_value=fill_value, axis=axis - ) - - div = truediv - divide = truediv - - @Appender(ops.make_flex_doc("rtruediv", "series")) - def rtruediv(self, other, level=None, fill_value=None, axis: Axis = 0): - return self._flex_method( - other, roperator.rtruediv, level=level, fill_value=fill_value, axis=axis - ) - - rdiv = rtruediv - - @Appender(ops.make_flex_doc("floordiv", "series")) - def floordiv(self, other, level=None, fill_value=None, axis: Axis = 0): - return self._flex_method( - other, operator.floordiv, level=level, fill_value=fill_value, axis=axis - ) - - @Appender(ops.make_flex_doc("rfloordiv", "series")) - def rfloordiv(self, other, level=None, fill_value=None, axis: Axis = 0): - return self._flex_method( - other, roperator.rfloordiv, level=level, fill_value=fill_value, axis=axis - ) - - @Appender(ops.make_flex_doc("mod", "series")) - def mod(self, other, level=None, fill_value=None, axis: Axis = 0): - return self._flex_method( - other, operator.mod, level=level, fill_value=fill_value, axis=axis - ) - - @Appender(ops.make_flex_doc("rmod", "series")) - def rmod(self, other, level=None, fill_value=None, axis: Axis = 0): - return self._flex_method( - other, roperator.rmod, level=level, fill_value=fill_value, axis=axis - ) - - @Appender(ops.make_flex_doc("pow", "series")) - def pow(self, other, level=None, fill_value=None, axis: Axis = 0): - return self._flex_method( - other, operator.pow, level=level, fill_value=fill_value, axis=axis - ) - - @Appender(ops.make_flex_doc("rpow", "series")) - def rpow(self, other, level=None, fill_value=None, axis: Axis = 0): - return self._flex_method( - other, roperator.rpow, level=level, fill_value=fill_value, axis=axis - ) - - 
@Appender(ops.make_flex_doc("divmod", "series")) - def divmod(self, other, level=None, fill_value=None, axis: Axis = 0): - return self._flex_method( - other, divmod, level=level, fill_value=fill_value, axis=axis - ) - - @Appender(ops.make_flex_doc("rdivmod", "series")) - def rdivmod(self, other, level=None, fill_value=None, axis: Axis = 0): - return self._flex_method( - other, roperator.rdivmod, level=level, fill_value=fill_value, axis=axis - ) - - # ---------------------------------------------------------------------- - # Reductions - - def _reduce( - self, - op, - # error: Variable "pandas.core.series.Series.str" is not valid as a type - name: str, # type: ignore[valid-type] - *, - axis: Axis = 0, - skipna: bool = True, - numeric_only: bool = False, - filter_type=None, - **kwds, - ): - """ - Perform a reduction operation. - - If we have an ndarray as a value, then simply perform the operation, - otherwise delegate to the object. - """ - delegate = self._values - - if axis is not None: - self._get_axis_number(axis) - - if isinstance(delegate, ExtensionArray): - # dispatch to ExtensionArray interface - return delegate._reduce(name, skipna=skipna, **kwds) - - else: - # dispatch to numpy arrays - if numeric_only and self.dtype.kind not in "iufcb": - # i.e. not is_numeric_dtype(self.dtype) - kwd_name = "numeric_only" - if name in ["any", "all"]: - kwd_name = "bool_only" - # GH#47500 - change to TypeError to match other methods - raise TypeError( - f"Series.{name} does not allow {kwd_name}={numeric_only} " - "with non-numeric dtypes." - ) - return op(delegate, skipna=skipna, **kwds) - - @Appender(make_doc("any", ndim=1)) - # error: Signature of "any" incompatible with supertype "NDFrame" - def any( # type: ignore[override] - self, - *, - axis: Axis = 0, - bool_only: bool = False, - skipna: bool = True, - **kwargs, - ) -> bool: - nv.validate_logical_func((), kwargs, fname="any") - validate_bool_kwarg(skipna, "skipna", none_allowed=False) - return self._reduce( - nanops.nanany, - name="any", - axis=axis, - numeric_only=bool_only, - skipna=skipna, - filter_type="bool", - ) - - @Appender(make_doc("all", ndim=1)) - def all( - self, - axis: Axis = 0, - bool_only: bool = False, - skipna: bool = True, - **kwargs, - ) -> bool: - nv.validate_logical_func((), kwargs, fname="all") - validate_bool_kwarg(skipna, "skipna", none_allowed=False) - return self._reduce( - nanops.nanall, - name="all", - axis=axis, - numeric_only=bool_only, - skipna=skipna, - filter_type="bool", - ) - - @doc(make_doc("min", ndim=1)) - def min( - self, - axis: Axis | None = 0, - skipna: bool = True, - numeric_only: bool = False, - **kwargs, - ): - return NDFrame.min(self, axis, skipna, numeric_only, **kwargs) - - @doc(make_doc("max", ndim=1)) - def max( - self, - axis: Axis | None = 0, - skipna: bool = True, - numeric_only: bool = False, - **kwargs, - ): - return NDFrame.max(self, axis, skipna, numeric_only, **kwargs) - - @doc(make_doc("sum", ndim=1)) - def sum( - self, - axis: Axis | None = None, - skipna: bool = True, - numeric_only: bool = False, - min_count: int = 0, - **kwargs, - ): - return NDFrame.sum(self, axis, skipna, numeric_only, min_count, **kwargs) - - @doc(make_doc("prod", ndim=1)) - def prod( - self, - axis: Axis | None = None, - skipna: bool = True, - numeric_only: bool = False, - min_count: int = 0, - **kwargs, - ): - return NDFrame.prod(self, axis, skipna, numeric_only, min_count, **kwargs) - - @doc(make_doc("mean", ndim=1)) - def mean( - self, - axis: Axis | None = 0, - skipna: bool = True, - numeric_only: bool 
= False, - **kwargs, - ): - return NDFrame.mean(self, axis, skipna, numeric_only, **kwargs) - - @doc(make_doc("median", ndim=1)) - def median( - self, - axis: Axis | None = 0, - skipna: bool = True, - numeric_only: bool = False, - **kwargs, - ): - return NDFrame.median(self, axis, skipna, numeric_only, **kwargs) - - @doc(make_doc("sem", ndim=1)) - def sem( - self, - axis: Axis | None = None, - skipna: bool = True, - ddof: int = 1, - numeric_only: bool = False, - **kwargs, - ): - return NDFrame.sem(self, axis, skipna, ddof, numeric_only, **kwargs) - - @doc(make_doc("var", ndim=1)) - def var( - self, - axis: Axis | None = None, - skipna: bool = True, - ddof: int = 1, - numeric_only: bool = False, - **kwargs, - ): - return NDFrame.var(self, axis, skipna, ddof, numeric_only, **kwargs) - - @doc(make_doc("std", ndim=1)) - def std( - self, - axis: Axis | None = None, - skipna: bool = True, - ddof: int = 1, - numeric_only: bool = False, - **kwargs, - ): - return NDFrame.std(self, axis, skipna, ddof, numeric_only, **kwargs) - - @doc(make_doc("skew", ndim=1)) - def skew( - self, - axis: Axis | None = 0, - skipna: bool = True, - numeric_only: bool = False, - **kwargs, - ): - return NDFrame.skew(self, axis, skipna, numeric_only, **kwargs) - - @doc(make_doc("kurt", ndim=1)) - def kurt( - self, - axis: Axis | None = 0, - skipna: bool = True, - numeric_only: bool = False, - **kwargs, - ): - return NDFrame.kurt(self, axis, skipna, numeric_only, **kwargs) - - kurtosis = kurt - product = prod - - @doc(make_doc("cummin", ndim=1)) - def cummin(self, axis: Axis | None = None, skipna: bool = True, *args, **kwargs): - return NDFrame.cummin(self, axis, skipna, *args, **kwargs) - - @doc(make_doc("cummax", ndim=1)) - def cummax(self, axis: Axis | None = None, skipna: bool = True, *args, **kwargs): - return NDFrame.cummax(self, axis, skipna, *args, **kwargs) - - @doc(make_doc("cumsum", ndim=1)) - def cumsum(self, axis: Axis | None = None, skipna: bool = True, *args, **kwargs): - return NDFrame.cumsum(self, axis, skipna, *args, **kwargs) - - @doc(make_doc("cumprod", 1)) - def cumprod(self, axis: Axis | None = None, skipna: bool = True, *args, **kwargs): - return NDFrame.cumprod(self, axis, skipna, *args, **kwargs) diff --git a/spaces/profayle/TerrapinTalk/myenv/lib/python3.9/site-packages/pandas/core/strings/base.py b/spaces/profayle/TerrapinTalk/myenv/lib/python3.9/site-packages/pandas/core/strings/base.py deleted file mode 100644 index 96b0352666b412cf36a7c9aecfc9ab42628e29df..0000000000000000000000000000000000000000 --- a/spaces/profayle/TerrapinTalk/myenv/lib/python3.9/site-packages/pandas/core/strings/base.py +++ /dev/null @@ -1,262 +0,0 @@ -from __future__ import annotations - -import abc -from typing import ( - TYPE_CHECKING, - Callable, - Literal, -) - -import numpy as np - -if TYPE_CHECKING: - from collections.abc import Sequence - import re - - from pandas._typing import Scalar - - from pandas import Series - - -class BaseStringArrayMethods(abc.ABC): - """ - Base class for extension arrays implementing string methods. - - This is where our ExtensionArrays can override the implementation of - Series.str.. We don't expect this to work with - 3rd-party extension arrays. - - * User calls Series.str. - * pandas extracts the extension array from the Series - * pandas calls ``extension_array._str_(*args, **kwargs)`` - * pandas wraps the result, to return to the user. - - See :ref:`Series.str` for the docstring of each method. 
- """ - - def _str_getitem(self, key): - if isinstance(key, slice): - return self._str_slice(start=key.start, stop=key.stop, step=key.step) - else: - return self._str_get(key) - - @abc.abstractmethod - def _str_count(self, pat, flags: int = 0): - pass - - @abc.abstractmethod - def _str_pad( - self, - width: int, - side: Literal["left", "right", "both"] = "left", - fillchar: str = " ", - ): - pass - - @abc.abstractmethod - def _str_contains( - self, pat, case: bool = True, flags: int = 0, na=None, regex: bool = True - ): - pass - - @abc.abstractmethod - def _str_startswith(self, pat, na=None): - pass - - @abc.abstractmethod - def _str_endswith(self, pat, na=None): - pass - - @abc.abstractmethod - def _str_replace( - self, - pat: str | re.Pattern, - repl: str | Callable, - n: int = -1, - case: bool = True, - flags: int = 0, - regex: bool = True, - ): - pass - - @abc.abstractmethod - def _str_repeat(self, repeats: int | Sequence[int]): - pass - - @abc.abstractmethod - def _str_match( - self, pat: str, case: bool = True, flags: int = 0, na: Scalar = np.nan - ): - pass - - @abc.abstractmethod - def _str_fullmatch( - self, - pat: str | re.Pattern, - case: bool = True, - flags: int = 0, - na: Scalar = np.nan, - ): - pass - - @abc.abstractmethod - def _str_encode(self, encoding, errors: str = "strict"): - pass - - @abc.abstractmethod - def _str_find(self, sub, start: int = 0, end=None): - pass - - @abc.abstractmethod - def _str_rfind(self, sub, start: int = 0, end=None): - pass - - @abc.abstractmethod - def _str_findall(self, pat, flags: int = 0): - pass - - @abc.abstractmethod - def _str_get(self, i): - pass - - @abc.abstractmethod - def _str_index(self, sub, start: int = 0, end=None): - pass - - @abc.abstractmethod - def _str_rindex(self, sub, start: int = 0, end=None): - pass - - @abc.abstractmethod - def _str_join(self, sep: str): - pass - - @abc.abstractmethod - def _str_partition(self, sep: str, expand): - pass - - @abc.abstractmethod - def _str_rpartition(self, sep: str, expand): - pass - - @abc.abstractmethod - def _str_len(self): - pass - - @abc.abstractmethod - def _str_slice(self, start=None, stop=None, step=None): - pass - - @abc.abstractmethod - def _str_slice_replace(self, start=None, stop=None, repl=None): - pass - - @abc.abstractmethod - def _str_translate(self, table): - pass - - @abc.abstractmethod - def _str_wrap(self, width: int, **kwargs): - pass - - @abc.abstractmethod - def _str_get_dummies(self, sep: str = "|"): - pass - - @abc.abstractmethod - def _str_isalnum(self): - pass - - @abc.abstractmethod - def _str_isalpha(self): - pass - - @abc.abstractmethod - def _str_isdecimal(self): - pass - - @abc.abstractmethod - def _str_isdigit(self): - pass - - @abc.abstractmethod - def _str_islower(self): - pass - - @abc.abstractmethod - def _str_isnumeric(self): - pass - - @abc.abstractmethod - def _str_isspace(self): - pass - - @abc.abstractmethod - def _str_istitle(self): - pass - - @abc.abstractmethod - def _str_isupper(self): - pass - - @abc.abstractmethod - def _str_capitalize(self): - pass - - @abc.abstractmethod - def _str_casefold(self): - pass - - @abc.abstractmethod - def _str_title(self): - pass - - @abc.abstractmethod - def _str_swapcase(self): - pass - - @abc.abstractmethod - def _str_lower(self): - pass - - @abc.abstractmethod - def _str_upper(self): - pass - - @abc.abstractmethod - def _str_normalize(self, form): - pass - - @abc.abstractmethod - def _str_strip(self, to_strip=None): - pass - - @abc.abstractmethod - def _str_lstrip(self, to_strip=None): - pass - - 
@abc.abstractmethod - def _str_rstrip(self, to_strip=None): - pass - - @abc.abstractmethod - def _str_removeprefix(self, prefix: str) -> Series: - pass - - @abc.abstractmethod - def _str_removesuffix(self, suffix: str) -> Series: - pass - - @abc.abstractmethod - def _str_split( - self, pat=None, n=-1, expand: bool = False, regex: bool | None = None - ): - pass - - @abc.abstractmethod - def _str_rsplit(self, pat=None, n=-1): - pass - - @abc.abstractmethod - def _str_extract(self, pat: str, flags: int = 0, expand: bool = True): - pass diff --git a/spaces/profayle/TerrapinTalk/myenv/lib/python3.9/site-packages/pandas/io/formats/excel.py b/spaces/profayle/TerrapinTalk/myenv/lib/python3.9/site-packages/pandas/io/formats/excel.py deleted file mode 100644 index 9970d465ced9d4c5eb3f0cd8bbd57d452171e14a..0000000000000000000000000000000000000000 --- a/spaces/profayle/TerrapinTalk/myenv/lib/python3.9/site-packages/pandas/io/formats/excel.py +++ /dev/null @@ -1,965 +0,0 @@ -""" -Utilities for conversion to writer-agnostic Excel representation. -""" -from __future__ import annotations - -from collections.abc import ( - Hashable, - Iterable, - Mapping, - Sequence, -) -import functools -import itertools -import re -from typing import ( - TYPE_CHECKING, - Any, - Callable, - cast, -) -import warnings - -import numpy as np - -from pandas._libs.lib import is_list_like -from pandas.util._decorators import doc -from pandas.util._exceptions import find_stack_level - -from pandas.core.dtypes import missing -from pandas.core.dtypes.common import ( - is_float, - is_scalar, -) - -from pandas import ( - DataFrame, - Index, - MultiIndex, - PeriodIndex, -) -import pandas.core.common as com -from pandas.core.shared_docs import _shared_docs - -from pandas.io.formats._color_data import CSS4_COLORS -from pandas.io.formats.css import ( - CSSResolver, - CSSWarning, -) -from pandas.io.formats.format import get_level_lengths -from pandas.io.formats.printing import pprint_thing - -if TYPE_CHECKING: - from pandas._typing import ( - FilePath, - IndexLabel, - StorageOptions, - WriteExcelBuffer, - ) - - from pandas import ExcelWriter - - -class ExcelCell: - __fields__ = ("row", "col", "val", "style", "mergestart", "mergeend") - __slots__ = __fields__ - - def __init__( - self, - row: int, - col: int, - val, - style=None, - mergestart: int | None = None, - mergeend: int | None = None, - ) -> None: - self.row = row - self.col = col - self.val = val - self.style = style - self.mergestart = mergestart - self.mergeend = mergeend - - -class CssExcelCell(ExcelCell): - def __init__( - self, - row: int, - col: int, - val, - style: dict | None, - css_styles: dict[tuple[int, int], list[tuple[str, Any]]] | None, - css_row: int, - css_col: int, - css_converter: Callable | None, - **kwargs, - ) -> None: - if css_styles and css_converter: - # Use dict to get only one (case-insensitive) declaration per property - declaration_dict = { - prop.lower(): val for prop, val in css_styles[css_row, css_col] - } - # Convert to frozenset for order-invariant caching - unique_declarations = frozenset(declaration_dict.items()) - style = css_converter(unique_declarations) - - super().__init__(row=row, col=col, val=val, style=style, **kwargs) - - -class CSSToExcelConverter: - """ - A callable for converting CSS declarations to ExcelWriter styles - - Supports parts of CSS 2.2, with minimal CSS 3.0 support (e.g. text-shadow), - focusing on font styling, backgrounds, borders and alignment. 
- - Operates by first computing CSS styles in a fairly generic - way (see :meth:`compute_css`) then determining Excel style - properties from CSS properties (see :meth:`build_xlstyle`). - - Parameters - ---------- - inherited : str, optional - CSS declarations understood to be the containing scope for the - CSS processed by :meth:`__call__`. - """ - - NAMED_COLORS = CSS4_COLORS - - VERTICAL_MAP = { - "top": "top", - "text-top": "top", - "middle": "center", - "baseline": "bottom", - "bottom": "bottom", - "text-bottom": "bottom", - # OpenXML also has 'justify', 'distributed' - } - - BOLD_MAP = { - "bold": True, - "bolder": True, - "600": True, - "700": True, - "800": True, - "900": True, - "normal": False, - "lighter": False, - "100": False, - "200": False, - "300": False, - "400": False, - "500": False, - } - - ITALIC_MAP = { - "normal": False, - "italic": True, - "oblique": True, - } - - FAMILY_MAP = { - "serif": 1, # roman - "sans-serif": 2, # swiss - "cursive": 4, # script - "fantasy": 5, # decorative - } - - BORDER_STYLE_MAP = { - style.lower(): style - for style in [ - "dashed", - "mediumDashDot", - "dashDotDot", - "hair", - "dotted", - "mediumDashDotDot", - "double", - "dashDot", - "slantDashDot", - "mediumDashed", - ] - } - - # NB: Most of the methods here could be classmethods, as only __init__ - # and __call__ make use of instance attributes. We leave them as - # instancemethods so that users can easily experiment with extensions - # without monkey-patching. - inherited: dict[str, str] | None - - def __init__(self, inherited: str | None = None) -> None: - if inherited is not None: - self.inherited = self.compute_css(inherited) - else: - self.inherited = None - # We should avoid cache on the __call__ method. - # Otherwise once the method __call__ has been called - # garbage collection no longer deletes the instance. - self._call_cached = functools.cache(self._call_uncached) - - compute_css = CSSResolver() - - def __call__( - self, declarations: str | frozenset[tuple[str, str]] - ) -> dict[str, dict[str, str]]: - """ - Convert CSS declarations to ExcelWriter style. - - Parameters - ---------- - declarations : str | frozenset[tuple[str, str]] - CSS string or set of CSS declaration tuples. - e.g. "font-weight: bold; background: blue" or - {("font-weight", "bold"), ("background", "blue")} - - Returns - ------- - xlstyle : dict - A style as interpreted by ExcelWriter when found in - ExcelCell.style. 
- """ - return self._call_cached(declarations) - - def _call_uncached( - self, declarations: str | frozenset[tuple[str, str]] - ) -> dict[str, dict[str, str]]: - properties = self.compute_css(declarations, self.inherited) - return self.build_xlstyle(properties) - - def build_xlstyle(self, props: Mapping[str, str]) -> dict[str, dict[str, str]]: - out = { - "alignment": self.build_alignment(props), - "border": self.build_border(props), - "fill": self.build_fill(props), - "font": self.build_font(props), - "number_format": self.build_number_format(props), - } - - # TODO: handle cell width and height: needs support in pandas.io.excel - - def remove_none(d: dict[str, str | None]) -> None: - """Remove key where value is None, through nested dicts""" - for k, v in list(d.items()): - if v is None: - del d[k] - elif isinstance(v, dict): - remove_none(v) - if not v: - del d[k] - - remove_none(out) - return out - - def build_alignment(self, props: Mapping[str, str]) -> dict[str, bool | str | None]: - # TODO: text-indent, padding-left -> alignment.indent - return { - "horizontal": props.get("text-align"), - "vertical": self._get_vertical_alignment(props), - "wrap_text": self._get_is_wrap_text(props), - } - - def _get_vertical_alignment(self, props: Mapping[str, str]) -> str | None: - vertical_align = props.get("vertical-align") - if vertical_align: - return self.VERTICAL_MAP.get(vertical_align) - return None - - def _get_is_wrap_text(self, props: Mapping[str, str]) -> bool | None: - if props.get("white-space") is None: - return None - return bool(props["white-space"] not in ("nowrap", "pre", "pre-line")) - - def build_border( - self, props: Mapping[str, str] - ) -> dict[str, dict[str, str | None]]: - return { - side: { - "style": self._border_style( - props.get(f"border-{side}-style"), - props.get(f"border-{side}-width"), - self.color_to_excel(props.get(f"border-{side}-color")), - ), - "color": self.color_to_excel(props.get(f"border-{side}-color")), - } - for side in ["top", "right", "bottom", "left"] - } - - def _border_style(self, style: str | None, width: str | None, color: str | None): - # convert styles and widths to openxml, one of: - # 'dashDot' - # 'dashDotDot' - # 'dashed' - # 'dotted' - # 'double' - # 'hair' - # 'medium' - # 'mediumDashDot' - # 'mediumDashDotDot' - # 'mediumDashed' - # 'slantDashDot' - # 'thick' - # 'thin' - if width is None and style is None and color is None: - # Return None will remove "border" from style dictionary - return None - - if width is None and style is None: - # Return "none" will keep "border" in style dictionary - return "none" - - if style in ("none", "hidden"): - return "none" - - width_name = self._get_width_name(width) - if width_name is None: - return "none" - - if style in (None, "groove", "ridge", "inset", "outset", "solid"): - # not handled - return width_name - - if style == "double": - return "double" - if style == "dotted": - if width_name in ("hair", "thin"): - return "dotted" - return "mediumDashDotDot" - if style == "dashed": - if width_name in ("hair", "thin"): - return "dashed" - return "mediumDashed" - elif style in self.BORDER_STYLE_MAP: - # Excel-specific styles - return self.BORDER_STYLE_MAP[style] - else: - warnings.warn( - f"Unhandled border style format: {repr(style)}", - CSSWarning, - stacklevel=find_stack_level(), - ) - return "none" - - def _get_width_name(self, width_input: str | None) -> str | None: - width = self._width_to_float(width_input) - if width < 1e-5: - return None - elif width < 1.3: - return "thin" - elif width < 2.8: - 
return "medium" - return "thick" - - def _width_to_float(self, width: str | None) -> float: - if width is None: - width = "2pt" - return self._pt_to_float(width) - - def _pt_to_float(self, pt_string: str) -> float: - assert pt_string.endswith("pt") - return float(pt_string.rstrip("pt")) - - def build_fill(self, props: Mapping[str, str]): - # TODO: perhaps allow for special properties - # -excel-pattern-bgcolor and -excel-pattern-type - fill_color = props.get("background-color") - if fill_color not in (None, "transparent", "none"): - return {"fgColor": self.color_to_excel(fill_color), "patternType": "solid"} - - def build_number_format(self, props: Mapping[str, str]) -> dict[str, str | None]: - fc = props.get("number-format") - fc = fc.replace("§", ";") if isinstance(fc, str) else fc - return {"format_code": fc} - - def build_font( - self, props: Mapping[str, str] - ) -> dict[str, bool | float | str | None]: - font_names = self._get_font_names(props) - decoration = self._get_decoration(props) - return { - "name": font_names[0] if font_names else None, - "family": self._select_font_family(font_names), - "size": self._get_font_size(props), - "bold": self._get_is_bold(props), - "italic": self._get_is_italic(props), - "underline": ("single" if "underline" in decoration else None), - "strike": ("line-through" in decoration) or None, - "color": self.color_to_excel(props.get("color")), - # shadow if nonzero digit before shadow color - "shadow": self._get_shadow(props), - } - - def _get_is_bold(self, props: Mapping[str, str]) -> bool | None: - weight = props.get("font-weight") - if weight: - return self.BOLD_MAP.get(weight) - return None - - def _get_is_italic(self, props: Mapping[str, str]) -> bool | None: - font_style = props.get("font-style") - if font_style: - return self.ITALIC_MAP.get(font_style) - return None - - def _get_decoration(self, props: Mapping[str, str]) -> Sequence[str]: - decoration = props.get("text-decoration") - if decoration is not None: - return decoration.split() - else: - return () - - def _get_underline(self, decoration: Sequence[str]) -> str | None: - if "underline" in decoration: - return "single" - return None - - def _get_shadow(self, props: Mapping[str, str]) -> bool | None: - if "text-shadow" in props: - return bool(re.search("^[^#(]*[1-9]", props["text-shadow"])) - return None - - def _get_font_names(self, props: Mapping[str, str]) -> Sequence[str]: - font_names_tmp = re.findall( - r"""(?x) - ( - "(?:[^"]|\\")+" - | - '(?:[^']|\\')+' - | - [^'",]+ - )(?=,|\s*$) - """, - props.get("font-family", ""), - ) - - font_names = [] - for name in font_names_tmp: - if name[:1] == '"': - name = name[1:-1].replace('\\"', '"') - elif name[:1] == "'": - name = name[1:-1].replace("\\'", "'") - else: - name = name.strip() - if name: - font_names.append(name) - return font_names - - def _get_font_size(self, props: Mapping[str, str]) -> float | None: - size = props.get("font-size") - if size is None: - return size - return self._pt_to_float(size) - - def _select_font_family(self, font_names: Sequence[str]) -> int | None: - family = None - for name in font_names: - family = self.FAMILY_MAP.get(name) - if family: - break - - return family - - def color_to_excel(self, val: str | None) -> str | None: - if val is None: - return None - - if self._is_hex_color(val): - return self._convert_hex_to_excel(val) - - try: - return self.NAMED_COLORS[val] - except KeyError: - warnings.warn( - f"Unhandled color format: {repr(val)}", - CSSWarning, - stacklevel=find_stack_level(), - ) - return None - - 
def _is_hex_color(self, color_string: str) -> bool: - return bool(color_string.startswith("#")) - - def _convert_hex_to_excel(self, color_string: str) -> str: - code = color_string.lstrip("#") - if self._is_shorthand_color(color_string): - return (code[0] * 2 + code[1] * 2 + code[2] * 2).upper() - else: - return code.upper() - - def _is_shorthand_color(self, color_string: str) -> bool: - """Check if color code is shorthand. - - #FFF is a shorthand as opposed to full #FFFFFF. - """ - code = color_string.lstrip("#") - if len(code) == 3: - return True - elif len(code) == 6: - return False - else: - raise ValueError(f"Unexpected color {color_string}") - - -class ExcelFormatter: - """ - Class for formatting a DataFrame to a list of ExcelCells, - - Parameters - ---------- - df : DataFrame or Styler - na_rep: na representation - float_format : str, default None - Format string for floating point numbers - cols : sequence, optional - Columns to write - header : bool or sequence of str, default True - Write out column names. If a list of string is given it is - assumed to be aliases for the column names - index : bool, default True - output row names (index) - index_label : str or sequence, default None - Column label for index column(s) if desired. If None is given, and - `header` and `index` are True, then the index names are used. A - sequence should be given if the DataFrame uses MultiIndex. - merge_cells : bool, default False - Format MultiIndex and Hierarchical Rows as merged cells. - inf_rep : str, default `'inf'` - representation for np.inf values (which aren't representable in Excel) - A `'-'` sign will be added in front of -inf. - style_converter : callable, optional - This translates Styler styles (CSS) into ExcelWriter styles. - Defaults to ``CSSToExcelConverter()``. - It should have signature css_declarations string -> excel style. - This is only called for body cells. 
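-
-    Notes
-    -----
-    (Editor's sketch) A formatter is typically driven end to end via
-    ``ExcelFormatter(df).write("out.xlsx")``, which converts the frame to
-    ``ExcelCell`` objects and hands them to an ``ExcelWriter``.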
- """ - - max_rows = 2**20 - max_cols = 2**14 - - def __init__( - self, - df, - na_rep: str = "", - float_format: str | None = None, - cols: Sequence[Hashable] | None = None, - header: Sequence[Hashable] | bool = True, - index: bool = True, - index_label: IndexLabel | None = None, - merge_cells: bool = False, - inf_rep: str = "inf", - style_converter: Callable | None = None, - ) -> None: - self.rowcounter = 0 - self.na_rep = na_rep - if not isinstance(df, DataFrame): - self.styler = df - self.styler._compute() # calculate applied styles - df = df.data - if style_converter is None: - style_converter = CSSToExcelConverter() - self.style_converter: Callable | None = style_converter - else: - self.styler = None - self.style_converter = None - self.df = df - if cols is not None: - # all missing, raise - if not len(Index(cols).intersection(df.columns)): - raise KeyError("passes columns are not ALL present dataframe") - - if len(Index(cols).intersection(df.columns)) != len(set(cols)): - # Deprecated in GH#17295, enforced in 1.0.0 - raise KeyError("Not all names specified in 'columns' are found") - - self.df = df.reindex(columns=cols) - - self.columns = self.df.columns - self.float_format = float_format - self.index = index - self.index_label = index_label - self.header = header - self.merge_cells = merge_cells - self.inf_rep = inf_rep - - @property - def header_style(self) -> dict[str, dict[str, str | bool]]: - return { - "font": {"bold": True}, - "borders": { - "top": "thin", - "right": "thin", - "bottom": "thin", - "left": "thin", - }, - "alignment": {"horizontal": "center", "vertical": "top"}, - } - - def _format_value(self, val): - if is_scalar(val) and missing.isna(val): - val = self.na_rep - elif is_float(val): - if missing.isposinf_scalar(val): - val = self.inf_rep - elif missing.isneginf_scalar(val): - val = f"-{self.inf_rep}" - elif self.float_format is not None: - val = float(self.float_format % val) - if getattr(val, "tzinfo", None) is not None: - raise ValueError( - "Excel does not support datetimes with " - "timezones. Please ensure that datetimes " - "are timezone unaware before writing to Excel." - ) - return val - - def _format_header_mi(self) -> Iterable[ExcelCell]: - if self.columns.nlevels > 1: - if not self.index: - raise NotImplementedError( - "Writing to Excel with MultiIndex columns and no " - "index ('index'=False) is not yet implemented." - ) - - if not (self._has_aliases or self.header): - return - - columns = self.columns - level_strs = columns.format( - sparsify=self.merge_cells, adjoin=False, names=False - ) - level_lengths = get_level_lengths(level_strs) - coloffset = 0 - lnum = 0 - - if self.index and isinstance(self.df.index, MultiIndex): - coloffset = len(self.df.index[0]) - 1 - - if self.merge_cells: - # Format multi-index as a merged cells. 
- for lnum, name in enumerate(columns.names): - yield ExcelCell( - row=lnum, - col=coloffset, - val=name, - style=self.header_style, - ) - - for lnum, (spans, levels, level_codes) in enumerate( - zip(level_lengths, columns.levels, columns.codes) - ): - values = levels.take(level_codes) - for i, span_val in spans.items(): - mergestart, mergeend = None, None - if span_val > 1: - mergestart, mergeend = lnum, coloffset + i + span_val - yield CssExcelCell( - row=lnum, - col=coloffset + i + 1, - val=values[i], - style=self.header_style, - css_styles=getattr(self.styler, "ctx_columns", None), - css_row=lnum, - css_col=i, - css_converter=self.style_converter, - mergestart=mergestart, - mergeend=mergeend, - ) - else: - # Format in legacy format with dots to indicate levels. - for i, values in enumerate(zip(*level_strs)): - v = ".".join(map(pprint_thing, values)) - yield CssExcelCell( - row=lnum, - col=coloffset + i + 1, - val=v, - style=self.header_style, - css_styles=getattr(self.styler, "ctx_columns", None), - css_row=lnum, - css_col=i, - css_converter=self.style_converter, - ) - - self.rowcounter = lnum - - def _format_header_regular(self) -> Iterable[ExcelCell]: - if self._has_aliases or self.header: - coloffset = 0 - - if self.index: - coloffset = 1 - if isinstance(self.df.index, MultiIndex): - coloffset = len(self.df.index.names) - - colnames = self.columns - if self._has_aliases: - self.header = cast(Sequence, self.header) - if len(self.header) != len(self.columns): - raise ValueError( - f"Writing {len(self.columns)} cols " - f"but got {len(self.header)} aliases" - ) - colnames = self.header - - for colindex, colname in enumerate(colnames): - yield CssExcelCell( - row=self.rowcounter, - col=colindex + coloffset, - val=colname, - style=self.header_style, - css_styles=getattr(self.styler, "ctx_columns", None), - css_row=0, - css_col=colindex, - css_converter=self.style_converter, - ) - - def _format_header(self) -> Iterable[ExcelCell]: - gen: Iterable[ExcelCell] - - if isinstance(self.columns, MultiIndex): - gen = self._format_header_mi() - else: - gen = self._format_header_regular() - - gen2: Iterable[ExcelCell] = () - - if self.df.index.names: - row = [x if x is not None else "" for x in self.df.index.names] + [ - "" - ] * len(self.columns) - if functools.reduce(lambda x, y: x and y, (x != "" for x in row)): - gen2 = ( - ExcelCell(self.rowcounter, colindex, val, self.header_style) - for colindex, val in enumerate(row) - ) - self.rowcounter += 1 - return itertools.chain(gen, gen2) - - def _format_body(self) -> Iterable[ExcelCell]: - if isinstance(self.df.index, MultiIndex): - return self._format_hierarchical_rows() - else: - return self._format_regular_rows() - - def _format_regular_rows(self) -> Iterable[ExcelCell]: - if self._has_aliases or self.header: - self.rowcounter += 1 - - # output index and index_label? 
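-        # Editor's note: a list-like index_label is meaningful per level; on
-        # this flat-index path only its first entry is used below.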
- if self.index: - # check aliases - # if list only take first as this is not a MultiIndex - if self.index_label and isinstance( - self.index_label, (list, tuple, np.ndarray, Index) - ): - index_label = self.index_label[0] - # if string good to go - elif self.index_label and isinstance(self.index_label, str): - index_label = self.index_label - else: - index_label = self.df.index.names[0] - - if isinstance(self.columns, MultiIndex): - self.rowcounter += 1 - - if index_label and self.header is not False: - yield ExcelCell(self.rowcounter - 1, 0, index_label, self.header_style) - - # write index_values - index_values = self.df.index - if isinstance(self.df.index, PeriodIndex): - index_values = self.df.index.to_timestamp() - - for idx, idxval in enumerate(index_values): - yield CssExcelCell( - row=self.rowcounter + idx, - col=0, - val=idxval, - style=self.header_style, - css_styles=getattr(self.styler, "ctx_index", None), - css_row=idx, - css_col=0, - css_converter=self.style_converter, - ) - coloffset = 1 - else: - coloffset = 0 - - yield from self._generate_body(coloffset) - - def _format_hierarchical_rows(self) -> Iterable[ExcelCell]: - if self._has_aliases or self.header: - self.rowcounter += 1 - - gcolidx = 0 - - if self.index: - index_labels = self.df.index.names - # check for aliases - if self.index_label and isinstance( - self.index_label, (list, tuple, np.ndarray, Index) - ): - index_labels = self.index_label - - # MultiIndex columns require an extra row - # with index names (blank if None) for - # unambiguous round-trip, unless not merging, - # in which case the names all go on one row Issue #11328 - if isinstance(self.columns, MultiIndex) and self.merge_cells: - self.rowcounter += 1 - - # if index labels are not empty go ahead and dump - if com.any_not_none(*index_labels) and self.header is not False: - for cidx, name in enumerate(index_labels): - yield ExcelCell(self.rowcounter - 1, cidx, name, self.header_style) - - if self.merge_cells: - # Format hierarchical rows as merged cells. - level_strs = self.df.index.format( - sparsify=True, adjoin=False, names=False - ) - level_lengths = get_level_lengths(level_strs) - - for spans, levels, level_codes in zip( - level_lengths, self.df.index.levels, self.df.index.codes - ): - values = levels.take( - level_codes, - allow_fill=levels._can_hold_na, - fill_value=levels._na_value, - ) - - for i, span_val in spans.items(): - mergestart, mergeend = None, None - if span_val > 1: - mergestart = self.rowcounter + i + span_val - 1 - mergeend = gcolidx - yield CssExcelCell( - row=self.rowcounter + i, - col=gcolidx, - val=values[i], - style=self.header_style, - css_styles=getattr(self.styler, "ctx_index", None), - css_row=i, - css_col=gcolidx, - css_converter=self.style_converter, - mergestart=mergestart, - mergeend=mergeend, - ) - gcolidx += 1 - - else: - # Format hierarchical rows with non-merged values. 
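-            # Editor's note: without merged cells, index labels are written
-            # one level (column) at a time, repeating each label in every row
-            # rather than spanning rows.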
- for indexcolvals in zip(*self.df.index): - for idx, indexcolval in enumerate(indexcolvals): - yield CssExcelCell( - row=self.rowcounter + idx, - col=gcolidx, - val=indexcolval, - style=self.header_style, - css_styles=getattr(self.styler, "ctx_index", None), - css_row=idx, - css_col=gcolidx, - css_converter=self.style_converter, - ) - gcolidx += 1 - - yield from self._generate_body(gcolidx) - - @property - def _has_aliases(self) -> bool: - """Whether the aliases for column names are present.""" - return is_list_like(self.header) - - def _generate_body(self, coloffset: int) -> Iterable[ExcelCell]: - # Write the body of the frame data series by series. - for colidx in range(len(self.columns)): - series = self.df.iloc[:, colidx] - for i, val in enumerate(series): - yield CssExcelCell( - row=self.rowcounter + i, - col=colidx + coloffset, - val=val, - style=None, - css_styles=getattr(self.styler, "ctx", None), - css_row=i, - css_col=colidx, - css_converter=self.style_converter, - ) - - def get_formatted_cells(self) -> Iterable[ExcelCell]: - for cell in itertools.chain(self._format_header(), self._format_body()): - cell.val = self._format_value(cell.val) - yield cell - - @doc(storage_options=_shared_docs["storage_options"]) - def write( - self, - writer: FilePath | WriteExcelBuffer | ExcelWriter, - sheet_name: str = "Sheet1", - startrow: int = 0, - startcol: int = 0, - freeze_panes: tuple[int, int] | None = None, - engine: str | None = None, - storage_options: StorageOptions | None = None, - engine_kwargs: dict | None = None, - ) -> None: - """ - writer : path-like, file-like, or ExcelWriter object - File path or existing ExcelWriter - sheet_name : str, default 'Sheet1' - Name of sheet which will contain DataFrame - startrow : - upper left cell row to dump data frame - startcol : - upper left cell column to dump data frame - freeze_panes : tuple of integer (length 2), default None - Specifies the one-based bottommost row and rightmost column that - is to be frozen - engine : string, default None - write engine to use if writer is a path - you can also set this - via the options ``io.excel.xlsx.writer``, - or ``io.excel.xlsm.writer``. - - {storage_options} - - .. versionadded:: 1.2.0 - engine_kwargs: dict, optional - Arbitrary keyword arguments passed to excel engine. - """ - from pandas.io.excel import ExcelWriter - - num_rows, num_cols = self.df.shape - if num_rows > self.max_rows or num_cols > self.max_cols: - raise ValueError( - f"This sheet is too large! 
Your sheet size is: {num_rows}, {num_cols} " - f"Max sheet size is: {self.max_rows}, {self.max_cols}" - ) - - if engine_kwargs is None: - engine_kwargs = {} - - formatted_cells = self.get_formatted_cells() - if isinstance(writer, ExcelWriter): - need_save = False - else: - # error: Cannot instantiate abstract class 'ExcelWriter' with abstract - # attributes 'engine', 'save', 'supported_extensions' and 'write_cells' - writer = ExcelWriter( # type: ignore[abstract] - writer, - engine=engine, - storage_options=storage_options, - engine_kwargs=engine_kwargs, - ) - need_save = True - - try: - writer._write_cells( - formatted_cells, - sheet_name, - startrow=startrow, - startcol=startcol, - freeze_panes=freeze_panes, - ) - finally: - # make sure to close opened file handles - if need_save: - writer.close() diff --git a/spaces/profayle/TerrapinTalk/myenv/lib/python3.9/site-packages/pandas/tests/base/common.py b/spaces/profayle/TerrapinTalk/myenv/lib/python3.9/site-packages/pandas/tests/base/common.py deleted file mode 100644 index ad0b394105742ca5de92a03a3da2c569c38da469..0000000000000000000000000000000000000000 --- a/spaces/profayle/TerrapinTalk/myenv/lib/python3.9/site-packages/pandas/tests/base/common.py +++ /dev/null @@ -1,9 +0,0 @@ -from typing import Any - -from pandas import Index - - -def allow_na_ops(obj: Any) -> bool: - """Whether to skip test cases including NaN""" - is_bool_index = isinstance(obj, Index) and obj.inferred_type == "boolean" - return not is_bool_index and obj._can_hold_na diff --git a/spaces/profayle/TerrapinTalk/myenv/lib/python3.9/site-packages/pandas/tests/frame/methods/test_reindex.py b/spaces/profayle/TerrapinTalk/myenv/lib/python3.9/site-packages/pandas/tests/frame/methods/test_reindex.py deleted file mode 100644 index 0858e33a989b78e2cf419c5a1004a1e9ec55f024..0000000000000000000000000000000000000000 --- a/spaces/profayle/TerrapinTalk/myenv/lib/python3.9/site-packages/pandas/tests/frame/methods/test_reindex.py +++ /dev/null @@ -1,1314 +0,0 @@ -from datetime import ( - datetime, - timedelta, -) -import inspect - -import numpy as np -import pytest - -from pandas._libs.tslibs.timezones import dateutil_gettz as gettz -from pandas.compat import ( - IS64, - is_platform_windows, -) -import pandas.util._test_decorators as td - -import pandas as pd -from pandas import ( - Categorical, - CategoricalIndex, - DataFrame, - Index, - MultiIndex, - Series, - date_range, - isna, -) -import pandas._testing as tm -from pandas.api.types import CategoricalDtype as CDT - - -class TestReindexSetIndex: - # Tests that check both reindex and set_index - - def test_dti_set_index_reindex_datetimeindex(self): - # GH#6631 - df = DataFrame(np.random.default_rng(2).random(6)) - idx1 = date_range("2011/01/01", periods=6, freq="M", tz="US/Eastern") - idx2 = date_range("2013", periods=6, freq="A", tz="Asia/Tokyo") - - df = df.set_index(idx1) - tm.assert_index_equal(df.index, idx1) - df = df.reindex(idx2) - tm.assert_index_equal(df.index, idx2) - - def test_dti_set_index_reindex_freq_with_tz(self): - # GH#11314 with tz - index = date_range( - datetime(2015, 10, 1), datetime(2015, 10, 1, 23), freq="H", tz="US/Eastern" - ) - df = DataFrame( - np.random.default_rng(2).standard_normal((24, 1)), - columns=["a"], - index=index, - ) - new_index = date_range( - datetime(2015, 10, 2), datetime(2015, 10, 2, 23), freq="H", tz="US/Eastern" - ) - - result = df.set_index(new_index) - assert result.index.freq == index.freq - - def test_set_reset_index_intervalindex(self): - df = DataFrame({"A": range(10)}) - ser = 
pd.cut(df.A, 5) - df["B"] = ser - df = df.set_index("B") - - df = df.reset_index() - - def test_setitem_reset_index_dtypes(self): - # GH 22060 - df = DataFrame(columns=["a", "b", "c"]).astype( - {"a": "datetime64[ns]", "b": np.int64, "c": np.float64} - ) - df1 = df.set_index(["a"]) - df1["d"] = [] - result = df1.reset_index() - expected = DataFrame(columns=["a", "b", "c", "d"], index=range(0)).astype( - {"a": "datetime64[ns]", "b": np.int64, "c": np.float64, "d": np.float64} - ) - tm.assert_frame_equal(result, expected) - - df2 = df.set_index(["a", "b"]) - df2["d"] = [] - result = df2.reset_index() - tm.assert_frame_equal(result, expected) - - @pytest.mark.parametrize( - "timezone, year, month, day, hour", - [["America/Chicago", 2013, 11, 3, 1], ["America/Santiago", 2021, 4, 3, 23]], - ) - def test_reindex_timestamp_with_fold(self, timezone, year, month, day, hour): - # see gh-40817 - test_timezone = gettz(timezone) - transition_1 = pd.Timestamp( - year=year, - month=month, - day=day, - hour=hour, - minute=0, - fold=0, - tzinfo=test_timezone, - ) - transition_2 = pd.Timestamp( - year=year, - month=month, - day=day, - hour=hour, - minute=0, - fold=1, - tzinfo=test_timezone, - ) - df = ( - DataFrame({"index": [transition_1, transition_2], "vals": ["a", "b"]}) - .set_index("index") - .reindex(["1", "2"]) - ) - exp = DataFrame({"index": ["1", "2"], "vals": [np.nan, np.nan]}).set_index( - "index" - ) - exp = exp.astype(object) - tm.assert_frame_equal( - df, - exp, - ) - - -class TestDataFrameSelectReindex: - # These are specific reindex-based tests; other indexing tests should go in - # test_indexing - - @pytest.mark.xfail( - not IS64 or is_platform_windows(), - reason="Passes int32 values to DatetimeArray in make_na_array on " - "windows, 32bit linux builds", - ) - @td.skip_array_manager_not_yet_implemented - def test_reindex_tzaware_fill_value(self): - # GH#52586 - df = DataFrame([[1]]) - - ts = pd.Timestamp("2023-04-10 17:32", tz="US/Pacific") - res = df.reindex([0, 1], axis=1, fill_value=ts) - assert res.dtypes[1] == pd.DatetimeTZDtype(unit="s", tz="US/Pacific") - expected = DataFrame({0: [1], 1: [ts]}) - expected[1] = expected[1].astype(res.dtypes[1]) - tm.assert_frame_equal(res, expected) - - per = ts.tz_localize(None).to_period("s") - res = df.reindex([0, 1], axis=1, fill_value=per) - assert res.dtypes[1] == pd.PeriodDtype("s") - expected = DataFrame({0: [1], 1: [per]}) - tm.assert_frame_equal(res, expected) - - interval = pd.Interval(ts, ts + pd.Timedelta(seconds=1)) - res = df.reindex([0, 1], axis=1, fill_value=interval) - assert res.dtypes[1] == pd.IntervalDtype("datetime64[s, US/Pacific]", "right") - expected = DataFrame({0: [1], 1: [interval]}) - expected[1] = expected[1].astype(res.dtypes[1]) - tm.assert_frame_equal(res, expected) - - def test_reindex_copies(self): - # based on asv time_reindex_axis1 - N = 10 - df = DataFrame(np.random.default_rng(2).standard_normal((N * 10, N))) - cols = np.arange(N) - np.random.default_rng(2).shuffle(cols) - - result = df.reindex(columns=cols, copy=True) - assert not np.shares_memory(result[0]._values, df[0]._values) - - # pass both columns and index - result2 = df.reindex(columns=cols, index=df.index, copy=True) - assert not np.shares_memory(result2[0]._values, df[0]._values) - - def test_reindex_copies_ea(self, using_copy_on_write): - # https://github.com/pandas-dev/pandas/pull/51197 - # also ensure to honor copy keyword for ExtensionDtypes - N = 10 - df = DataFrame( - np.random.default_rng(2).standard_normal((N * 10, N)), dtype="Float64" - ) - 
cols = np.arange(N) - np.random.default_rng(2).shuffle(cols) - - result = df.reindex(columns=cols, copy=True) - if using_copy_on_write: - assert np.shares_memory(result[0].array._data, df[0].array._data) - else: - assert not np.shares_memory(result[0].array._data, df[0].array._data) - - # pass both columns and index - result2 = df.reindex(columns=cols, index=df.index, copy=True) - if using_copy_on_write: - assert np.shares_memory(result2[0].array._data, df[0].array._data) - else: - assert not np.shares_memory(result2[0].array._data, df[0].array._data) - - @td.skip_array_manager_not_yet_implemented - def test_reindex_date_fill_value(self): - # passing date to dt64 is deprecated; enforced in 2.0 to cast to object - arr = date_range("2016-01-01", periods=6).values.reshape(3, 2) - df = DataFrame(arr, columns=["A", "B"], index=range(3)) - - ts = df.iloc[0, 0] - fv = ts.date() - - res = df.reindex(index=range(4), columns=["A", "B", "C"], fill_value=fv) - - expected = DataFrame( - {"A": df["A"].tolist() + [fv], "B": df["B"].tolist() + [fv], "C": [fv] * 4}, - dtype=object, - ) - tm.assert_frame_equal(res, expected) - - # only reindexing rows - res = df.reindex(index=range(4), fill_value=fv) - tm.assert_frame_equal(res, expected[["A", "B"]]) - - # same with a datetime-castable str - res = df.reindex( - index=range(4), columns=["A", "B", "C"], fill_value="2016-01-01" - ) - expected = DataFrame( - {"A": df["A"].tolist() + [ts], "B": df["B"].tolist() + [ts], "C": [ts] * 4}, - ) - tm.assert_frame_equal(res, expected) - - def test_reindex_with_multi_index(self): - # https://github.com/pandas-dev/pandas/issues/29896 - # tests for reindexing a multi-indexed DataFrame with a new MultiIndex - # - # confirms that we can reindex a multi-indexed DataFrame with a new - # MultiIndex object correctly when using no filling, backfilling, and - # padding - # - # The DataFrame, `df`, used in this test is: - # c - # a b - # -1 0 A - # 1 B - # 2 C - # 3 D - # 4 E - # 5 F - # 6 G - # 0 0 A - # 1 B - # 2 C - # 3 D - # 4 E - # 5 F - # 6 G - # 1 0 A - # 1 B - # 2 C - # 3 D - # 4 E - # 5 F - # 6 G - # - # and the other MultiIndex, `new_multi_index`, is: - # 0: 0 0.5 - # 1: 2.0 - # 2: 5.0 - # 3: 5.8 - df = DataFrame( - { - "a": [-1] * 7 + [0] * 7 + [1] * 7, - "b": list(range(7)) * 3, - "c": ["A", "B", "C", "D", "E", "F", "G"] * 3, - } - ).set_index(["a", "b"]) - new_index = [0.5, 2.0, 5.0, 5.8] - new_multi_index = MultiIndex.from_product([[0], new_index], names=["a", "b"]) - - # reindexing w/o a `method` value - reindexed = df.reindex(new_multi_index) - expected = DataFrame( - {"a": [0] * 4, "b": new_index, "c": [np.nan, "C", "F", np.nan]} - ).set_index(["a", "b"]) - tm.assert_frame_equal(expected, reindexed) - - # reindexing with backfilling - expected = DataFrame( - {"a": [0] * 4, "b": new_index, "c": ["B", "C", "F", "G"]} - ).set_index(["a", "b"]) - reindexed_with_backfilling = df.reindex(new_multi_index, method="bfill") - tm.assert_frame_equal(expected, reindexed_with_backfilling) - - reindexed_with_backfilling = df.reindex(new_multi_index, method="backfill") - tm.assert_frame_equal(expected, reindexed_with_backfilling) - - # reindexing with padding - expected = DataFrame( - {"a": [0] * 4, "b": new_index, "c": ["A", "C", "F", "F"]} - ).set_index(["a", "b"]) - reindexed_with_padding = df.reindex(new_multi_index, method="pad") - tm.assert_frame_equal(expected, reindexed_with_padding) - - reindexed_with_padding = df.reindex(new_multi_index, method="ffill") - tm.assert_frame_equal(expected, reindexed_with_padding) - - 
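-    # Editor's note: the parametrized cases below exercise fill methods
-    # against targets absent from the index, e.g. (sketch):
-    #
-    #     df = DataFrame({"x": range(5)})
-    #     df.reindex([-0.1, 0.9, 1.1, 1.5], method="nearest")
-    #     # each target takes the value of the nearest existing label
-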
@pytest.mark.parametrize( - "method,expected_values", - [ - ("nearest", [0, 1, 1, 2]), - ("pad", [np.nan, 0, 1, 1]), - ("backfill", [0, 1, 2, 2]), - ], - ) - def test_reindex_methods(self, method, expected_values): - df = DataFrame({"x": list(range(5))}) - target = np.array([-0.1, 0.9, 1.1, 1.5]) - - expected = DataFrame({"x": expected_values}, index=target) - actual = df.reindex(target, method=method) - tm.assert_frame_equal(expected, actual) - - actual = df.reindex(target, method=method, tolerance=1) - tm.assert_frame_equal(expected, actual) - actual = df.reindex(target, method=method, tolerance=[1, 1, 1, 1]) - tm.assert_frame_equal(expected, actual) - - e2 = expected[::-1] - actual = df.reindex(target[::-1], method=method) - tm.assert_frame_equal(e2, actual) - - new_order = [3, 0, 2, 1] - e2 = expected.iloc[new_order] - actual = df.reindex(target[new_order], method=method) - tm.assert_frame_equal(e2, actual) - - switched_method = ( - "pad" if method == "backfill" else "backfill" if method == "pad" else method - ) - actual = df[::-1].reindex(target, method=switched_method) - tm.assert_frame_equal(expected, actual) - - def test_reindex_methods_nearest_special(self): - df = DataFrame({"x": list(range(5))}) - target = np.array([-0.1, 0.9, 1.1, 1.5]) - - expected = DataFrame({"x": [0, 1, 1, np.nan]}, index=target) - actual = df.reindex(target, method="nearest", tolerance=0.2) - tm.assert_frame_equal(expected, actual) - - expected = DataFrame({"x": [0, np.nan, 1, np.nan]}, index=target) - actual = df.reindex(target, method="nearest", tolerance=[0.5, 0.01, 0.4, 0.1]) - tm.assert_frame_equal(expected, actual) - - def test_reindex_nearest_tz(self, tz_aware_fixture): - # GH26683 - tz = tz_aware_fixture - idx = date_range("2019-01-01", periods=5, tz=tz) - df = DataFrame({"x": list(range(5))}, index=idx) - - expected = df.head(3) - actual = df.reindex(idx[:3], method="nearest") - tm.assert_frame_equal(expected, actual) - - def test_reindex_nearest_tz_empty_frame(self): - # https://github.com/pandas-dev/pandas/issues/31964 - dti = pd.DatetimeIndex(["2016-06-26 14:27:26+00:00"]) - df = DataFrame(index=pd.DatetimeIndex(["2016-07-04 14:00:59+00:00"])) - expected = DataFrame(index=dti) - result = df.reindex(dti, method="nearest") - tm.assert_frame_equal(result, expected) - - def test_reindex_frame_add_nat(self): - rng = date_range("1/1/2000 00:00:00", periods=10, freq="10s") - df = DataFrame( - {"A": np.random.default_rng(2).standard_normal(len(rng)), "B": rng} - ) - - result = df.reindex(range(15)) - assert np.issubdtype(result["B"].dtype, np.dtype("M8[ns]")) - - mask = isna(result)["B"] - assert mask[-5:].all() - assert not mask[:-5].any() - - @pytest.mark.parametrize( - "method, exp_values", - [("ffill", [0, 1, 2, 3]), ("bfill", [1.0, 2.0, 3.0, np.nan])], - ) - def test_reindex_frame_tz_ffill_bfill(self, frame_or_series, method, exp_values): - # GH#38566 - obj = frame_or_series( - [0, 1, 2, 3], - index=date_range("2020-01-01 00:00:00", periods=4, freq="H", tz="UTC"), - ) - new_index = date_range("2020-01-01 00:01:00", periods=4, freq="H", tz="UTC") - result = obj.reindex(new_index, method=method, tolerance=pd.Timedelta("1 hour")) - expected = frame_or_series(exp_values, index=new_index) - tm.assert_equal(result, expected) - - def test_reindex_limit(self): - # GH 28631 - data = [["A", "A", "A"], ["B", "B", "B"], ["C", "C", "C"], ["D", "D", "D"]] - exp_data = [ - ["A", "A", "A"], - ["B", "B", "B"], - ["C", "C", "C"], - ["D", "D", "D"], - ["D", "D", "D"], - [np.nan, np.nan, np.nan], - ] - df = 
DataFrame(data) - result = df.reindex([0, 1, 2, 3, 4, 5], method="ffill", limit=1) - expected = DataFrame(exp_data) - tm.assert_frame_equal(result, expected) - - @pytest.mark.parametrize( - "idx, check_index_type", - [ - [["C", "B", "A"], True], - [["F", "C", "A", "D"], True], - [["A"], True], - [["A", "B", "C"], True], - [["C", "A", "B"], True], - [["C", "B"], True], - [["C", "A"], True], - [["A", "B"], True], - [["B", "A", "C"], True], - # reindex by these causes different MultiIndex levels - [["D", "F"], False], - [["A", "C", "B"], False], - ], - ) - def test_reindex_level_verify_first_level(self, idx, check_index_type): - df = DataFrame( - { - "jim": list("B" * 4 + "A" * 2 + "C" * 3), - "joe": list("abcdeabcd")[::-1], - "jolie": [10, 20, 30] * 3, - "joline": np.random.default_rng(2).integers(0, 1000, 9), - } - ) - icol = ["jim", "joe", "jolie"] - - def f(val): - return np.nonzero((df["jim"] == val).to_numpy())[0] - - i = np.concatenate(list(map(f, idx))) - left = df.set_index(icol).reindex(idx, level="jim") - right = df.iloc[i].set_index(icol) - tm.assert_frame_equal(left, right, check_index_type=check_index_type) - - @pytest.mark.parametrize( - "idx", - [ - ("mid",), - ("mid", "btm"), - ("mid", "btm", "top"), - ("mid",), - ("mid", "top"), - ("mid", "top", "btm"), - ("btm",), - ("btm", "mid"), - ("btm", "mid", "top"), - ("btm",), - ("btm", "top"), - ("btm", "top", "mid"), - ("top",), - ("top", "mid"), - ("top", "mid", "btm"), - ("top",), - ("top", "btm"), - ("top", "btm", "mid"), - ], - ) - def test_reindex_level_verify_first_level_repeats(self, idx): - df = DataFrame( - { - "jim": ["mid"] * 5 + ["btm"] * 8 + ["top"] * 7, - "joe": ["3rd"] * 2 - + ["1st"] * 3 - + ["2nd"] * 3 - + ["1st"] * 2 - + ["3rd"] * 3 - + ["1st"] * 2 - + ["3rd"] * 3 - + ["2nd"] * 2, - # this needs to be jointly unique with jim and joe or - # reindexing will fail ~1.5% of the time, this works - # out to needing unique groups of same size as joe - "jolie": np.concatenate( - [ - np.random.default_rng(2).choice(1000, x, replace=False) - for x in [2, 3, 3, 2, 3, 2, 3, 2] - ] - ), - "joline": np.random.default_rng(2).standard_normal(20).round(3) * 10, - } - ) - icol = ["jim", "joe", "jolie"] - - def f(val): - return np.nonzero((df["jim"] == val).to_numpy())[0] - - i = np.concatenate(list(map(f, idx))) - left = df.set_index(icol).reindex(idx, level="jim") - right = df.iloc[i].set_index(icol) - tm.assert_frame_equal(left, right) - - @pytest.mark.parametrize( - "idx, indexer", - [ - [ - ["1st", "2nd", "3rd"], - [2, 3, 4, 0, 1, 8, 9, 5, 6, 7, 10, 11, 12, 13, 14, 18, 19, 15, 16, 17], - ], - [ - ["3rd", "2nd", "1st"], - [0, 1, 2, 3, 4, 10, 11, 12, 5, 6, 7, 8, 9, 15, 16, 17, 18, 19, 13, 14], - ], - [["2nd", "3rd"], [0, 1, 5, 6, 7, 10, 11, 12, 18, 19, 15, 16, 17]], - [["3rd", "1st"], [0, 1, 2, 3, 4, 10, 11, 12, 8, 9, 15, 16, 17, 13, 14]], - ], - ) - def test_reindex_level_verify_repeats(self, idx, indexer): - df = DataFrame( - { - "jim": ["mid"] * 5 + ["btm"] * 8 + ["top"] * 7, - "joe": ["3rd"] * 2 - + ["1st"] * 3 - + ["2nd"] * 3 - + ["1st"] * 2 - + ["3rd"] * 3 - + ["1st"] * 2 - + ["3rd"] * 3 - + ["2nd"] * 2, - # this needs to be jointly unique with jim and joe or - # reindexing will fail ~1.5% of the time, this works - # out to needing unique groups of same size as joe - "jolie": np.concatenate( - [ - np.random.default_rng(2).choice(1000, x, replace=False) - for x in [2, 3, 3, 2, 3, 2, 3, 2] - ] - ), - "joline": np.random.default_rng(2).standard_normal(20).round(3) * 10, - } - ) - icol = ["jim", "joe", "jolie"] - left = 
df.set_index(icol).reindex(idx, level="joe") - right = df.iloc[indexer].set_index(icol) - tm.assert_frame_equal(left, right) - - @pytest.mark.parametrize( - "idx, indexer, check_index_type", - [ - [list("abcde"), [3, 2, 1, 0, 5, 4, 8, 7, 6], True], - [list("abcd"), [3, 2, 1, 0, 5, 8, 7, 6], True], - [list("abc"), [3, 2, 1, 8, 7, 6], True], - [list("eca"), [1, 3, 4, 6, 8], True], - [list("edc"), [0, 1, 4, 5, 6], True], - [list("eadbc"), [3, 0, 2, 1, 4, 5, 8, 7, 6], True], - [list("edwq"), [0, 4, 5], True], - [list("wq"), [], False], - ], - ) - def test_reindex_level_verify(self, idx, indexer, check_index_type): - df = DataFrame( - { - "jim": list("B" * 4 + "A" * 2 + "C" * 3), - "joe": list("abcdeabcd")[::-1], - "jolie": [10, 20, 30] * 3, - "joline": np.random.default_rng(2).integers(0, 1000, 9), - } - ) - icol = ["jim", "joe", "jolie"] - left = df.set_index(icol).reindex(idx, level="joe") - right = df.iloc[indexer].set_index(icol) - tm.assert_frame_equal(left, right, check_index_type=check_index_type) - - def test_non_monotonic_reindex_methods(self): - dr = date_range("2013-08-01", periods=6, freq="B") - data = np.random.default_rng(2).standard_normal((6, 1)) - df = DataFrame(data, index=dr, columns=list("A")) - df_rev = DataFrame(data, index=dr[[3, 4, 5] + [0, 1, 2]], columns=list("A")) - # index is not monotonic increasing or decreasing - msg = "index must be monotonic increasing or decreasing" - with pytest.raises(ValueError, match=msg): - df_rev.reindex(df.index, method="pad") - with pytest.raises(ValueError, match=msg): - df_rev.reindex(df.index, method="ffill") - with pytest.raises(ValueError, match=msg): - df_rev.reindex(df.index, method="bfill") - with pytest.raises(ValueError, match=msg): - df_rev.reindex(df.index, method="nearest") - - def test_reindex_sparse(self): - # https://github.com/pandas-dev/pandas/issues/35286 - df = DataFrame( - {"A": [0, 1], "B": pd.array([0, 1], dtype=pd.SparseDtype("int64", 0))} - ) - result = df.reindex([0, 2]) - expected = DataFrame( - { - "A": [0.0, np.nan], - "B": pd.array([0.0, np.nan], dtype=pd.SparseDtype("float64", 0.0)), - }, - index=[0, 2], - ) - tm.assert_frame_equal(result, expected) - - def test_reindex(self, float_frame, using_copy_on_write): - datetime_series = tm.makeTimeSeries(nper=30) - - newFrame = float_frame.reindex(datetime_series.index) - - for col in newFrame.columns: - for idx, val in newFrame[col].items(): - if idx in float_frame.index: - if np.isnan(val): - assert np.isnan(float_frame[col][idx]) - else: - assert val == float_frame[col][idx] - else: - assert np.isnan(val) - - for col, series in newFrame.items(): - assert tm.equalContents(series.index, newFrame.index) - emptyFrame = float_frame.reindex(Index([])) - assert len(emptyFrame.index) == 0 - - # Cython code should be unit-tested directly - nonContigFrame = float_frame.reindex(datetime_series.index[::2]) - - for col in nonContigFrame.columns: - for idx, val in nonContigFrame[col].items(): - if idx in float_frame.index: - if np.isnan(val): - assert np.isnan(float_frame[col][idx]) - else: - assert val == float_frame[col][idx] - else: - assert np.isnan(val) - - for col, series in nonContigFrame.items(): - assert tm.equalContents(series.index, nonContigFrame.index) - - # corner cases - - # Same index, copies values but not index if copy=False - newFrame = float_frame.reindex(float_frame.index, copy=False) - if using_copy_on_write: - assert newFrame.index.is_(float_frame.index) - else: - assert newFrame.index is float_frame.index - - # length zero - newFrame = 
float_frame.reindex([]) - assert newFrame.empty - assert len(newFrame.columns) == len(float_frame.columns) - - # length zero with columns reindexed with non-empty index - newFrame = float_frame.reindex([]) - newFrame = newFrame.reindex(float_frame.index) - assert len(newFrame.index) == len(float_frame.index) - assert len(newFrame.columns) == len(float_frame.columns) - - # pass non-Index - newFrame = float_frame.reindex(list(datetime_series.index)) - expected = datetime_series.index._with_freq(None) - tm.assert_index_equal(newFrame.index, expected) - - # copy with no axes - result = float_frame.reindex() - tm.assert_frame_equal(result, float_frame) - assert result is not float_frame - - def test_reindex_nan(self): - df = DataFrame( - [[1, 2], [3, 5], [7, 11], [9, 23]], - index=[2, np.nan, 1, 5], - columns=["joe", "jim"], - ) - - i, j = [np.nan, 5, 5, np.nan, 1, 2, np.nan], [1, 3, 3, 1, 2, 0, 1] - tm.assert_frame_equal(df.reindex(i), df.iloc[j]) - - df.index = df.index.astype("object") - tm.assert_frame_equal(df.reindex(i), df.iloc[j], check_index_type=False) - - # GH10388 - df = DataFrame( - { - "other": ["a", "b", np.nan, "c"], - "date": ["2015-03-22", np.nan, "2012-01-08", np.nan], - "amount": [2, 3, 4, 5], - } - ) - - df["date"] = pd.to_datetime(df.date) - df["delta"] = (pd.to_datetime("2015-06-18") - df["date"]).shift(1) - - left = df.set_index(["delta", "other", "date"]).reset_index() - right = df.reindex(columns=["delta", "other", "date", "amount"]) - tm.assert_frame_equal(left, right) - - def test_reindex_name_remains(self): - s = Series(np.random.default_rng(2).random(10)) - df = DataFrame(s, index=np.arange(len(s))) - i = Series(np.arange(10), name="iname") - - df = df.reindex(i) - assert df.index.name == "iname" - - df = df.reindex(Index(np.arange(10), name="tmpname")) - assert df.index.name == "tmpname" - - s = Series(np.random.default_rng(2).random(10)) - df = DataFrame(s.T, index=np.arange(len(s))) - i = Series(np.arange(10), name="iname") - df = df.reindex(columns=i) - assert df.columns.name == "iname" - - def test_reindex_int(self, int_frame): - smaller = int_frame.reindex(int_frame.index[::2]) - - assert smaller["A"].dtype == np.int64 - - bigger = smaller.reindex(int_frame.index) - assert bigger["A"].dtype == np.float64 - - smaller = int_frame.reindex(columns=["A", "B"]) - assert smaller["A"].dtype == np.int64 - - def test_reindex_columns(self, float_frame): - new_frame = float_frame.reindex(columns=["A", "B", "E"]) - - tm.assert_series_equal(new_frame["B"], float_frame["B"]) - assert np.isnan(new_frame["E"]).all() - assert "C" not in new_frame - - # Length zero - new_frame = float_frame.reindex(columns=[]) - assert new_frame.empty - - def test_reindex_columns_method(self): - # GH 14992, reindexing over columns ignored method - df = DataFrame( - data=[[11, 12, 13], [21, 22, 23], [31, 32, 33]], - index=[1, 2, 4], - columns=[1, 2, 4], - dtype=float, - ) - - # default method - result = df.reindex(columns=range(6)) - expected = DataFrame( - data=[ - [np.nan, 11, 12, np.nan, 13, np.nan], - [np.nan, 21, 22, np.nan, 23, np.nan], - [np.nan, 31, 32, np.nan, 33, np.nan], - ], - index=[1, 2, 4], - columns=range(6), - dtype=float, - ) - tm.assert_frame_equal(result, expected) - - # method='ffill' - result = df.reindex(columns=range(6), method="ffill") - expected = DataFrame( - data=[ - [np.nan, 11, 12, 12, 13, 13], - [np.nan, 21, 22, 22, 23, 23], - [np.nan, 31, 32, 32, 33, 33], - ], - index=[1, 2, 4], - columns=range(6), - dtype=float, - ) - tm.assert_frame_equal(result, expected) - - # 
method='bfill' - result = df.reindex(columns=range(6), method="bfill") - expected = DataFrame( - data=[ - [11, 11, 12, 13, 13, np.nan], - [21, 21, 22, 23, 23, np.nan], - [31, 31, 32, 33, 33, np.nan], - ], - index=[1, 2, 4], - columns=range(6), - dtype=float, - ) - tm.assert_frame_equal(result, expected) - - def test_reindex_axes(self): - # GH 3317, reindexing by both axes loses freq of the index - df = DataFrame( - np.ones((3, 3)), - index=[datetime(2012, 1, 1), datetime(2012, 1, 2), datetime(2012, 1, 3)], - columns=["a", "b", "c"], - ) - time_freq = date_range("2012-01-01", "2012-01-03", freq="d") - some_cols = ["a", "b"] - - index_freq = df.reindex(index=time_freq).index.freq - both_freq = df.reindex(index=time_freq, columns=some_cols).index.freq - seq_freq = df.reindex(index=time_freq).reindex(columns=some_cols).index.freq - assert index_freq == both_freq - assert index_freq == seq_freq - - def test_reindex_fill_value(self): - df = DataFrame(np.random.default_rng(2).standard_normal((10, 4))) - - # axis=0 - result = df.reindex(list(range(15))) - assert np.isnan(result.values[-5:]).all() - - result = df.reindex(range(15), fill_value=0) - expected = df.reindex(range(15)).fillna(0) - tm.assert_frame_equal(result, expected) - - # axis=1 - result = df.reindex(columns=range(5), fill_value=0.0) - expected = df.copy() - expected[4] = 0.0 - tm.assert_frame_equal(result, expected) - - result = df.reindex(columns=range(5), fill_value=0) - expected = df.copy() - expected[4] = 0 - tm.assert_frame_equal(result, expected) - - result = df.reindex(columns=range(5), fill_value="foo") - expected = df.copy() - expected[4] = "foo" - tm.assert_frame_equal(result, expected) - - # other dtypes - df["foo"] = "foo" - result = df.reindex(range(15), fill_value=0) - expected = df.reindex(range(15)).fillna(0) - tm.assert_frame_equal(result, expected) - - def test_reindex_uint_dtypes_fill_value(self, any_unsigned_int_numpy_dtype): - # GH#48184 - df = DataFrame({"a": [1, 2], "b": [1, 2]}, dtype=any_unsigned_int_numpy_dtype) - result = df.reindex(columns=list("abcd"), index=[0, 1, 2, 3], fill_value=10) - expected = DataFrame( - {"a": [1, 2, 10, 10], "b": [1, 2, 10, 10], "c": 10, "d": 10}, - dtype=any_unsigned_int_numpy_dtype, - ) - tm.assert_frame_equal(result, expected) - - def test_reindex_single_column_ea_index_and_columns(self, any_numeric_ea_dtype): - # GH#48190 - df = DataFrame({"a": [1, 2]}, dtype=any_numeric_ea_dtype) - result = df.reindex(columns=list("ab"), index=[0, 1, 2], fill_value=10) - expected = DataFrame( - {"a": Series([1, 2, 10], dtype=any_numeric_ea_dtype), "b": 10} - ) - tm.assert_frame_equal(result, expected) - - def test_reindex_dups(self): - # GH4746, reindex on duplicate index error messages - arr = np.random.default_rng(2).standard_normal(10) - df = DataFrame(arr, index=[1, 2, 3, 4, 5, 1, 2, 3, 4, 5]) - - # set index is ok - result = df.copy() - result.index = list(range(len(df))) - expected = DataFrame(arr, index=list(range(len(df)))) - tm.assert_frame_equal(result, expected) - - # reindex fails - msg = "cannot reindex on an axis with duplicate labels" - with pytest.raises(ValueError, match=msg): - df.reindex(index=list(range(len(df)))) - - def test_reindex_with_duplicate_columns(self): - # reindex is invalid! 
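-        # Editor's note: with duplicate labels the selection is ambiguous, so
-        # reindex raises ValueError instead of guessing which column is meant.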
- df = DataFrame( - [[1, 5, 7.0], [1, 5, 7.0], [1, 5, 7.0]], columns=["bar", "a", "a"] - ) - msg = "cannot reindex on an axis with duplicate labels" - with pytest.raises(ValueError, match=msg): - df.reindex(columns=["bar"]) - with pytest.raises(ValueError, match=msg): - df.reindex(columns=["bar", "foo"]) - - def test_reindex_axis_style(self): - # https://github.com/pandas-dev/pandas/issues/12392 - df = DataFrame({"A": [1, 2, 3], "B": [4, 5, 6]}) - expected = DataFrame( - {"A": [1, 2, np.nan], "B": [4, 5, np.nan]}, index=[0, 1, 3] - ) - result = df.reindex([0, 1, 3]) - tm.assert_frame_equal(result, expected) - - result = df.reindex([0, 1, 3], axis=0) - tm.assert_frame_equal(result, expected) - - result = df.reindex([0, 1, 3], axis="index") - tm.assert_frame_equal(result, expected) - - def test_reindex_positional_raises(self): - # https://github.com/pandas-dev/pandas/issues/12392 - # Enforced in 2.0 - df = DataFrame({"A": [1, 2, 3], "B": [4, 5, 6]}) - msg = r"reindex\(\) takes from 1 to 2 positional arguments but 3 were given" - with pytest.raises(TypeError, match=msg): - df.reindex([0, 1], ["A", "B", "C"]) - - def test_reindex_axis_style_raises(self): - # https://github.com/pandas-dev/pandas/issues/12392 - df = DataFrame({"A": [1, 2, 3], "B": [4, 5, 6]}) - with pytest.raises(TypeError, match="Cannot specify both 'axis'"): - df.reindex([0, 1], columns=["A"], axis=1) - - with pytest.raises(TypeError, match="Cannot specify both 'axis'"): - df.reindex([0, 1], columns=["A"], axis="index") - - with pytest.raises(TypeError, match="Cannot specify both 'axis'"): - df.reindex(index=[0, 1], axis="index") - - with pytest.raises(TypeError, match="Cannot specify both 'axis'"): - df.reindex(index=[0, 1], axis="columns") - - with pytest.raises(TypeError, match="Cannot specify both 'axis'"): - df.reindex(columns=[0, 1], axis="columns") - - with pytest.raises(TypeError, match="Cannot specify both 'axis'"): - df.reindex(index=[0, 1], columns=[0, 1], axis="columns") - - with pytest.raises(TypeError, match="Cannot specify all"): - df.reindex(labels=[0, 1], index=[0], columns=["A"]) - - # Mixing styles - with pytest.raises(TypeError, match="Cannot specify both 'axis'"): - df.reindex(index=[0, 1], axis="index") - - with pytest.raises(TypeError, match="Cannot specify both 'axis'"): - df.reindex(index=[0, 1], axis="columns") - - # Duplicates - with pytest.raises(TypeError, match="multiple values"): - df.reindex([0, 1], labels=[0, 1]) - - def test_reindex_single_named_indexer(self): - # https://github.com/pandas-dev/pandas/issues/12392 - df = DataFrame({"A": [1, 2, 3], "B": [1, 2, 3]}) - result = df.reindex([0, 1], columns=["A"]) - expected = DataFrame({"A": [1, 2]}) - tm.assert_frame_equal(result, expected) - - def test_reindex_api_equivalence(self): - # https://github.com/pandas-dev/pandas/issues/12392 - # equivalence of the labels/axis and index/columns API's - df = DataFrame( - [[1, 2, 3], [3, 4, 5], [5, 6, 7]], - index=["a", "b", "c"], - columns=["d", "e", "f"], - ) - - res1 = df.reindex(["b", "a"]) - res2 = df.reindex(index=["b", "a"]) - res3 = df.reindex(labels=["b", "a"]) - res4 = df.reindex(labels=["b", "a"], axis=0) - res5 = df.reindex(["b", "a"], axis=0) - for res in [res2, res3, res4, res5]: - tm.assert_frame_equal(res1, res) - - res1 = df.reindex(columns=["e", "d"]) - res2 = df.reindex(["e", "d"], axis=1) - res3 = df.reindex(labels=["e", "d"], axis=1) - for res in [res2, res3]: - tm.assert_frame_equal(res1, res) - - res1 = df.reindex(index=["b", "a"], columns=["e", "d"]) - res2 = 
df.reindex(columns=["e", "d"], index=["b", "a"]) - res3 = df.reindex(labels=["b", "a"], axis=0).reindex(labels=["e", "d"], axis=1) - for res in [res2, res3]: - tm.assert_frame_equal(res1, res) - - def test_reindex_boolean(self): - frame = DataFrame( - np.ones((10, 2), dtype=bool), index=np.arange(0, 20, 2), columns=[0, 2] - ) - - reindexed = frame.reindex(np.arange(10)) - assert reindexed.values.dtype == np.object_ - assert isna(reindexed[0][1]) - - reindexed = frame.reindex(columns=range(3)) - assert reindexed.values.dtype == np.object_ - assert isna(reindexed[1]).all() - - def test_reindex_objects(self, float_string_frame): - reindexed = float_string_frame.reindex(columns=["foo", "A", "B"]) - assert "foo" in reindexed - - reindexed = float_string_frame.reindex(columns=["A", "B"]) - assert "foo" not in reindexed - - def test_reindex_corner(self, int_frame): - index = Index(["a", "b", "c"]) - dm = DataFrame({}).reindex(index=[1, 2, 3]) - reindexed = dm.reindex(columns=index) - tm.assert_index_equal(reindexed.columns, index) - - # ints are weird - smaller = int_frame.reindex(columns=["A", "B", "E"]) - assert smaller["E"].dtype == np.float64 - - def test_reindex_with_nans(self): - df = DataFrame( - [[1, 2], [3, 4], [np.nan, np.nan], [7, 8], [9, 10]], - columns=["a", "b"], - index=[100.0, 101.0, np.nan, 102.0, 103.0], - ) - - result = df.reindex(index=[101.0, 102.0, 103.0]) - expected = df.iloc[[1, 3, 4]] - tm.assert_frame_equal(result, expected) - - result = df.reindex(index=[103.0]) - expected = df.iloc[[4]] - tm.assert_frame_equal(result, expected) - - result = df.reindex(index=[101.0]) - expected = df.iloc[[1]] - tm.assert_frame_equal(result, expected) - - def test_reindex_multi(self): - df = DataFrame(np.random.default_rng(2).standard_normal((3, 3))) - - result = df.reindex(index=range(4), columns=range(4)) - expected = df.reindex(list(range(4))).reindex(columns=range(4)) - - tm.assert_frame_equal(result, expected) - - df = DataFrame(np.random.default_rng(2).integers(0, 10, (3, 3))) - - result = df.reindex(index=range(4), columns=range(4)) - expected = df.reindex(list(range(4))).reindex(columns=range(4)) - - tm.assert_frame_equal(result, expected) - - df = DataFrame(np.random.default_rng(2).integers(0, 10, (3, 3))) - - result = df.reindex(index=range(2), columns=range(2)) - expected = df.reindex(range(2)).reindex(columns=range(2)) - - tm.assert_frame_equal(result, expected) - - df = DataFrame( - np.random.default_rng(2).standard_normal((5, 3)) + 1j, - columns=["a", "b", "c"], - ) - - result = df.reindex(index=[0, 1], columns=["a", "b"]) - expected = df.reindex([0, 1]).reindex(columns=["a", "b"]) - - tm.assert_frame_equal(result, expected) - - def test_reindex_multi_categorical_time(self): - # https://github.com/pandas-dev/pandas/issues/21390 - midx = MultiIndex.from_product( - [ - Categorical(["a", "b", "c"]), - Categorical(date_range("2012-01-01", periods=3, freq="H")), - ] - ) - df = DataFrame({"a": range(len(midx))}, index=midx) - df2 = df.iloc[[0, 1, 2, 3, 4, 5, 6, 8]] - - result = df2.reindex(midx) - expected = DataFrame({"a": [0, 1, 2, 3, 4, 5, 6, np.nan, 8]}, index=midx) - tm.assert_frame_equal(result, expected) - - def test_reindex_with_categoricalindex(self): - df = DataFrame( - { - "A": np.arange(3, dtype="int64"), - }, - index=CategoricalIndex(list("abc"), dtype=CDT(list("cabe")), name="B"), - ) - - # reindexing - # convert to a regular index - result = df.reindex(["a", "b", "e"]) - expected = DataFrame({"A": [0, 1, np.nan], "B": Series(list("abe"))}).set_index( - "B" - ) - 
tm.assert_frame_equal(result, expected, check_index_type=True) - - result = df.reindex(["a", "b"]) - expected = DataFrame({"A": [0, 1], "B": Series(list("ab"))}).set_index("B") - tm.assert_frame_equal(result, expected, check_index_type=True) - - result = df.reindex(["e"]) - expected = DataFrame({"A": [np.nan], "B": Series(["e"])}).set_index("B") - tm.assert_frame_equal(result, expected, check_index_type=True) - - result = df.reindex(["d"]) - expected = DataFrame({"A": [np.nan], "B": Series(["d"])}).set_index("B") - tm.assert_frame_equal(result, expected, check_index_type=True) - - # since we are actually reindexing with a Categorical - # then return a Categorical - cats = list("cabe") - - result = df.reindex(Categorical(["a", "e"], categories=cats)) - expected = DataFrame( - {"A": [0, np.nan], "B": Series(list("ae")).astype(CDT(cats))} - ).set_index("B") - tm.assert_frame_equal(result, expected, check_index_type=True) - - result = df.reindex(Categorical(["a"], categories=cats)) - expected = DataFrame( - {"A": [0], "B": Series(list("a")).astype(CDT(cats))} - ).set_index("B") - tm.assert_frame_equal(result, expected, check_index_type=True) - - result = df.reindex(["a", "b", "e"]) - expected = DataFrame({"A": [0, 1, np.nan], "B": Series(list("abe"))}).set_index( - "B" - ) - tm.assert_frame_equal(result, expected, check_index_type=True) - - result = df.reindex(["a", "b"]) - expected = DataFrame({"A": [0, 1], "B": Series(list("ab"))}).set_index("B") - tm.assert_frame_equal(result, expected, check_index_type=True) - - result = df.reindex(["e"]) - expected = DataFrame({"A": [np.nan], "B": Series(["e"])}).set_index("B") - tm.assert_frame_equal(result, expected, check_index_type=True) - - # give back the type of categorical that we received - result = df.reindex(Categorical(["a", "e"], categories=cats, ordered=True)) - expected = DataFrame( - {"A": [0, np.nan], "B": Series(list("ae")).astype(CDT(cats, ordered=True))} - ).set_index("B") - tm.assert_frame_equal(result, expected, check_index_type=True) - - result = df.reindex(Categorical(["a", "d"], categories=["a", "d"])) - expected = DataFrame( - {"A": [0, np.nan], "B": Series(list("ad")).astype(CDT(["a", "d"]))} - ).set_index("B") - tm.assert_frame_equal(result, expected, check_index_type=True) - - df2 = DataFrame( - { - "A": np.arange(6, dtype="int64"), - }, - index=CategoricalIndex(list("aabbca"), dtype=CDT(list("cabe")), name="B"), - ) - # passed duplicate indexers are not allowed - msg = "cannot reindex on an axis with duplicate labels" - with pytest.raises(ValueError, match=msg): - df2.reindex(["a", "b"]) - - # args NotImplemented ATM - msg = r"argument {} is not implemented for CategoricalIndex\.reindex" - with pytest.raises(NotImplementedError, match=msg.format("method")): - df.reindex(["a"], method="ffill") - with pytest.raises(NotImplementedError, match=msg.format("level")): - df.reindex(["a"], level=1) - with pytest.raises(NotImplementedError, match=msg.format("limit")): - df.reindex(["a"], limit=2) - - def test_reindex_signature(self): - sig = inspect.signature(DataFrame.reindex) - parameters = set(sig.parameters) - assert parameters == { - "self", - "labels", - "index", - "columns", - "axis", - "limit", - "copy", - "level", - "method", - "fill_value", - "tolerance", - } - - def test_reindex_multiindex_ffill_added_rows(self): - # GH#23693 - # reindex added rows with nan values even when fill method was specified - mi = MultiIndex.from_tuples([("a", "b"), ("d", "e")]) - df = DataFrame([[0, 7], [3, 4]], index=mi, columns=["x", "y"]) - mi2 
= MultiIndex.from_tuples([("a", "b"), ("d", "e"), ("h", "i")]) - result = df.reindex(mi2, axis=0, method="ffill") - expected = DataFrame([[0, 7], [3, 4], [3, 4]], index=mi2, columns=["x", "y"]) - tm.assert_frame_equal(result, expected) - - @pytest.mark.parametrize( - "kwargs", - [ - {"method": "pad", "tolerance": timedelta(seconds=9)}, - {"method": "backfill", "tolerance": timedelta(seconds=9)}, - {"method": "nearest"}, - {"method": None}, - ], - ) - def test_reindex_empty_frame(self, kwargs): - # GH#27315 - idx = date_range(start="2020", freq="30s", periods=3) - df = DataFrame([], index=Index([], name="time"), columns=["a"]) - result = df.reindex(idx, **kwargs) - expected = DataFrame({"a": [np.nan] * 3}, index=idx, dtype=object) - tm.assert_frame_equal(result, expected) - - @pytest.mark.parametrize( - "src_idx", - [ - Index([]), - CategoricalIndex([]), - ], - ) - @pytest.mark.parametrize( - "cat_idx", - [ - # No duplicates - Index([]), - CategoricalIndex([]), - Index(["A", "B"]), - CategoricalIndex(["A", "B"]), - # Duplicates: GH#38906 - Index(["A", "A"]), - CategoricalIndex(["A", "A"]), - ], - ) - def test_reindex_empty(self, src_idx, cat_idx): - df = DataFrame(columns=src_idx, index=["K"], dtype="f8") - - result = df.reindex(columns=cat_idx) - expected = DataFrame(index=["K"], columns=cat_idx, dtype="f8") - tm.assert_frame_equal(result, expected) - - @pytest.mark.parametrize("dtype", ["m8[ns]", "M8[ns]"]) - def test_reindex_datetimelike_to_object(self, dtype): - # GH#39755 dont cast dt64/td64 to ints - mi = MultiIndex.from_product([list("ABCDE"), range(2)]) - - dti = date_range("2016-01-01", periods=10) - fv = np.timedelta64("NaT", "ns") - if dtype == "m8[ns]": - dti = dti - dti[0] - fv = np.datetime64("NaT", "ns") - - ser = Series(dti, index=mi) - ser[::3] = pd.NaT - - df = ser.unstack() - - index = df.index.append(Index([1])) - columns = df.columns.append(Index(["foo"])) - - res = df.reindex(index=index, columns=columns, fill_value=fv) - - expected = DataFrame( - { - 0: df[0].tolist() + [fv], - 1: df[1].tolist() + [fv], - "foo": np.array(["NaT"] * 6, dtype=fv.dtype), - }, - index=index, - ) - assert (res.dtypes[[0, 1]] == object).all() - assert res.iloc[0, 0] is pd.NaT - assert res.iloc[-1, 0] is fv - assert res.iloc[-1, 1] is fv - tm.assert_frame_equal(res, expected) - - @pytest.mark.parametrize( - "index_df,index_res,index_exp", - [ - ( - CategoricalIndex([], categories=["A"]), - Index(["A"]), - Index(["A"]), - ), - ( - CategoricalIndex([], categories=["A"]), - Index(["B"]), - Index(["B"]), - ), - ( - CategoricalIndex([], categories=["A"]), - CategoricalIndex(["A"]), - CategoricalIndex(["A"]), - ), - ( - CategoricalIndex([], categories=["A"]), - CategoricalIndex(["B"]), - CategoricalIndex(["B"]), - ), - ], - ) - def test_reindex_not_category(self, index_df, index_res, index_exp): - # GH#28690 - df = DataFrame(index=index_df) - result = df.reindex(index=index_res) - expected = DataFrame(index=index_exp) - tm.assert_frame_equal(result, expected) - - def test_invalid_method(self): - df = DataFrame({"A": [1, np.nan, 2]}) - - msg = "Invalid fill method" - with pytest.raises(ValueError, match=msg): - df.reindex([1, 0, 2], method="asfreq") diff --git a/spaces/profayle/TerrapinTalk/myenv/lib/python3.9/site-packages/pandas/tests/scalar/timestamp/test_timestamp.py b/spaces/profayle/TerrapinTalk/myenv/lib/python3.9/site-packages/pandas/tests/scalar/timestamp/test_timestamp.py deleted file mode 100644 index ded32a77cbaf7f11735d92a7ba2c8db48a30e3a5..0000000000000000000000000000000000000000 --- 
a/spaces/profayle/TerrapinTalk/myenv/lib/python3.9/site-packages/pandas/tests/scalar/timestamp/test_timestamp.py +++ /dev/null @@ -1,1186 +0,0 @@ -""" test the scalar Timestamp """ - -import calendar -from datetime import ( - datetime, - timedelta, - timezone, -) -import locale -import time -import unicodedata - -from dateutil.tz import ( - tzlocal, - tzutc, -) -from hypothesis import ( - given, - strategies as st, -) -import numpy as np -import pytest -import pytz -from pytz import utc - -from pandas._libs.tslibs.dtypes import NpyDatetimeUnit -from pandas._libs.tslibs.timezones import ( - dateutil_gettz as gettz, - get_timezone, - maybe_get_tz, - tz_compare, -) -from pandas.compat import IS64 -from pandas.errors import OutOfBoundsDatetime -import pandas.util._test_decorators as td - -from pandas import ( - NaT, - Timedelta, - Timestamp, -) -import pandas._testing as tm - -from pandas.tseries import offsets -from pandas.tseries.frequencies import to_offset - - -class TestTimestampProperties: - def test_properties_business(self): - freq = to_offset("B") - - ts = Timestamp("2017-10-01") - assert ts.dayofweek == 6 - assert ts.day_of_week == 6 - assert ts.is_month_start # not a weekday - assert not freq.is_month_start(ts) - assert freq.is_month_start(ts + Timedelta(days=1)) - assert not freq.is_quarter_start(ts) - assert freq.is_quarter_start(ts + Timedelta(days=1)) - - ts = Timestamp("2017-09-30") - assert ts.dayofweek == 5 - assert ts.day_of_week == 5 - assert ts.is_month_end - assert not freq.is_month_end(ts) - assert freq.is_month_end(ts - Timedelta(days=1)) - assert ts.is_quarter_end - assert not freq.is_quarter_end(ts) - assert freq.is_quarter_end(ts - Timedelta(days=1)) - - @pytest.mark.parametrize( - "attr, expected", - [ - ["year", 2014], - ["month", 12], - ["day", 31], - ["hour", 23], - ["minute", 59], - ["second", 0], - ["microsecond", 0], - ["nanosecond", 0], - ["dayofweek", 2], - ["day_of_week", 2], - ["quarter", 4], - ["dayofyear", 365], - ["day_of_year", 365], - ["week", 1], - ["daysinmonth", 31], - ], - ) - @pytest.mark.parametrize("tz", [None, "US/Eastern"]) - def test_fields(self, attr, expected, tz): - # GH 10050 - # GH 13303 - ts = Timestamp("2014-12-31 23:59:00", tz=tz) - result = getattr(ts, attr) - # that we are int like - assert isinstance(result, int) - assert result == expected - - @pytest.mark.parametrize("tz", [None, "US/Eastern"]) - def test_millisecond_raises(self, tz): - ts = Timestamp("2014-12-31 23:59:00", tz=tz) - msg = "'Timestamp' object has no attribute 'millisecond'" - with pytest.raises(AttributeError, match=msg): - ts.millisecond - - @pytest.mark.parametrize( - "start", ["is_month_start", "is_quarter_start", "is_year_start"] - ) - @pytest.mark.parametrize("tz", [None, "US/Eastern"]) - def test_is_start(self, start, tz): - ts = Timestamp("2014-01-01 00:00:00", tz=tz) - assert getattr(ts, start) - - @pytest.mark.parametrize("end", ["is_month_end", "is_year_end", "is_quarter_end"]) - @pytest.mark.parametrize("tz", [None, "US/Eastern"]) - def test_is_end(self, end, tz): - ts = Timestamp("2014-12-31 23:59:59", tz=tz) - assert getattr(ts, end) - - # GH 12806 - @pytest.mark.parametrize( - "data", - [Timestamp("2017-08-28 23:00:00"), Timestamp("2017-08-28 23:00:00", tz="EST")], - ) - # error: Unsupported operand types for + ("List[None]" and "List[str]") - @pytest.mark.parametrize( - "time_locale", [None] + tm.get_locales() # type: ignore[operator] - ) - def test_names(self, data, time_locale): - # GH 17354 - # Test .day_name(), .month_name - if time_locale is 
None: - expected_day = "Monday" - expected_month = "August" - else: - with tm.set_locale(time_locale, locale.LC_TIME): - expected_day = calendar.day_name[0].capitalize() - expected_month = calendar.month_name[8].capitalize() - - result_day = data.day_name(time_locale) - result_month = data.month_name(time_locale) - - # Work around https://github.com/pandas-dev/pandas/issues/22342 - # different normalizations - expected_day = unicodedata.normalize("NFD", expected_day) - expected_month = unicodedata.normalize("NFD", expected_month) - - result_day = unicodedata.normalize("NFD", result_day) - result_month = unicodedata.normalize("NFD", result_month) - - assert result_day == expected_day - assert result_month == expected_month - - # Test NaT - nan_ts = Timestamp(NaT) - assert np.isnan(nan_ts.day_name(time_locale)) - assert np.isnan(nan_ts.month_name(time_locale)) - - def test_is_leap_year(self, tz_naive_fixture): - tz = tz_naive_fixture - if not IS64 and tz == tzlocal(): - # https://github.com/dateutil/dateutil/issues/197 - pytest.skip( - "tzlocal() on a 32 bit platform causes internal overflow errors" - ) - # GH 13727 - dt = Timestamp("2000-01-01 00:00:00", tz=tz) - assert dt.is_leap_year - assert isinstance(dt.is_leap_year, bool) - - dt = Timestamp("1999-01-01 00:00:00", tz=tz) - assert not dt.is_leap_year - - dt = Timestamp("2004-01-01 00:00:00", tz=tz) - assert dt.is_leap_year - - dt = Timestamp("2100-01-01 00:00:00", tz=tz) - assert not dt.is_leap_year - - def test_woy_boundary(self): - # make sure weeks at year boundaries are correct - d = datetime(2013, 12, 31) - result = Timestamp(d).week - expected = 1 # ISO standard - assert result == expected - - d = datetime(2008, 12, 28) - result = Timestamp(d).week - expected = 52 # ISO standard - assert result == expected - - d = datetime(2009, 12, 31) - result = Timestamp(d).week - expected = 53 # ISO standard - assert result == expected - - d = datetime(2010, 1, 1) - result = Timestamp(d).week - expected = 53 # ISO standard - assert result == expected - - d = datetime(2010, 1, 3) - result = Timestamp(d).week - expected = 53 # ISO standard - assert result == expected - - result = np.array( - [ - Timestamp(datetime(*args)).week - for args in [(2000, 1, 1), (2000, 1, 2), (2005, 1, 1), (2005, 1, 2)] - ] - ) - assert (result == [52, 52, 53, 53]).all() - - def test_resolution(self): - # GH#21336, GH#21365 - dt = Timestamp("2100-01-01 00:00:00.000000000") - assert dt.resolution == Timedelta(nanoseconds=1) - - # Check that the attribute is available on the class, mirroring - # the stdlib datetime behavior - assert Timestamp.resolution == Timedelta(nanoseconds=1) - - assert dt.as_unit("us").resolution == Timedelta(microseconds=1) - assert dt.as_unit("ms").resolution == Timedelta(milliseconds=1) - assert dt.as_unit("s").resolution == Timedelta(seconds=1) - - @pytest.mark.parametrize( - "date_string, expected", - [ - ("0000-2-29", 1), - ("0000-3-1", 2), - ("1582-10-14", 3), - ("-0040-1-1", 4), - ("2023-06-18", 6), - ], - ) - def test_dow_historic(self, date_string, expected): - # GH 53738 - ts = Timestamp(date_string) - dow = ts.weekday() - assert dow == expected - - @given( - ts=st.datetimes(), - sign=st.sampled_from(["-", ""]), - ) - def test_dow_parametric(self, ts, sign): - # GH 53738 - ts = ( - f"{sign}{str(ts.year).zfill(4)}" - f"-{str(ts.month).zfill(2)}" - f"-{str(ts.day).zfill(2)}" - ) - result = Timestamp(ts).weekday() - expected = ( - (np.datetime64(ts) - np.datetime64("1970-01-01")).astype("int64") - 4 - ) % 7 - assert result == expected - - 
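The week-of-year assertions in test_woy_boundary above follow ISO 8601 week numbering, which Python's stdlib exposes directly; as a minimal standalone cross-check of the same boundary dates (assuming nothing beyond the standard library):

from datetime import date

# ISO 8601: week 1 of a year is the week containing that year's first
# Thursday, so a late-December date can land in week 1 of the next ISO
# year, and an early-January date in week 52/53 of the previous one.
assert date(2013, 12, 31).isocalendar()[1] == 1   # ISO year 2014, week 1
assert date(2008, 12, 28).isocalendar()[1] == 52
assert date(2009, 12, 31).isocalendar()[1] == 53
assert date(2010, 1, 1).isocalendar()[1] == 53    # still ISO year 2009
assert date(2010, 1, 3).isocalendar()[1] == 53

These are exactly the values the deleted test asserts for Timestamp(d).week, so pandas and the stdlib agree on the boundary cases.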
-class TestTimestamp: - def test_default_to_stdlib_utc(self): - assert Timestamp.utcnow().tz is timezone.utc - assert Timestamp.now("UTC").tz is timezone.utc - assert Timestamp("2016-01-01", tz="UTC").tz is timezone.utc - - def test_tz(self): - tstr = "2014-02-01 09:00" - ts = Timestamp(tstr) - local = ts.tz_localize("Asia/Tokyo") - assert local.hour == 9 - assert local == Timestamp(tstr, tz="Asia/Tokyo") - conv = local.tz_convert("US/Eastern") - assert conv == Timestamp("2014-01-31 19:00", tz="US/Eastern") - assert conv.hour == 19 - - # preserves nanosecond - ts = Timestamp(tstr) + offsets.Nano(5) - local = ts.tz_localize("Asia/Tokyo") - assert local.hour == 9 - assert local.nanosecond == 5 - conv = local.tz_convert("US/Eastern") - assert conv.nanosecond == 5 - assert conv.hour == 19 - - def test_utc_z_designator(self): - assert get_timezone(Timestamp("2014-11-02 01:00Z").tzinfo) is timezone.utc - - def test_asm8(self): - ns = [Timestamp.min._value, Timestamp.max._value, 1000] - - for n in ns: - assert ( - Timestamp(n).asm8.view("i8") == np.datetime64(n, "ns").view("i8") == n - ) - - assert Timestamp("nat").asm8.view("i8") == np.datetime64("nat", "ns").view("i8") - - def test_class_ops_pytz(self): - def compare(x, y): - assert int((Timestamp(x)._value - Timestamp(y)._value) / 1e9) == 0 - - compare(Timestamp.now(), datetime.now()) - compare(Timestamp.now("UTC"), datetime.now(pytz.timezone("UTC"))) - compare(Timestamp.utcnow(), datetime.now(timezone.utc)) - compare(Timestamp.today(), datetime.today()) - current_time = calendar.timegm(datetime.now().utctimetuple()) - - ts_utc = Timestamp.utcfromtimestamp(current_time) - assert ts_utc.timestamp() == current_time - compare( - Timestamp.fromtimestamp(current_time), datetime.fromtimestamp(current_time) - ) - compare( - # Support tz kwarg in Timestamp.fromtimestamp - Timestamp.fromtimestamp(current_time, "UTC"), - datetime.fromtimestamp(current_time, utc), - ) - compare( - # Support tz kwarg in Timestamp.fromtimestamp - Timestamp.fromtimestamp(current_time, tz="UTC"), - datetime.fromtimestamp(current_time, utc), - ) - - date_component = datetime.now(timezone.utc) - time_component = (date_component + timedelta(minutes=10)).time() - compare( - Timestamp.combine(date_component, time_component), - datetime.combine(date_component, time_component), - ) - - def test_class_ops_dateutil(self): - def compare(x, y): - assert ( - int( - np.round(Timestamp(x)._value / 1e9) - - np.round(Timestamp(y)._value / 1e9) - ) - == 0 - ) - - compare(Timestamp.now(), datetime.now()) - compare(Timestamp.now("UTC"), datetime.now(tzutc())) - compare(Timestamp.utcnow(), datetime.now(timezone.utc)) - compare(Timestamp.today(), datetime.today()) - current_time = calendar.timegm(datetime.now().utctimetuple()) - - ts_utc = Timestamp.utcfromtimestamp(current_time) - assert ts_utc.timestamp() == current_time - - compare( - Timestamp.fromtimestamp(current_time), datetime.fromtimestamp(current_time) - ) - - date_component = datetime.now(timezone.utc) - time_component = (date_component + timedelta(minutes=10)).time() - compare( - Timestamp.combine(date_component, time_component), - datetime.combine(date_component, time_component), - ) - - def test_basics_nanos(self): - val = np.int64(946_684_800_000_000_000).view("M8[ns]") - stamp = Timestamp(val.view("i8") + 500) - assert stamp.year == 2000 - assert stamp.month == 1 - assert stamp.microsecond == 0 - assert stamp.nanosecond == 500 - - # GH 14415 - val = np.iinfo(np.int64).min + 80_000_000_000_000 - stamp = Timestamp(val) - assert 
stamp.year == 1677 - assert stamp.month == 9 - assert stamp.day == 21 - assert stamp.microsecond == 145224 - assert stamp.nanosecond == 192 - - @pytest.mark.parametrize( - "value, check_kwargs", - [ - [946688461000000000, {}], - [946688461000000000 / 1000, {"unit": "us"}], - [946688461000000000 / 1_000_000, {"unit": "ms"}], - [946688461000000000 / 1_000_000_000, {"unit": "s"}], - [10957, {"unit": "D", "h": 0}], - [ - (946688461000000000 + 500000) / 1000000000, - {"unit": "s", "us": 499, "ns": 964}, - ], - [ - (946688461000000000 + 500000000) / 1000000000, - {"unit": "s", "us": 500000}, - ], - [(946688461000000000 + 500000) / 1000000, {"unit": "ms", "us": 500}], - [(946688461000000000 + 500000) / 1000, {"unit": "us", "us": 500}], - [(946688461000000000 + 500000000) / 1000000, {"unit": "ms", "us": 500000}], - [946688461000000000 / 1000.0 + 5, {"unit": "us", "us": 5}], - [946688461000000000 / 1000.0 + 5000, {"unit": "us", "us": 5000}], - [946688461000000000 / 1000000.0 + 0.5, {"unit": "ms", "us": 500}], - [946688461000000000 / 1000000.0 + 0.005, {"unit": "ms", "us": 5, "ns": 5}], - [946688461000000000 / 1000000000.0 + 0.5, {"unit": "s", "us": 500000}], - [10957 + 0.5, {"unit": "D", "h": 12}], - ], - ) - def test_unit(self, value, check_kwargs): - def check(value, unit=None, h=1, s=1, us=0, ns=0): - stamp = Timestamp(value, unit=unit) - assert stamp.year == 2000 - assert stamp.month == 1 - assert stamp.day == 1 - assert stamp.hour == h - if unit != "D": - assert stamp.minute == 1 - assert stamp.second == s - assert stamp.microsecond == us - else: - assert stamp.minute == 0 - assert stamp.second == 0 - assert stamp.microsecond == 0 - assert stamp.nanosecond == ns - - check(value, **check_kwargs) - - def test_roundtrip(self): - # test value to string and back conversions - # further test accessors - base = Timestamp("20140101 00:00:00").as_unit("ns") - - result = Timestamp(base._value + Timedelta("5ms")._value) - assert result == Timestamp(f"{base}.005000") - assert result.microsecond == 5000 - - result = Timestamp(base._value + Timedelta("5us")._value) - assert result == Timestamp(f"{base}.000005") - assert result.microsecond == 5 - - result = Timestamp(base._value + Timedelta("5ns")._value) - assert result == Timestamp(f"{base}.000000005") - assert result.nanosecond == 5 - assert result.microsecond == 0 - - result = Timestamp(base._value + Timedelta("6ms 5us")._value) - assert result == Timestamp(f"{base}.006005") - assert result.microsecond == 5 + 6 * 1000 - - result = Timestamp(base._value + Timedelta("200ms 5us")._value) - assert result == Timestamp(f"{base}.200005") - assert result.microsecond == 5 + 200 * 1000 - - def test_hash_equivalent(self): - d = {datetime(2011, 1, 1): 5} - stamp = Timestamp(datetime(2011, 1, 1)) - assert d[stamp] == 5 - - @pytest.mark.parametrize( - "timezone, year, month, day, hour", - [["America/Chicago", 2013, 11, 3, 1], ["America/Santiago", 2021, 4, 3, 23]], - ) - def test_hash_timestamp_with_fold(self, timezone, year, month, day, hour): - # see gh-33931 - test_timezone = gettz(timezone) - transition_1 = Timestamp( - year=year, - month=month, - day=day, - hour=hour, - minute=0, - fold=0, - tzinfo=test_timezone, - ) - transition_2 = Timestamp( - year=year, - month=month, - day=day, - hour=hour, - minute=0, - fold=1, - tzinfo=test_timezone, - ) - assert hash(transition_1) == hash(transition_2) - - -class TestTimestampNsOperations: - def test_nanosecond_string_parsing(self): - ts = Timestamp("2013-05-01 07:15:45.123456789") - # GH 7878 - expected_repr = "2013-05-01 
07:15:45.123456789" - expected_value = 1_367_392_545_123_456_789 - assert ts._value == expected_value - assert expected_repr in repr(ts) - - ts = Timestamp("2013-05-01 07:15:45.123456789+09:00", tz="Asia/Tokyo") - assert ts._value == expected_value - 9 * 3600 * 1_000_000_000 - assert expected_repr in repr(ts) - - ts = Timestamp("2013-05-01 07:15:45.123456789", tz="UTC") - assert ts._value == expected_value - assert expected_repr in repr(ts) - - ts = Timestamp("2013-05-01 07:15:45.123456789", tz="US/Eastern") - assert ts._value == expected_value + 4 * 3600 * 1_000_000_000 - assert expected_repr in repr(ts) - - # GH 10041 - ts = Timestamp("20130501T071545.123456789") - assert ts._value == expected_value - assert expected_repr in repr(ts) - - def test_nanosecond_timestamp(self): - # GH 7610 - expected = 1_293_840_000_000_000_005 - t = Timestamp("2011-01-01") + offsets.Nano(5) - assert repr(t) == "Timestamp('2011-01-01 00:00:00.000000005')" - assert t._value == expected - assert t.nanosecond == 5 - - t = Timestamp(t) - assert repr(t) == "Timestamp('2011-01-01 00:00:00.000000005')" - assert t._value == expected - assert t.nanosecond == 5 - - t = Timestamp("2011-01-01 00:00:00.000000005") - assert repr(t) == "Timestamp('2011-01-01 00:00:00.000000005')" - assert t._value == expected - assert t.nanosecond == 5 - - expected = 1_293_840_000_000_000_010 - t = t + offsets.Nano(5) - assert repr(t) == "Timestamp('2011-01-01 00:00:00.000000010')" - assert t._value == expected - assert t.nanosecond == 10 - - t = Timestamp(t) - assert repr(t) == "Timestamp('2011-01-01 00:00:00.000000010')" - assert t._value == expected - assert t.nanosecond == 10 - - t = Timestamp("2011-01-01 00:00:00.000000010") - assert repr(t) == "Timestamp('2011-01-01 00:00:00.000000010')" - assert t._value == expected - assert t.nanosecond == 10 - - -class TestTimestampToJulianDate: - def test_compare_1700(self): - r = Timestamp("1700-06-23").to_julian_date() - assert r == 2_342_145.5 - - def test_compare_2000(self): - r = Timestamp("2000-04-12").to_julian_date() - assert r == 2_451_646.5 - - def test_compare_2100(self): - r = Timestamp("2100-08-12").to_julian_date() - assert r == 2_488_292.5 - - def test_compare_hour01(self): - r = Timestamp("2000-08-12T01:00:00").to_julian_date() - assert r == 2_451_768.5416666666666666 - - def test_compare_hour13(self): - r = Timestamp("2000-08-12T13:00:00").to_julian_date() - assert r == 2_451_769.0416666666666666 - - -class TestTimestampConversion: - def test_conversion(self): - # GH#9255 - ts = Timestamp("2000-01-01").as_unit("ns") - - result = ts.to_pydatetime() - expected = datetime(2000, 1, 1) - assert result == expected - assert type(result) == type(expected) - - result = ts.to_datetime64() - expected = np.datetime64(ts._value, "ns") - assert result == expected - assert type(result) == type(expected) - assert result.dtype == expected.dtype - - def test_to_pydatetime_fold(self): - # GH#45087 - tzstr = "dateutil/usr/share/zoneinfo/America/Chicago" - ts = Timestamp(year=2013, month=11, day=3, hour=1, minute=0, fold=1, tz=tzstr) - dt = ts.to_pydatetime() - assert dt.fold == 1 - - def test_to_pydatetime_nonzero_nano(self): - ts = Timestamp("2011-01-01 9:00:00.123456789") - - # Warn the user of data loss (nanoseconds). 
- with tm.assert_produces_warning(UserWarning): - expected = datetime(2011, 1, 1, 9, 0, 0, 123456) - result = ts.to_pydatetime() - assert result == expected - - def test_timestamp_to_datetime(self): - stamp = Timestamp("20090415", tz="US/Eastern") - dtval = stamp.to_pydatetime() - assert stamp == dtval - assert stamp.tzinfo == dtval.tzinfo - - def test_timestamp_to_datetime_dateutil(self): - stamp = Timestamp("20090415", tz="dateutil/US/Eastern") - dtval = stamp.to_pydatetime() - assert stamp == dtval - assert stamp.tzinfo == dtval.tzinfo - - def test_timestamp_to_datetime_explicit_pytz(self): - stamp = Timestamp("20090415", tz=pytz.timezone("US/Eastern")) - dtval = stamp.to_pydatetime() - assert stamp == dtval - assert stamp.tzinfo == dtval.tzinfo - - @td.skip_if_windows - def test_timestamp_to_datetime_explicit_dateutil(self): - stamp = Timestamp("20090415", tz=gettz("US/Eastern")) - dtval = stamp.to_pydatetime() - assert stamp == dtval - assert stamp.tzinfo == dtval.tzinfo - - def test_to_datetime_bijective(self): - # Ensure that converting to datetime and back only loses precision - # by going from nanoseconds to microseconds. - exp_warning = None if Timestamp.max.nanosecond == 0 else UserWarning - with tm.assert_produces_warning(exp_warning): - pydt_max = Timestamp.max.to_pydatetime() - - assert ( - Timestamp(pydt_max).as_unit("ns")._value / 1000 - == Timestamp.max._value / 1000 - ) - - exp_warning = None if Timestamp.min.nanosecond == 0 else UserWarning - with tm.assert_produces_warning(exp_warning): - pydt_min = Timestamp.min.to_pydatetime() - - # The next assertion can be enabled once GH#39221 is merged - # assert pydt_min < Timestamp.min # this is bc nanos are dropped - tdus = timedelta(microseconds=1) - assert pydt_min + tdus > Timestamp.min - - assert ( - Timestamp(pydt_min + tdus).as_unit("ns")._value / 1000 - == Timestamp.min._value / 1000 - ) - - def test_to_period_tz_warning(self): - # GH#21333 make sure a warning is issued when timezone - # info is lost - ts = Timestamp("2009-04-15 16:17:18", tz="US/Eastern") - with tm.assert_produces_warning(UserWarning): - # warning that timezone info will be lost - ts.to_period("D") - - def test_to_numpy_alias(self): - # GH 24653: alias .to_numpy() for scalars - ts = Timestamp(datetime.now()) - assert ts.to_datetime64() == ts.to_numpy() - - # GH#44460 - msg = "dtype and copy arguments are ignored" - with pytest.raises(ValueError, match=msg): - ts.to_numpy("M8[s]") - with pytest.raises(ValueError, match=msg): - ts.to_numpy(copy=True) - - -class SubDatetime(datetime): - pass - - -@pytest.mark.parametrize( - "lh,rh", - [ - (SubDatetime(2000, 1, 1), Timedelta(hours=1)), - (Timedelta(hours=1), SubDatetime(2000, 1, 1)), - ], -) -def test_dt_subclass_add_timedelta(lh, rh): - # GH#25851 - # ensure that subclassed datetime works for - # Timedelta operations - result = lh + rh - expected = SubDatetime(2000, 1, 1, 1) - assert result == expected - - -class TestNonNano: - @pytest.fixture(params=["s", "ms", "us"]) - def reso(self, request): - return request.param - - @pytest.fixture - def dt64(self, reso): - # cases that are in-bounds for nanosecond, so we can compare against - # the existing implementation. 
- return np.datetime64("2016-01-01", reso) - - @pytest.fixture - def ts(self, dt64): - return Timestamp._from_dt64(dt64) - - @pytest.fixture - def ts_tz(self, ts, tz_aware_fixture): - tz = maybe_get_tz(tz_aware_fixture) - return Timestamp._from_value_and_reso(ts._value, ts._creso, tz) - - def test_non_nano_construction(self, dt64, ts, reso): - assert ts._value == dt64.view("i8") - - if reso == "s": - assert ts._creso == NpyDatetimeUnit.NPY_FR_s.value - elif reso == "ms": - assert ts._creso == NpyDatetimeUnit.NPY_FR_ms.value - elif reso == "us": - assert ts._creso == NpyDatetimeUnit.NPY_FR_us.value - - def test_non_nano_fields(self, dt64, ts): - alt = Timestamp(dt64) - - assert ts.year == alt.year - assert ts.month == alt.month - assert ts.day == alt.day - assert ts.hour == ts.minute == ts.second == ts.microsecond == 0 - assert ts.nanosecond == 0 - - assert ts.to_julian_date() == alt.to_julian_date() - assert ts.weekday() == alt.weekday() - assert ts.isoweekday() == alt.isoweekday() - - def test_start_end_fields(self, ts): - assert ts.is_year_start - assert ts.is_quarter_start - assert ts.is_month_start - assert not ts.is_year_end - assert not ts.is_month_end - assert not ts.is_month_end - - # 2016-01-01 is a Friday, so is year/quarter/month start with this freq - assert ts.is_year_start - assert ts.is_quarter_start - assert ts.is_month_start - assert not ts.is_year_end - assert not ts.is_month_end - assert not ts.is_month_end - - def test_day_name(self, dt64, ts): - alt = Timestamp(dt64) - assert ts.day_name() == alt.day_name() - - def test_month_name(self, dt64, ts): - alt = Timestamp(dt64) - assert ts.month_name() == alt.month_name() - - def test_tz_convert(self, ts): - ts = Timestamp._from_value_and_reso(ts._value, ts._creso, utc) - - tz = pytz.timezone("US/Pacific") - result = ts.tz_convert(tz) - - assert isinstance(result, Timestamp) - assert result._creso == ts._creso - assert tz_compare(result.tz, tz) - - def test_repr(self, dt64, ts): - alt = Timestamp(dt64) - - assert str(ts) == str(alt) - assert repr(ts) == repr(alt) - - def test_comparison(self, dt64, ts): - alt = Timestamp(dt64) - - assert ts == dt64 - assert dt64 == ts - assert ts == alt - assert alt == ts - - assert not ts != dt64 - assert not dt64 != ts - assert not ts != alt - assert not alt != ts - - assert not ts < dt64 - assert not dt64 < ts - assert not ts < alt - assert not alt < ts - - assert not ts > dt64 - assert not dt64 > ts - assert not ts > alt - assert not alt > ts - - assert ts >= dt64 - assert dt64 >= ts - assert ts >= alt - assert alt >= ts - - assert ts <= dt64 - assert dt64 <= ts - assert ts <= alt - assert alt <= ts - - def test_cmp_cross_reso(self): - # numpy gets this wrong because of silent overflow - dt64 = np.datetime64(9223372800, "s") # won't fit in M8[ns] - ts = Timestamp._from_dt64(dt64) - - # subtracting 3600*24 gives a datetime64 that _can_ fit inside the - # nanosecond implementation bounds. 
- other = Timestamp(dt64 - 3600 * 24).as_unit("ns") - assert other < ts - assert other.asm8 > ts.asm8 # <- numpy gets this wrong - assert ts > other - assert ts.asm8 < other.asm8 # <- numpy gets this wrong - assert not other == ts - assert ts != other - - @pytest.mark.xfail(reason="Dispatches to np.datetime64 which is wrong") - def test_cmp_cross_reso_reversed_dt64(self): - dt64 = np.datetime64(106752, "D") # won't fit in M8[ns] - ts = Timestamp._from_dt64(dt64) - other = Timestamp(dt64 - 1) - - assert other.asm8 < ts - - def test_pickle(self, ts, tz_aware_fixture): - tz = tz_aware_fixture - tz = maybe_get_tz(tz) - ts = Timestamp._from_value_and_reso(ts._value, ts._creso, tz) - rt = tm.round_trip_pickle(ts) - assert rt._creso == ts._creso - assert rt == ts - - def test_normalize(self, dt64, ts): - alt = Timestamp(dt64) - result = ts.normalize() - assert result._creso == ts._creso - assert result == alt.normalize() - - def test_asm8(self, dt64, ts): - rt = ts.asm8 - assert rt == dt64 - assert rt.dtype == dt64.dtype - - def test_to_numpy(self, dt64, ts): - res = ts.to_numpy() - assert res == dt64 - assert res.dtype == dt64.dtype - - def test_to_datetime64(self, dt64, ts): - res = ts.to_datetime64() - assert res == dt64 - assert res.dtype == dt64.dtype - - def test_timestamp(self, dt64, ts): - alt = Timestamp(dt64) - assert ts.timestamp() == alt.timestamp() - - def test_to_period(self, dt64, ts): - alt = Timestamp(dt64) - assert ts.to_period("D") == alt.to_period("D") - - @pytest.mark.parametrize( - "td", [timedelta(days=4), Timedelta(days=4), np.timedelta64(4, "D")] - ) - def test_addsub_timedeltalike_non_nano(self, dt64, ts, td): - exp_reso = max(ts._creso, Timedelta(td)._creso) - - result = ts - td - expected = Timestamp(dt64) - td - assert isinstance(result, Timestamp) - assert result._creso == exp_reso - assert result == expected - - result = ts + td - expected = Timestamp(dt64) + td - assert isinstance(result, Timestamp) - assert result._creso == exp_reso - assert result == expected - - result = td + ts - expected = td + Timestamp(dt64) - assert isinstance(result, Timestamp) - assert result._creso == exp_reso - assert result == expected - - def test_addsub_offset(self, ts_tz): - # specifically non-Tick offset - off = offsets.YearEnd(1) - result = ts_tz + off - - assert isinstance(result, Timestamp) - assert result._creso == ts_tz._creso - if ts_tz.month == 12 and ts_tz.day == 31: - assert result.year == ts_tz.year + 1 - else: - assert result.year == ts_tz.year - assert result.day == 31 - assert result.month == 12 - assert tz_compare(result.tz, ts_tz.tz) - - result = ts_tz - off - - assert isinstance(result, Timestamp) - assert result._creso == ts_tz._creso - assert result.year == ts_tz.year - 1 - assert result.day == 31 - assert result.month == 12 - assert tz_compare(result.tz, ts_tz.tz) - - def test_sub_datetimelike_mismatched_reso(self, ts_tz): - # case with non-lossy rounding - ts = ts_tz - - # choose a unit for `other` that doesn't match ts_tz's; - # this construction ensures we get cases with other._creso < ts._creso - # and cases with other._creso > ts._creso - unit = { - NpyDatetimeUnit.NPY_FR_us.value: "ms", - NpyDatetimeUnit.NPY_FR_ms.value: "s", - NpyDatetimeUnit.NPY_FR_s.value: "us", - }[ts._creso] - other = ts.as_unit(unit) - assert other._creso != ts._creso - - result = ts - other - assert isinstance(result, Timedelta) - assert result._value == 0 - assert result._creso == max(ts._creso, other._creso) - - result = other - ts - assert isinstance(result, Timedelta) - assert 
result._value == 0 - assert result._creso == max(ts._creso, other._creso) - - if ts._creso < other._creso: - # Case where rounding is lossy - other2 = other + Timedelta._from_value_and_reso(1, other._creso) - exp = ts.as_unit(other.unit) - other2 - - res = ts - other2 - assert res == exp - assert res._creso == max(ts._creso, other._creso) - - res = other2 - ts - assert res == -exp - assert res._creso == max(ts._creso, other._creso) - else: - ts2 = ts + Timedelta._from_value_and_reso(1, ts._creso) - exp = ts2 - other.as_unit(ts2.unit) - - res = ts2 - other - assert res == exp - assert res._creso == max(ts._creso, other._creso) - res = other - ts2 - assert res == -exp - assert res._creso == max(ts._creso, other._creso) - - def test_sub_timedeltalike_mismatched_reso(self, ts_tz): - # case with non-lossy rounding - ts = ts_tz - - # choose a unit for `other` that doesn't match ts_tz's; - # this construction ensures we get cases with other._creso < ts._creso - # and cases with other._creso > ts._creso - unit = { - NpyDatetimeUnit.NPY_FR_us.value: "ms", - NpyDatetimeUnit.NPY_FR_ms.value: "s", - NpyDatetimeUnit.NPY_FR_s.value: "us", - }[ts._creso] - other = Timedelta(0).as_unit(unit) - assert other._creso != ts._creso - - result = ts + other - assert isinstance(result, Timestamp) - assert result == ts - assert result._creso == max(ts._creso, other._creso) - - result = other + ts - assert isinstance(result, Timestamp) - assert result == ts - assert result._creso == max(ts._creso, other._creso) - - if ts._creso < other._creso: - # Case where rounding is lossy - other2 = other + Timedelta._from_value_and_reso(1, other._creso) - exp = ts.as_unit(other.unit) + other2 - res = ts + other2 - assert res == exp - assert res._creso == max(ts._creso, other._creso) - res = other2 + ts - assert res == exp - assert res._creso == max(ts._creso, other._creso) - else: - ts2 = ts + Timedelta._from_value_and_reso(1, ts._creso) - exp = ts2 + other.as_unit(ts2.unit) - - res = ts2 + other - assert res == exp - assert res._creso == max(ts._creso, other._creso) - res = other + ts2 - assert res == exp - assert res._creso == max(ts._creso, other._creso) - - def test_addition_doesnt_downcast_reso(self): - # https://github.com/pandas-dev/pandas/pull/48748#pullrequestreview-1122635413 - ts = Timestamp(year=2022, month=1, day=1, microsecond=999999).as_unit("us") - td = Timedelta(microseconds=1).as_unit("us") - res = ts + td - assert res._creso == ts._creso - - def test_sub_timedelta64_mismatched_reso(self, ts_tz): - ts = ts_tz - - res = ts + np.timedelta64(1, "ns") - exp = ts.as_unit("ns") + np.timedelta64(1, "ns") - assert exp == res - assert exp._creso == NpyDatetimeUnit.NPY_FR_ns.value - - def test_min(self, ts): - assert ts.min <= ts - assert ts.min._creso == ts._creso - assert ts.min._value == NaT._value + 1 - - def test_max(self, ts): - assert ts.max >= ts - assert ts.max._creso == ts._creso - assert ts.max._value == np.iinfo(np.int64).max - - def test_resolution(self, ts): - expected = Timedelta._from_value_and_reso(1, ts._creso) - result = ts.resolution - assert result == expected - assert result._creso == expected._creso - - def test_out_of_ns_bounds(self): - # https://github.com/pandas-dev/pandas/issues/51060 - result = Timestamp(-52700112000, unit="s") - assert result == Timestamp("0300-01-01") - assert result.to_numpy() == np.datetime64("0300-01-01T00:00:00", "s") - - -def test_timestamp_class_min_max_resolution(): - # when accessed on the class (as opposed to an instance), we default - # to nanoseconds - assert 
Timestamp.min == Timestamp(NaT._value + 1) - assert Timestamp.min._creso == NpyDatetimeUnit.NPY_FR_ns.value - - assert Timestamp.max == Timestamp(np.iinfo(np.int64).max) - assert Timestamp.max._creso == NpyDatetimeUnit.NPY_FR_ns.value - - assert Timestamp.resolution == Timedelta(1) - assert Timestamp.resolution._creso == NpyDatetimeUnit.NPY_FR_ns.value - - -class TestAsUnit: - def test_as_unit(self): - ts = Timestamp("1970-01-01").as_unit("ns") - assert ts.unit == "ns" - - assert ts.as_unit("ns") is ts - - res = ts.as_unit("us") - assert res._value == ts._value // 1000 - assert res._creso == NpyDatetimeUnit.NPY_FR_us.value - - rt = res.as_unit("ns") - assert rt._value == ts._value - assert rt._creso == ts._creso - - res = ts.as_unit("ms") - assert res._value == ts._value // 1_000_000 - assert res._creso == NpyDatetimeUnit.NPY_FR_ms.value - - rt = res.as_unit("ns") - assert rt._value == ts._value - assert rt._creso == ts._creso - - res = ts.as_unit("s") - assert res._value == ts._value // 1_000_000_000 - assert res._creso == NpyDatetimeUnit.NPY_FR_s.value - - rt = res.as_unit("ns") - assert rt._value == ts._value - assert rt._creso == ts._creso - - def test_as_unit_overflows(self): - # microsecond that would be just out of bounds for nano - us = 9223372800000000 - ts = Timestamp._from_value_and_reso(us, NpyDatetimeUnit.NPY_FR_us.value, None) - - msg = "Cannot cast 2262-04-12 00:00:00 to unit='ns' without overflow" - with pytest.raises(OutOfBoundsDatetime, match=msg): - ts.as_unit("ns") - - res = ts.as_unit("ms") - assert res._value == us // 1000 - assert res._creso == NpyDatetimeUnit.NPY_FR_ms.value - - def test_as_unit_rounding(self): - ts = Timestamp(1_500_000) # i.e. 1500 microseconds - res = ts.as_unit("ms") - - expected = Timestamp(1_000_000) # i.e. 1 millisecond - assert res == expected - - assert res._creso == NpyDatetimeUnit.NPY_FR_ms.value - assert res._value == 1 - - with pytest.raises(ValueError, match="Cannot losslessly convert units"): - ts.as_unit("ms", round_ok=False) - - def test_as_unit_non_nano(self): - # case where we are going neither to nor from nano - ts = Timestamp("1970-01-02").as_unit("ms") - assert ts.year == 1970 - assert ts.month == 1 - assert ts.day == 2 - assert ts.hour == ts.minute == ts.second == ts.microsecond == ts.nanosecond == 0 - - res = ts.as_unit("s") - assert res._value == 24 * 3600 - assert res.year == 1970 - assert res.month == 1 - assert res.day == 2 - assert ( - res.hour - == res.minute - == res.second - == res.microsecond - == res.nanosecond - == 0 - ) - - -def test_delimited_date(): - # https://github.com/pandas-dev/pandas/issues/50231 - with tm.assert_produces_warning(None): - result = Timestamp("13-01-2000") - expected = Timestamp(2000, 1, 13) - assert result == expected - - -def test_utctimetuple(): - # GH 32174 - ts = Timestamp("2000-01-01", tz="UTC") - result = ts.utctimetuple() - expected = time.struct_time((2000, 1, 1, 0, 0, 0, 5, 1, 0)) - assert result == expected - - -def test_negative_dates(): - # https://github.com/pandas-dev/pandas/issues/50787 - ts = Timestamp("-2000-01-01") - msg = ( - " not yet supported on Timestamps which are outside the range of " - "Python's standard library. For now, please call the components you need " - r"\(such as `.year` and `.month`\) and construct your string from there.$" - ) - func = "^strftime" - with pytest.raises(NotImplementedError, match=func + msg): - ts.strftime("%Y") - - msg = ( - " not yet supported on Timestamps which " - "are outside the range of Python's standard library. 
" - ) - func = "^date" - with pytest.raises(NotImplementedError, match=func + msg): - ts.date() - func = "^isocalendar" - with pytest.raises(NotImplementedError, match=func + msg): - ts.isocalendar() - func = "^timetuple" - with pytest.raises(NotImplementedError, match=func + msg): - ts.timetuple() - func = "^toordinal" - with pytest.raises(NotImplementedError, match=func + msg): - ts.toordinal() diff --git a/spaces/profayle/TerrapinTalk/myenv/lib/python3.9/site-packages/pandas/tests/series/methods/test_combine_first.py b/spaces/profayle/TerrapinTalk/myenv/lib/python3.9/site-packages/pandas/tests/series/methods/test_combine_first.py deleted file mode 100644 index d2d8eab1cb38bd0acda702405aff77dc97d4a6ab..0000000000000000000000000000000000000000 --- a/spaces/profayle/TerrapinTalk/myenv/lib/python3.9/site-packages/pandas/tests/series/methods/test_combine_first.py +++ /dev/null @@ -1,149 +0,0 @@ -from datetime import datetime - -import numpy as np - -import pandas as pd -from pandas import ( - Period, - Series, - date_range, - period_range, - to_datetime, -) -import pandas._testing as tm - - -class TestCombineFirst: - def test_combine_first_period_datetime(self): - # GH#3367 - didx = date_range(start="1950-01-31", end="1950-07-31", freq="M") - pidx = period_range(start=Period("1950-1"), end=Period("1950-7"), freq="M") - # check to be consistent with DatetimeIndex - for idx in [didx, pidx]: - a = Series([1, np.nan, np.nan, 4, 5, np.nan, 7], index=idx) - b = Series([9, 9, 9, 9, 9, 9, 9], index=idx) - - result = a.combine_first(b) - expected = Series([1, 9, 9, 4, 5, 9, 7], index=idx, dtype=np.float64) - tm.assert_series_equal(result, expected) - - def test_combine_first_name(self, datetime_series): - result = datetime_series.combine_first(datetime_series[:5]) - assert result.name == datetime_series.name - - def test_combine_first(self): - values = tm.makeIntIndex(20).values.astype(float) - series = Series(values, index=tm.makeIntIndex(20)) - - series_copy = series * 2 - series_copy[::2] = np.nan - - # nothing used from the input - combined = series.combine_first(series_copy) - - tm.assert_series_equal(combined, series) - - # Holes filled from input - combined = series_copy.combine_first(series) - assert np.isfinite(combined).all() - - tm.assert_series_equal(combined[::2], series[::2]) - tm.assert_series_equal(combined[1::2], series_copy[1::2]) - - # mixed types - index = tm.makeStringIndex(20) - floats = Series(np.random.default_rng(2).standard_normal(20), index=index) - strings = Series(tm.makeStringIndex(10), index=index[::2]) - - combined = strings.combine_first(floats) - - tm.assert_series_equal(strings, combined.loc[index[::2]]) - tm.assert_series_equal(floats[1::2].astype(object), combined.loc[index[1::2]]) - - # corner case - ser = Series([1.0, 2, 3], index=[0, 1, 2]) - empty = Series([], index=[], dtype=object) - msg = "The behavior of array concatenation with empty entries is deprecated" - with tm.assert_produces_warning(FutureWarning, match=msg): - result = ser.combine_first(empty) - ser.index = ser.index.astype("O") - tm.assert_series_equal(ser, result) - - def test_combine_first_dt64(self): - s0 = to_datetime(Series(["2010", np.nan])) - s1 = to_datetime(Series([np.nan, "2011"])) - rs = s0.combine_first(s1) - xp = to_datetime(Series(["2010", "2011"])) - tm.assert_series_equal(rs, xp) - - s0 = to_datetime(Series(["2010", np.nan])) - s1 = Series([np.nan, "2011"]) - rs = s0.combine_first(s1) - - xp = Series([datetime(2010, 1, 1), "2011"], dtype="datetime64[ns]") - - 
tm.assert_series_equal(rs, xp) - - def test_combine_first_dt_tz_values(self, tz_naive_fixture): - ser1 = Series( - pd.DatetimeIndex(["20150101", "20150102", "20150103"], tz=tz_naive_fixture), - name="ser1", - ) - ser2 = Series( - pd.DatetimeIndex(["20160514", "20160515", "20160516"], tz=tz_naive_fixture), - index=[2, 3, 4], - name="ser2", - ) - result = ser1.combine_first(ser2) - exp_vals = pd.DatetimeIndex( - ["20150101", "20150102", "20150103", "20160515", "20160516"], - tz=tz_naive_fixture, - ) - exp = Series(exp_vals, name="ser1") - tm.assert_series_equal(exp, result) - - def test_combine_first_timezone_series_with_empty_series(self): - # GH 41800 - time_index = date_range( - datetime(2021, 1, 1, 1), - datetime(2021, 1, 1, 10), - freq="H", - tz="Europe/Rome", - ) - s1 = Series(range(10), index=time_index) - s2 = Series(index=time_index) - msg = "The behavior of array concatenation with empty entries is deprecated" - with tm.assert_produces_warning(FutureWarning, match=msg): - result = s1.combine_first(s2) - tm.assert_series_equal(result, s1) - - def test_combine_first_preserves_dtype(self): - # GH51764 - s1 = Series([1666880195890293744, 1666880195890293837]) - s2 = Series([1, 2, 3]) - result = s1.combine_first(s2) - expected = Series([1666880195890293744, 1666880195890293837, 3]) - tm.assert_series_equal(result, expected) - - def test_combine_mixed_timezone(self): - # GH 26283 - uniform_tz = Series({pd.Timestamp("2019-05-01", tz="UTC"): 1.0}) - multi_tz = Series( - { - pd.Timestamp("2019-05-01 01:00:00+0100", tz="Europe/London"): 2.0, - pd.Timestamp("2019-05-02", tz="UTC"): 3.0, - } - ) - - result = uniform_tz.combine_first(multi_tz) - expected = Series( - [1.0, 3.0], - index=pd.Index( - [ - pd.Timestamp("2019-05-01 00:00:00+00:00", tz="UTC"), - pd.Timestamp("2019-05-02 00:00:00+00:00", tz="UTC"), - ], - dtype="object", - ), - ) - tm.assert_series_equal(result, expected) diff --git a/spaces/profayle/TerrapinTalk/myenv/lib/python3.9/site-packages/pip/_internal/operations/build/metadata_editable.py b/spaces/profayle/TerrapinTalk/myenv/lib/python3.9/site-packages/pip/_internal/operations/build/metadata_editable.py deleted file mode 100644 index 4c3f48b6cdfb3087a833546410fc810a343b9e13..0000000000000000000000000000000000000000 --- a/spaces/profayle/TerrapinTalk/myenv/lib/python3.9/site-packages/pip/_internal/operations/build/metadata_editable.py +++ /dev/null @@ -1,41 +0,0 @@ -"""Metadata generation logic for source distributions. -""" - -import os - -from pip._vendor.pep517.wrappers import Pep517HookCaller - -from pip._internal.build_env import BuildEnvironment -from pip._internal.exceptions import ( - InstallationSubprocessError, - MetadataGenerationFailed, -) -from pip._internal.utils.subprocess import runner_with_spinner_message -from pip._internal.utils.temp_dir import TempDirectory - - -def generate_editable_metadata( - build_env: BuildEnvironment, backend: Pep517HookCaller, details: str -) -> str: - """Generate metadata using mechanisms described in PEP 660. - - Returns the generated metadata directory. - """ - metadata_tmpdir = TempDirectory(kind="modern-metadata", globally_managed=True) - - metadata_dir = metadata_tmpdir.path - - with build_env: - # Note that Pep517HookCaller implements a fallback for - # prepare_metadata_for_build_wheel/editable, so we don't have to - # consider the possibility that this hook doesn't exist. 
- runner = runner_with_spinner_message( - "Preparing editable metadata (pyproject.toml)" - ) - with backend.subprocess_runner(runner): - try: - distinfo_dir = backend.prepare_metadata_for_build_editable(metadata_dir) - except InstallationSubprocessError as error: - raise MetadataGenerationFailed(package_details=details) from error - - return os.path.join(metadata_dir, distinfo_dir) diff --git a/spaces/profayle/TerrapinTalk/myenv/lib/python3.9/site-packages/pip/_vendor/cachecontrol/caches/redis_cache.py b/spaces/profayle/TerrapinTalk/myenv/lib/python3.9/site-packages/pip/_vendor/cachecontrol/caches/redis_cache.py deleted file mode 100644 index 720b507c523ddb54fef42245f1ace7610f2cf8b5..0000000000000000000000000000000000000000 --- a/spaces/profayle/TerrapinTalk/myenv/lib/python3.9/site-packages/pip/_vendor/cachecontrol/caches/redis_cache.py +++ /dev/null @@ -1,37 +0,0 @@ -# SPDX-FileCopyrightText: 2015 Eric Larson -# -# SPDX-License-Identifier: Apache-2.0 - -from __future__ import division - -from datetime import datetime -from pip._vendor.cachecontrol.cache import BaseCache - - -class RedisCache(BaseCache): - - def __init__(self, conn): - self.conn = conn - - def get(self, key): - return self.conn.get(key) - - def set(self, key, value, expires=None): - if not expires: - self.conn.set(key, value) - else: - expires = expires - datetime.utcnow() - self.conn.setex(key, int(expires.total_seconds()), value) - - def delete(self, key): - self.conn.delete(key) - - def clear(self): - """Helper for clearing all the keys in a database. Use with - caution!""" - for key in self.conn.keys(): - self.conn.delete(key) - - def close(self): - """Redis uses connection pooling, no need to close the connection.""" - pass diff --git a/spaces/profayle/TerrapinTalk/myenv/lib/python3.9/site-packages/pip/_vendor/rich/text.py b/spaces/profayle/TerrapinTalk/myenv/lib/python3.9/site-packages/pip/_vendor/rich/text.py deleted file mode 100644 index ea12c09d7296b4fe36be0b0b37269947cb702d3c..0000000000000000000000000000000000000000 --- a/spaces/profayle/TerrapinTalk/myenv/lib/python3.9/site-packages/pip/_vendor/rich/text.py +++ /dev/null @@ -1,1282 +0,0 @@ -import re -from functools import partial, reduce -from math import gcd -from operator import itemgetter -from pip._vendor.rich.emoji import EmojiVariant -from typing import ( - TYPE_CHECKING, - Any, - Callable, - Dict, - Iterable, - List, - NamedTuple, - Optional, - Tuple, - Union, -) - -from ._loop import loop_last -from ._pick import pick_bool -from ._wrap import divide_line -from .align import AlignMethod -from .cells import cell_len, set_cell_size -from .containers import Lines -from .control import strip_control_codes -from .emoji import EmojiVariant -from .jupyter import JupyterMixin -from .measure import Measurement -from .segment import Segment -from .style import Style, StyleType - -if TYPE_CHECKING: # pragma: no cover - from .console import Console, ConsoleOptions, JustifyMethod, OverflowMethod - -DEFAULT_JUSTIFY: "JustifyMethod" = "default" -DEFAULT_OVERFLOW: "OverflowMethod" = "fold" - - -_re_whitespace = re.compile(r"\s+$") - -TextType = Union[str, "Text"] - -GetStyleCallable = Callable[[str], Optional[StyleType]] - - -class Span(NamedTuple): - """A marked up region in some text.""" - - start: int - """Span start index.""" - end: int - """Span end index.""" - style: Union[str, Style] - """Style associated with the span.""" - - def __repr__(self) -> str: - return ( - f"Span({self.start}, {self.end}, {self.style!r})" - if (isinstance(self.style, Style) and 
self.style._meta) - else f"Span({self.start}, {self.end}, {repr(self.style)})" - ) - - def __bool__(self) -> bool: - return self.end > self.start - - def split(self, offset: int) -> Tuple["Span", Optional["Span"]]: - """Split a span in to 2 from a given offset.""" - - if offset < self.start: - return self, None - if offset >= self.end: - return self, None - - start, end, style = self - span1 = Span(start, min(end, offset), style) - span2 = Span(span1.end, end, style) - return span1, span2 - - def move(self, offset: int) -> "Span": - """Move start and end by a given offset. - - Args: - offset (int): Number of characters to add to start and end. - - Returns: - TextSpan: A new TextSpan with adjusted position. - """ - start, end, style = self - return Span(start + offset, end + offset, style) - - def right_crop(self, offset: int) -> "Span": - """Crop the span at the given offset. - - Args: - offset (int): A value between start and end. - - Returns: - Span: A new (possibly smaller) span. - """ - start, end, style = self - if offset >= end: - return self - return Span(start, min(offset, end), style) - - -class Text(JupyterMixin): - """Text with color / style. - - Args: - text (str, optional): Default unstyled text. Defaults to "". - style (Union[str, Style], optional): Base style for text. Defaults to "". - justify (str, optional): Justify method: "left", "center", "full", "right". Defaults to None. - overflow (str, optional): Overflow method: "crop", "fold", "ellipsis". Defaults to None. - no_wrap (bool, optional): Disable text wrapping, or None for default. Defaults to None. - end (str, optional): Character to end text with. Defaults to "\\\\n". - tab_size (int): Number of spaces per tab, or ``None`` to use ``console.tab_size``. Defaults to 8. - spans (List[Span], optional). A list of predefined style spans. Defaults to None. 
- """ - - __slots__ = [ - "_text", - "style", - "justify", - "overflow", - "no_wrap", - "end", - "tab_size", - "_spans", - "_length", - ] - - def __init__( - self, - text: str = "", - style: Union[str, Style] = "", - *, - justify: Optional["JustifyMethod"] = None, - overflow: Optional["OverflowMethod"] = None, - no_wrap: Optional[bool] = None, - end: str = "\n", - tab_size: Optional[int] = 8, - spans: Optional[List[Span]] = None, - ) -> None: - self._text = [strip_control_codes(text)] - self.style = style - self.justify: Optional["JustifyMethod"] = justify - self.overflow: Optional["OverflowMethod"] = overflow - self.no_wrap = no_wrap - self.end = end - self.tab_size = tab_size - self._spans: List[Span] = spans or [] - self._length: int = len(text) - - def __len__(self) -> int: - return self._length - - def __bool__(self) -> bool: - return bool(self._length) - - def __str__(self) -> str: - return self.plain - - def __repr__(self) -> str: - return f"" - - def __add__(self, other: Any) -> "Text": - if isinstance(other, (str, Text)): - result = self.copy() - result.append(other) - return result - return NotImplemented - - def __eq__(self, other: object) -> bool: - if not isinstance(other, Text): - return NotImplemented - return self.plain == other.plain and self._spans == other._spans - - def __contains__(self, other: object) -> bool: - if isinstance(other, str): - return other in self.plain - elif isinstance(other, Text): - return other.plain in self.plain - return False - - def __getitem__(self, slice: Union[int, slice]) -> "Text": - def get_text_at(offset: int) -> "Text": - _Span = Span - text = Text( - self.plain[offset], - spans=[ - _Span(0, 1, style) - for start, end, style in self._spans - if end > offset >= start - ], - end="", - ) - return text - - if isinstance(slice, int): - return get_text_at(slice) - else: - start, stop, step = slice.indices(len(self.plain)) - if step == 1: - lines = self.divide([start, stop]) - return lines[1] - else: - # This would be a bit of work to implement efficiently - # For now, its not required - raise TypeError("slices with step!=1 are not supported") - - @property - def cell_len(self) -> int: - """Get the number of cells required to render this text.""" - return cell_len(self.plain) - - @property - def markup(self) -> str: - """Get console markup to render this Text. - - Returns: - str: A string potentially creating markup tags. - """ - from .markup import escape - - output: List[str] = [] - - plain = self.plain - markup_spans = [ - (0, False, self.style), - *((span.start, False, span.style) for span in self._spans), - *((span.end, True, span.style) for span in self._spans), - (len(plain), True, self.style), - ] - markup_spans.sort(key=itemgetter(0, 1)) - position = 0 - append = output.append - for offset, closing, style in markup_spans: - if offset > position: - append(escape(plain[position:offset])) - position = offset - if style: - append(f"[/{style}]" if closing else f"[{style}]") - markup = "".join(output) - return markup - - @classmethod - def from_markup( - cls, - text: str, - *, - style: Union[str, Style] = "", - emoji: bool = True, - emoji_variant: Optional[EmojiVariant] = None, - justify: Optional["JustifyMethod"] = None, - overflow: Optional["OverflowMethod"] = None, - ) -> "Text": - """Create Text instance from markup. - - Args: - text (str): A string containing console markup. - emoji (bool, optional): Also render emoji code. Defaults to True. - justify (str, optional): Justify method: "left", "center", "full", "right". Defaults to None. 
-            overflow (str, optional): Overflow method: "crop", "fold", "ellipsis". Defaults to None.
-
-        Returns:
-            Text: A Text instance with markup rendered.
-        """
-        from .markup import render
-
-        rendered_text = render(text, style, emoji=emoji, emoji_variant=emoji_variant)
-        rendered_text.justify = justify
-        rendered_text.overflow = overflow
-        return rendered_text
-
-    @classmethod
-    def from_ansi(
-        cls,
-        text: str,
-        *,
-        style: Union[str, Style] = "",
-        justify: Optional["JustifyMethod"] = None,
-        overflow: Optional["OverflowMethod"] = None,
-        no_wrap: Optional[bool] = None,
-        end: str = "\n",
-        tab_size: Optional[int] = 8,
-    ) -> "Text":
-        """Create a Text object from a string containing ANSI escape codes.
-
-        Args:
-            text (str): A string containing escape codes.
-            style (Union[str, Style], optional): Base style for text. Defaults to "".
-            justify (str, optional): Justify method: "left", "center", "full", "right". Defaults to None.
-            overflow (str, optional): Overflow method: "crop", "fold", "ellipsis". Defaults to None.
-            no_wrap (bool, optional): Disable text wrapping, or None for default. Defaults to None.
-            end (str, optional): Character to end text with. Defaults to "\\n".
-            tab_size (int): Number of spaces per tab, or ``None`` to use ``console.tab_size``. Defaults to 8.
-        """
-        from .ansi import AnsiDecoder
-
-        joiner = Text(
-            "\n",
-            justify=justify,
-            overflow=overflow,
-            no_wrap=no_wrap,
-            end=end,
-            tab_size=tab_size,
-            style=style,
-        )
-        decoder = AnsiDecoder()
-        result = joiner.join(line for line in decoder.decode(text))
-        return result
-
-    @classmethod
-    def styled(
-        cls,
-        text: str,
-        style: StyleType = "",
-        *,
-        justify: Optional["JustifyMethod"] = None,
-        overflow: Optional["OverflowMethod"] = None,
-    ) -> "Text":
-        """Construct a Text instance with a pre-applied style. A style applied in this way won't be used
-        to pad the text when it is justified.
-
-        Args:
-            text (str): A string containing console markup.
-            style (Union[str, Style]): Style to apply to the text. Defaults to "".
-            justify (str, optional): Justify method: "left", "center", "full", "right". Defaults to None.
-            overflow (str, optional): Overflow method: "crop", "fold", "ellipsis". Defaults to None.
-
-        Returns:
-            Text: A text instance with a style applied to the entire string.
-        """
-        styled_text = cls(text, justify=justify, overflow=overflow)
-        styled_text.stylize(style)
-        return styled_text
-
-    @classmethod
-    def assemble(
-        cls,
-        *parts: Union[str, "Text", Tuple[str, StyleType]],
-        style: Union[str, Style] = "",
-        justify: Optional["JustifyMethod"] = None,
-        overflow: Optional["OverflowMethod"] = None,
-        no_wrap: Optional[bool] = None,
-        end: str = "\n",
-        tab_size: int = 8,
-        meta: Optional[Dict[str, Any]] = None,
-    ) -> "Text":
-        """Construct a text instance by combining a sequence of strings with optional styles.
-        The positional arguments should be either strings, or a tuple of string + style.
-
-        Args:
-            style (Union[str, Style], optional): Base style for text. Defaults to "".
-            justify (str, optional): Justify method: "left", "center", "full", "right". Defaults to None.
-            overflow (str, optional): Overflow method: "crop", "fold", "ellipsis". Defaults to None.
-            end (str, optional): Character to end text with. Defaults to "\\n".
-            tab_size (int): Number of spaces per tab, or ``None`` to use ``console.tab_size``. Defaults to 8.
-            meta (Dict[str, Any], optional): Meta data to apply to text, or None for no meta data. Defaults to None.
-
-        Returns:
-            Text: A new text instance.
- """ - text = cls( - style=style, - justify=justify, - overflow=overflow, - no_wrap=no_wrap, - end=end, - tab_size=tab_size, - ) - append = text.append - _Text = Text - for part in parts: - if isinstance(part, (_Text, str)): - append(part) - else: - append(*part) - if meta: - text.apply_meta(meta) - return text - - @property - def plain(self) -> str: - """Get the text as a single string.""" - if len(self._text) != 1: - self._text[:] = ["".join(self._text)] - return self._text[0] - - @plain.setter - def plain(self, new_text: str) -> None: - """Set the text to a new value.""" - if new_text != self.plain: - self._text[:] = [new_text] - old_length = self._length - self._length = len(new_text) - if old_length > self._length: - self._trim_spans() - - @property - def spans(self) -> List[Span]: - """Get a reference to the internal list of spans.""" - return self._spans - - @spans.setter - def spans(self, spans: List[Span]) -> None: - """Set spans.""" - self._spans = spans[:] - - def blank_copy(self, plain: str = "") -> "Text": - """Return a new Text instance with copied meta data (but not the string or spans).""" - copy_self = Text( - plain, - style=self.style, - justify=self.justify, - overflow=self.overflow, - no_wrap=self.no_wrap, - end=self.end, - tab_size=self.tab_size, - ) - return copy_self - - def copy(self) -> "Text": - """Return a copy of this instance.""" - copy_self = Text( - self.plain, - style=self.style, - justify=self.justify, - overflow=self.overflow, - no_wrap=self.no_wrap, - end=self.end, - tab_size=self.tab_size, - ) - copy_self._spans[:] = self._spans - return copy_self - - def stylize( - self, - style: Union[str, Style], - start: int = 0, - end: Optional[int] = None, - ) -> None: - """Apply a style to the text, or a portion of the text. - - Args: - style (Union[str, Style]): Style instance or style definition to apply. - start (int): Start offset (negative indexing is supported). Defaults to 0. - end (Optional[int], optional): End offset (negative indexing is supported), or None for end of text. Defaults to None. - - """ - if style: - length = len(self) - if start < 0: - start = length + start - if end is None: - end = length - if end < 0: - end = length + end - if start >= length or end <= start: - # Span not in text or not valid - return - self._spans.append(Span(start, min(length, end), style)) - - def apply_meta( - self, meta: Dict[str, Any], start: int = 0, end: Optional[int] = None - ) -> None: - """Apply meta data to the text, or a portion of the text. - - Args: - meta (Dict[str, Any]): A dict of meta information. - start (int): Start offset (negative indexing is supported). Defaults to 0. - end (Optional[int], optional): End offset (negative indexing is supported), or None for end of text. Defaults to None. - - """ - style = Style.from_meta(meta) - self.stylize(style, start=start, end=end) - - def on(self, meta: Optional[Dict[str, Any]] = None, **handlers: Any) -> "Text": - """Apply event handlers (used by Textual project). - - Example: - >>> from rich.text import Text - >>> text = Text("hello world") - >>> text.on(click="view.toggle('world')") - - Args: - meta (Dict[str, Any]): Mapping of meta information. - **handlers: Keyword args are prefixed with "@" to defined handlers. - - Returns: - Text: Self is returned to method may be chained. 
- """ - meta = {} if meta is None else meta - meta.update({f"@{key}": value for key, value in handlers.items()}) - self.stylize(Style.from_meta(meta)) - return self - - def remove_suffix(self, suffix: str) -> None: - """Remove a suffix if it exists. - - Args: - suffix (str): Suffix to remove. - """ - if self.plain.endswith(suffix): - self.right_crop(len(suffix)) - - def get_style_at_offset(self, console: "Console", offset: int) -> Style: - """Get the style of a character at give offset. - - Args: - console (~Console): Console where text will be rendered. - offset (int): Offset in to text (negative indexing supported) - - Returns: - Style: A Style instance. - """ - # TODO: This is a little inefficient, it is only used by full justify - if offset < 0: - offset = len(self) + offset - get_style = console.get_style - style = get_style(self.style).copy() - for start, end, span_style in self._spans: - if end > offset >= start: - style += get_style(span_style, default="") - return style - - def highlight_regex( - self, - re_highlight: str, - style: Optional[Union[GetStyleCallable, StyleType]] = None, - *, - style_prefix: str = "", - ) -> int: - """Highlight text with a regular expression, where group names are - translated to styles. - - Args: - re_highlight (str): A regular expression. - style (Union[GetStyleCallable, StyleType]): Optional style to apply to whole match, or a callable - which accepts the matched text and returns a style. Defaults to None. - style_prefix (str, optional): Optional prefix to add to style group names. - - Returns: - int: Number of regex matches - """ - count = 0 - append_span = self._spans.append - _Span = Span - plain = self.plain - for match in re.finditer(re_highlight, plain): - get_span = match.span - if style: - start, end = get_span() - match_style = style(plain[start:end]) if callable(style) else style - if match_style is not None and end > start: - append_span(_Span(start, end, match_style)) - - count += 1 - for name in match.groupdict().keys(): - start, end = get_span(name) - if start != -1 and end > start: - append_span(_Span(start, end, f"{style_prefix}{name}")) - return count - - def highlight_words( - self, - words: Iterable[str], - style: Union[str, Style], - *, - case_sensitive: bool = True, - ) -> int: - """Highlight words with a style. - - Args: - words (Iterable[str]): Worlds to highlight. - style (Union[str, Style]): Style to apply. - case_sensitive (bool, optional): Enable case sensitive matchings. Defaults to True. - - Returns: - int: Number of words highlighted. - """ - re_words = "|".join(re.escape(word) for word in words) - add_span = self._spans.append - count = 0 - _Span = Span - for match in re.finditer( - re_words, self.plain, flags=0 if case_sensitive else re.IGNORECASE - ): - start, end = match.span(0) - add_span(_Span(start, end, style)) - count += 1 - return count - - def rstrip(self) -> None: - """Strip whitespace from end of text.""" - self.plain = self.plain.rstrip() - - def rstrip_end(self, size: int) -> None: - """Remove whitespace beyond a certain width at the end of the text. - - Args: - size (int): The desired size of the text. 
- """ - text_length = len(self) - if text_length > size: - excess = text_length - size - whitespace_match = _re_whitespace.search(self.plain) - if whitespace_match is not None: - whitespace_count = len(whitespace_match.group(0)) - self.right_crop(min(whitespace_count, excess)) - - def set_length(self, new_length: int) -> None: - """Set new length of the text, clipping or padding is required.""" - length = len(self) - if length != new_length: - if length < new_length: - self.pad_right(new_length - length) - else: - self.right_crop(length - new_length) - - def __rich_console__( - self, console: "Console", options: "ConsoleOptions" - ) -> Iterable[Segment]: - tab_size: int = console.tab_size or self.tab_size or 8 - justify = self.justify or options.justify or DEFAULT_JUSTIFY - - overflow = self.overflow or options.overflow or DEFAULT_OVERFLOW - - lines = self.wrap( - console, - options.max_width, - justify=justify, - overflow=overflow, - tab_size=tab_size or 8, - no_wrap=pick_bool(self.no_wrap, options.no_wrap, False), - ) - all_lines = Text("\n").join(lines) - yield from all_lines.render(console, end=self.end) - - def __rich_measure__( - self, console: "Console", options: "ConsoleOptions" - ) -> Measurement: - text = self.plain - lines = text.splitlines() - max_text_width = max(cell_len(line) for line in lines) if lines else 0 - words = text.split() - min_text_width = ( - max(cell_len(word) for word in words) if words else max_text_width - ) - return Measurement(min_text_width, max_text_width) - - def render(self, console: "Console", end: str = "") -> Iterable["Segment"]: - """Render the text as Segments. - - Args: - console (Console): Console instance. - end (Optional[str], optional): Optional end character. - - Returns: - Iterable[Segment]: Result of render that may be written to the console. - """ - _Segment = Segment - text = self.plain - if not self._spans: - yield Segment(text) - if end: - yield _Segment(end) - return - get_style = partial(console.get_style, default=Style.null()) - - enumerated_spans = list(enumerate(self._spans, 1)) - style_map = {index: get_style(span.style) for index, span in enumerated_spans} - style_map[0] = get_style(self.style) - - spans = [ - (0, False, 0), - *((span.start, False, index) for index, span in enumerated_spans), - *((span.end, True, index) for index, span in enumerated_spans), - (len(text), True, 0), - ] - spans.sort(key=itemgetter(0, 1)) - - stack: List[int] = [] - stack_append = stack.append - stack_pop = stack.remove - - style_cache: Dict[Tuple[Style, ...], Style] = {} - style_cache_get = style_cache.get - combine = Style.combine - - def get_current_style() -> Style: - """Construct current style from stack.""" - styles = tuple(style_map[_style_id] for _style_id in sorted(stack)) - cached_style = style_cache_get(styles) - if cached_style is not None: - return cached_style - current_style = combine(styles) - style_cache[styles] = current_style - return current_style - - for (offset, leaving, style_id), (next_offset, _, _) in zip(spans, spans[1:]): - if leaving: - stack_pop(style_id) - else: - stack_append(style_id) - if next_offset > offset: - yield _Segment(text[offset:next_offset], get_current_style()) - if end: - yield _Segment(end) - - def join(self, lines: Iterable["Text"]) -> "Text": - """Join text together with this instance as the separator. - - Args: - lines (Iterable[Text]): An iterable of Text instances to join. - - Returns: - Text: A new text instance containing join text. 
- """ - - new_text = self.blank_copy() - - def iter_text() -> Iterable["Text"]: - if self.plain: - for last, line in loop_last(lines): - yield line - if not last: - yield self - else: - yield from lines - - extend_text = new_text._text.extend - append_span = new_text._spans.append - extend_spans = new_text._spans.extend - offset = 0 - _Span = Span - - for text in iter_text(): - extend_text(text._text) - if text.style: - append_span(_Span(offset, offset + len(text), text.style)) - extend_spans( - _Span(offset + start, offset + end, style) - for start, end, style in text._spans - ) - offset += len(text) - new_text._length = offset - return new_text - - def expand_tabs(self, tab_size: Optional[int] = None) -> None: - """Converts tabs to spaces. - - Args: - tab_size (int, optional): Size of tabs. Defaults to 8. - - """ - if "\t" not in self.plain: - return - pos = 0 - if tab_size is None: - tab_size = self.tab_size - assert tab_size is not None - result = self.blank_copy() - append = result.append - - _style = self.style - for line in self.split("\n", include_separator=True): - parts = line.split("\t", include_separator=True) - for part in parts: - if part.plain.endswith("\t"): - part._text = [part.plain[:-1] + " "] - append(part) - pos += len(part) - spaces = tab_size - ((pos - 1) % tab_size) - 1 - if spaces: - append(" " * spaces, _style) - pos += spaces - else: - append(part) - self._text = [result.plain] - self._length = len(self.plain) - self._spans[:] = result._spans - - def truncate( - self, - max_width: int, - *, - overflow: Optional["OverflowMethod"] = None, - pad: bool = False, - ) -> None: - """Truncate text if it is longer that a given width. - - Args: - max_width (int): Maximum number of characters in text. - overflow (str, optional): Overflow method: "crop", "fold", or "ellipsis". Defaults to None, to use self.overflow. - pad (bool, optional): Pad with spaces if the length is less than max_width. Defaults to False. - """ - _overflow = overflow or self.overflow or DEFAULT_OVERFLOW - if _overflow != "ignore": - length = cell_len(self.plain) - if length > max_width: - if _overflow == "ellipsis": - self.plain = set_cell_size(self.plain, max_width - 1) + "…" - else: - self.plain = set_cell_size(self.plain, max_width) - if pad and length < max_width: - spaces = max_width - length - self._text = [f"{self.plain}{' ' * spaces}"] - self._length = len(self.plain) - - def _trim_spans(self) -> None: - """Remove or modify any spans that are over the end of the text.""" - max_offset = len(self.plain) - _Span = Span - self._spans[:] = [ - ( - span - if span.end < max_offset - else _Span(span.start, min(max_offset, span.end), span.style) - ) - for span in self._spans - if span.start < max_offset - ] - - def pad(self, count: int, character: str = " ") -> None: - """Pad left and right with a given number of characters. - - Args: - count (int): Width of padding. - """ - assert len(character) == 1, "Character must be a string of length 1" - if count: - pad_characters = character * count - self.plain = f"{pad_characters}{self.plain}{pad_characters}" - _Span = Span - self._spans[:] = [ - _Span(start + count, end + count, style) - for start, end, style in self._spans - ] - - def pad_left(self, count: int, character: str = " ") -> None: - """Pad the left with a given character. - - Args: - count (int): Number of characters to pad. - character (str, optional): Character to pad with. Defaults to " ". 
- """ - assert len(character) == 1, "Character must be a string of length 1" - if count: - self.plain = f"{character * count}{self.plain}" - _Span = Span - self._spans[:] = [ - _Span(start + count, end + count, style) - for start, end, style in self._spans - ] - - def pad_right(self, count: int, character: str = " ") -> None: - """Pad the right with a given character. - - Args: - count (int): Number of characters to pad. - character (str, optional): Character to pad with. Defaults to " ". - """ - assert len(character) == 1, "Character must be a string of length 1" - if count: - self.plain = f"{self.plain}{character * count}" - - def align(self, align: AlignMethod, width: int, character: str = " ") -> None: - """Align text to a given width. - - Args: - align (AlignMethod): One of "left", "center", or "right". - width (int): Desired width. - character (str, optional): Character to pad with. Defaults to " ". - """ - self.truncate(width) - excess_space = width - cell_len(self.plain) - if excess_space: - if align == "left": - self.pad_right(excess_space, character) - elif align == "center": - left = excess_space // 2 - self.pad_left(left, character) - self.pad_right(excess_space - left, character) - else: - self.pad_left(excess_space, character) - - def append( - self, text: Union["Text", str], style: Optional[Union[str, "Style"]] = None - ) -> "Text": - """Add text with an optional style. - - Args: - text (Union[Text, str]): A str or Text to append. - style (str, optional): A style name. Defaults to None. - - Returns: - Text: Returns self for chaining. - """ - - if not isinstance(text, (str, Text)): - raise TypeError("Only str or Text can be appended to Text") - - if len(text): - if isinstance(text, str): - text = strip_control_codes(text) - self._text.append(text) - offset = len(self) - text_length = len(text) - if style is not None: - self._spans.append(Span(offset, offset + text_length, style)) - self._length += text_length - elif isinstance(text, Text): - _Span = Span - if style is not None: - raise ValueError( - "style must not be set when appending Text instance" - ) - text_length = self._length - if text.style is not None: - self._spans.append( - _Span(text_length, text_length + len(text), text.style) - ) - self._text.append(text.plain) - self._spans.extend( - _Span(start + text_length, end + text_length, style) - for start, end, style in text._spans - ) - self._length += len(text) - return self - - def append_text(self, text: "Text") -> "Text": - """Append another Text instance. This method is more performant that Text.append, but - only works for Text. - - Returns: - Text: Returns self for chaining. - """ - _Span = Span - text_length = self._length - if text.style is not None: - self._spans.append(_Span(text_length, text_length + len(text), text.style)) - self._text.append(text.plain) - self._spans.extend( - _Span(start + text_length, end + text_length, style) - for start, end, style in text._spans - ) - self._length += len(text) - return self - - def append_tokens( - self, tokens: Iterable[Tuple[str, Optional[StyleType]]] - ) -> "Text": - """Append iterable of str and style. Style may be a Style instance or a str style definition. - - Args: - pairs (Iterable[Tuple[str, Optional[StyleType]]]): An iterable of tuples containing str content and style. - - Returns: - Text: Returns self for chaining. 
- """ - append_text = self._text.append - append_span = self._spans.append - _Span = Span - offset = len(self) - for content, style in tokens: - append_text(content) - if style is not None: - append_span(_Span(offset, offset + len(content), style)) - offset += len(content) - self._length = offset - return self - - def copy_styles(self, text: "Text") -> None: - """Copy styles from another Text instance. - - Args: - text (Text): A Text instance to copy styles from, must be the same length. - """ - self._spans.extend(text._spans) - - def split( - self, - separator: str = "\n", - *, - include_separator: bool = False, - allow_blank: bool = False, - ) -> Lines: - """Split rich text in to lines, preserving styles. - - Args: - separator (str, optional): String to split on. Defaults to "\\\\n". - include_separator (bool, optional): Include the separator in the lines. Defaults to False. - allow_blank (bool, optional): Return a blank line if the text ends with a separator. Defaults to False. - - Returns: - List[RichText]: A list of rich text, one per line of the original. - """ - assert separator, "separator must not be empty" - - text = self.plain - if separator not in text: - return Lines([self.copy()]) - - if include_separator: - lines = self.divide( - match.end() for match in re.finditer(re.escape(separator), text) - ) - else: - - def flatten_spans() -> Iterable[int]: - for match in re.finditer(re.escape(separator), text): - start, end = match.span() - yield start - yield end - - lines = Lines( - line for line in self.divide(flatten_spans()) if line.plain != separator - ) - - if not allow_blank and text.endswith(separator): - lines.pop() - - return lines - - def divide(self, offsets: Iterable[int]) -> Lines: - """Divide text in to a number of lines at given offsets. - - Args: - offsets (Iterable[int]): Offsets used to divide text. - - Returns: - Lines: New RichText instances between offsets. 
- """ - _offsets = list(offsets) - - if not _offsets: - return Lines([self.copy()]) - - text = self.plain - text_length = len(text) - divide_offsets = [0, *_offsets, text_length] - line_ranges = list(zip(divide_offsets, divide_offsets[1:])) - - style = self.style - justify = self.justify - overflow = self.overflow - _Text = Text - new_lines = Lines( - _Text( - text[start:end], - style=style, - justify=justify, - overflow=overflow, - ) - for start, end in line_ranges - ) - if not self._spans: - return new_lines - - _line_appends = [line._spans.append for line in new_lines._lines] - line_count = len(line_ranges) - _Span = Span - - for span_start, span_end, style in self._spans: - - lower_bound = 0 - upper_bound = line_count - start_line_no = (lower_bound + upper_bound) // 2 - - while True: - line_start, line_end = line_ranges[start_line_no] - if span_start < line_start: - upper_bound = start_line_no - 1 - elif span_start > line_end: - lower_bound = start_line_no + 1 - else: - break - start_line_no = (lower_bound + upper_bound) // 2 - - if span_end < line_end: - end_line_no = start_line_no - else: - end_line_no = lower_bound = start_line_no - upper_bound = line_count - - while True: - line_start, line_end = line_ranges[end_line_no] - if span_end < line_start: - upper_bound = end_line_no - 1 - elif span_end > line_end: - lower_bound = end_line_no + 1 - else: - break - end_line_no = (lower_bound + upper_bound) // 2 - - for line_no in range(start_line_no, end_line_no + 1): - line_start, line_end = line_ranges[line_no] - new_start = max(0, span_start - line_start) - new_end = min(span_end - line_start, line_end - line_start) - if new_end > new_start: - _line_appends[line_no](_Span(new_start, new_end, style)) - - return new_lines - - def right_crop(self, amount: int = 1) -> None: - """Remove a number of characters from the end of the text.""" - max_offset = len(self.plain) - amount - _Span = Span - self._spans[:] = [ - ( - span - if span.end < max_offset - else _Span(span.start, min(max_offset, span.end), span.style) - ) - for span in self._spans - if span.start < max_offset - ] - self._text = [self.plain[:-amount]] - self._length -= amount - - def wrap( - self, - console: "Console", - width: int, - *, - justify: Optional["JustifyMethod"] = None, - overflow: Optional["OverflowMethod"] = None, - tab_size: int = 8, - no_wrap: Optional[bool] = None, - ) -> Lines: - """Word wrap the text. - - Args: - console (Console): Console instance. - width (int): Number of characters per line. - emoji (bool, optional): Also render emoji code. Defaults to True. - justify (str, optional): Justify method: "default", "left", "center", "full", "right". Defaults to "default". - overflow (str, optional): Overflow method: "crop", "fold", or "ellipsis". Defaults to None. - tab_size (int, optional): Default tab size. Defaults to 8. - no_wrap (bool, optional): Disable wrapping, Defaults to False. - - Returns: - Lines: Number of lines. 
- """ - wrap_justify = justify or self.justify or DEFAULT_JUSTIFY - wrap_overflow = overflow or self.overflow or DEFAULT_OVERFLOW - - no_wrap = pick_bool(no_wrap, self.no_wrap, False) or overflow == "ignore" - - lines = Lines() - for line in self.split(allow_blank=True): - if "\t" in line: - line.expand_tabs(tab_size) - if no_wrap: - new_lines = Lines([line]) - else: - offsets = divide_line(str(line), width, fold=wrap_overflow == "fold") - new_lines = line.divide(offsets) - for line in new_lines: - line.rstrip_end(width) - if wrap_justify: - new_lines.justify( - console, width, justify=wrap_justify, overflow=wrap_overflow - ) - for line in new_lines: - line.truncate(width, overflow=wrap_overflow) - lines.extend(new_lines) - return lines - - def fit(self, width: int) -> Lines: - """Fit the text in to given width by chopping in to lines. - - Args: - width (int): Maximum characters in a line. - - Returns: - Lines: List of lines. - """ - lines: Lines = Lines() - append = lines.append - for line in self.split(): - line.set_length(width) - append(line) - return lines - - def detect_indentation(self) -> int: - """Auto-detect indentation of code. - - Returns: - int: Number of spaces used to indent code. - """ - - _indentations = { - len(match.group(1)) - for match in re.finditer(r"^( *)(.*)$", self.plain, flags=re.MULTILINE) - } - - try: - indentation = ( - reduce(gcd, [indent for indent in _indentations if not indent % 2]) or 1 - ) - except TypeError: - indentation = 1 - - return indentation - - def with_indent_guides( - self, - indent_size: Optional[int] = None, - *, - character: str = "│", - style: StyleType = "dim green", - ) -> "Text": - """Adds indent guide lines to text. - - Args: - indent_size (Optional[int]): Size of indentation, or None to auto detect. Defaults to None. - character (str, optional): Character to use for indentation. Defaults to "│". - style (Union[Style, str], optional): Style of indent guides. - - Returns: - Text: New text with indentation guides. - """ - - _indent_size = self.detect_indentation() if indent_size is None else indent_size - - text = self.copy() - text.expand_tabs() - indent_line = f"{character}{' ' * (_indent_size - 1)}" - - re_indent = re.compile(r"^( *)(.*)$") - new_lines: List[Text] = [] - add_line = new_lines.append - blank_lines = 0 - for line in text.split(allow_blank=True): - match = re_indent.match(line.plain) - if not match or not match.group(2): - blank_lines += 1 - continue - indent = match.group(1) - full_indents, remaining_space = divmod(len(indent), _indent_size) - new_indent = f"{indent_line * full_indents}{' ' * remaining_space}" - line.plain = new_indent + line.plain[len(new_indent) :] - line.stylize(style, 0, len(new_indent)) - if blank_lines: - new_lines.extend([Text(new_indent, style=style)] * blank_lines) - blank_lines = 0 - add_line(line) - if blank_lines: - new_lines.extend([Text("", style=style)] * blank_lines) - - new_text = text.blank_copy("\n").join(new_lines) - return new_text - - -if __name__ == "__main__": # pragma: no cover - from pip._vendor.rich.console import Console - - text = Text( - """\nLorem ipsum dolor sit amet, consectetur adipiscing elit, sed do eiusmod tempor incididunt ut labore et dolore magna aliqua. Ut enim ad minim veniam, quis nostrud exercitation ullamco laboris nisi ut aliquip ex ea commodo consequat. Duis aute irure dolor in reprehenderit in voluptate velit esse cillum dolore eu fugiat nulla pariatur. 
Excepteur sint occaecat cupidatat non proident, sunt in culpa qui officia deserunt mollit anim id est laborum.\n""" - ) - text.highlight_words(["Lorem"], "bold") - text.highlight_words(["ipsum"], "italic") - - console = Console() - - console.rule("justify='left'") - console.print(text, style="red") - console.print() - - console.rule("justify='center'") - console.print(text, style="green", justify="center") - console.print() - - console.rule("justify='right'") - console.print(text, style="blue", justify="right") - console.print() - - console.rule("justify='full'") - console.print(text, style="magenta", justify="full") - console.print() diff --git a/spaces/profayle/TerrapinTalk/myenv/lib/python3.9/site-packages/pip/_vendor/urllib3/util/request.py b/spaces/profayle/TerrapinTalk/myenv/lib/python3.9/site-packages/pip/_vendor/urllib3/util/request.py deleted file mode 100644 index 25103383ec7abc7b46fb6a6f549efa38e4abe24c..0000000000000000000000000000000000000000 --- a/spaces/profayle/TerrapinTalk/myenv/lib/python3.9/site-packages/pip/_vendor/urllib3/util/request.py +++ /dev/null @@ -1,143 +0,0 @@ -from __future__ import absolute_import - -from base64 import b64encode - -from ..exceptions import UnrewindableBodyError -from ..packages.six import b, integer_types - -# Pass as a value within ``headers`` to skip -# emitting some HTTP headers that are added automatically. -# The only headers that are supported are ``Accept-Encoding``, -# ``Host``, and ``User-Agent``. -SKIP_HEADER = "@@@SKIP_HEADER@@@" -SKIPPABLE_HEADERS = frozenset(["accept-encoding", "host", "user-agent"]) - -ACCEPT_ENCODING = "gzip,deflate" -try: - import brotli as _unused_module_brotli # noqa: F401 -except ImportError: - pass -else: - ACCEPT_ENCODING += ",br" - -_FAILEDTELL = object() - - -def make_headers( - keep_alive=None, - accept_encoding=None, - user_agent=None, - basic_auth=None, - proxy_basic_auth=None, - disable_cache=None, -): - """ - Shortcuts for generating request headers. - - :param keep_alive: - If ``True``, adds 'connection: keep-alive' header. - - :param accept_encoding: - Can be a boolean, list, or string. - ``True`` translates to 'gzip,deflate'. - List will get joined by comma. - String will be used as provided. - - :param user_agent: - String representing the user-agent you want, such as - "python-urllib3/0.6" - - :param basic_auth: - Colon-separated username:password string for 'authorization: basic ...' - auth header. - - :param proxy_basic_auth: - Colon-separated username:password string for 'proxy-authorization: basic ...' - auth header. - - :param disable_cache: - If ``True``, adds 'cache-control: no-cache' header. 
- - Example:: - - >>> make_headers(keep_alive=True, user_agent="Batman/1.0") - {'connection': 'keep-alive', 'user-agent': 'Batman/1.0'} - >>> make_headers(accept_encoding=True) - {'accept-encoding': 'gzip,deflate'} - """ - headers = {} - if accept_encoding: - if isinstance(accept_encoding, str): - pass - elif isinstance(accept_encoding, list): - accept_encoding = ",".join(accept_encoding) - else: - accept_encoding = ACCEPT_ENCODING - headers["accept-encoding"] = accept_encoding - - if user_agent: - headers["user-agent"] = user_agent - - if keep_alive: - headers["connection"] = "keep-alive" - - if basic_auth: - headers["authorization"] = "Basic " + b64encode(b(basic_auth)).decode("utf-8") - - if proxy_basic_auth: - headers["proxy-authorization"] = "Basic " + b64encode( - b(proxy_basic_auth) - ).decode("utf-8") - - if disable_cache: - headers["cache-control"] = "no-cache" - - return headers - - -def set_file_position(body, pos): - """ - If a position is provided, move file to that point. - Otherwise, we'll attempt to record a position for future use. - """ - if pos is not None: - rewind_body(body, pos) - elif getattr(body, "tell", None) is not None: - try: - pos = body.tell() - except (IOError, OSError): - # This differentiates from None, allowing us to catch - # a failed `tell()` later when trying to rewind the body. - pos = _FAILEDTELL - - return pos - - -def rewind_body(body, body_pos): - """ - Attempt to rewind body to a certain position. - Primarily used for request redirects and retries. - - :param body: - File-like object that supports seek. - - :param int pos: - Position to seek to in file. - """ - body_seek = getattr(body, "seek", None) - if body_seek is not None and isinstance(body_pos, integer_types): - try: - body_seek(body_pos) - except (IOError, OSError): - raise UnrewindableBodyError( - "An error occurred when rewinding request body for redirect/retry." - ) - elif body_pos is _FAILEDTELL: - raise UnrewindableBodyError( - "Unable to record file position for rewinding " - "request body during a redirect/retry." - ) - else: - raise ValueError( - "body_pos must be of type integer, instead it was %s." 
% type(body_pos) - ) diff --git a/spaces/profayle/TerrapinTalk/myenv/lib/python3.9/site-packages/pyparsing/exceptions.py b/spaces/profayle/TerrapinTalk/myenv/lib/python3.9/site-packages/pyparsing/exceptions.py deleted file mode 100644 index 12219f124aeca6d3d7edd2621071f100c7ecd90a..0000000000000000000000000000000000000000 --- a/spaces/profayle/TerrapinTalk/myenv/lib/python3.9/site-packages/pyparsing/exceptions.py +++ /dev/null @@ -1,299 +0,0 @@ -# exceptions.py - -import re -import sys -import typing - -from .util import ( - col, - line, - lineno, - _collapse_string_to_ranges, - replaced_by_pep8, -) -from .unicode import pyparsing_unicode as ppu - - -class ExceptionWordUnicode(ppu.Latin1, ppu.LatinA, ppu.LatinB, ppu.Greek, ppu.Cyrillic): - pass - - -_extract_alphanums = _collapse_string_to_ranges(ExceptionWordUnicode.alphanums) -_exception_word_extractor = re.compile("([" + _extract_alphanums + "]{1,16})|.") - - -class ParseBaseException(Exception): - """base exception class for all parsing runtime exceptions""" - - loc: int - msg: str - pstr: str - parser_element: typing.Any # "ParserElement" - args: typing.Tuple[str, int, typing.Optional[str]] - - __slots__ = ( - "loc", - "msg", - "pstr", - "parser_element", - "args", - ) - - # Performance tuning: we construct a *lot* of these, so keep this - # constructor as small and fast as possible - def __init__( - self, - pstr: str, - loc: int = 0, - msg: typing.Optional[str] = None, - elem=None, - ): - self.loc = loc - if msg is None: - self.msg = pstr - self.pstr = "" - else: - self.msg = msg - self.pstr = pstr - self.parser_element = elem - self.args = (pstr, loc, msg) - - @staticmethod - def explain_exception(exc, depth=16): - """ - Method to take an exception and translate the Python internal traceback into a list - of the pyparsing expressions that caused the exception to be raised. - - Parameters: - - - exc - exception raised during parsing (need not be a ParseException, in support - of Python exceptions that might be raised in a parse action) - - depth (default=16) - number of levels back in the stack trace to list expression - and function names; if None, the full stack trace names will be listed; if 0, only - the failing input line, marker, and exception string will be shown - - Returns a multi-line string listing the ParserElements and/or function names in the - exception's stack trace. 
- """ - import inspect - from .core import ParserElement - - if depth is None: - depth = sys.getrecursionlimit() - ret = [] - if isinstance(exc, ParseBaseException): - ret.append(exc.line) - ret.append(" " * (exc.column - 1) + "^") - ret.append(f"{type(exc).__name__}: {exc}") - - if depth > 0: - callers = inspect.getinnerframes(exc.__traceback__, context=depth) - seen = set() - for i, ff in enumerate(callers[-depth:]): - frm = ff[0] - - f_self = frm.f_locals.get("self", None) - if isinstance(f_self, ParserElement): - if not frm.f_code.co_name.startswith( - ("parseImpl", "_parseNoCache") - ): - continue - if id(f_self) in seen: - continue - seen.add(id(f_self)) - - self_type = type(f_self) - ret.append( - f"{self_type.__module__}.{self_type.__name__} - {f_self}" - ) - - elif f_self is not None: - self_type = type(f_self) - ret.append(f"{self_type.__module__}.{self_type.__name__}") - - else: - code = frm.f_code - if code.co_name in ("wrapper", ""): - continue - - ret.append(code.co_name) - - depth -= 1 - if not depth: - break - - return "\n".join(ret) - - @classmethod - def _from_exception(cls, pe): - """ - internal factory method to simplify creating one type of ParseException - from another - avoids having __init__ signature conflicts among subclasses - """ - return cls(pe.pstr, pe.loc, pe.msg, pe.parser_element) - - @property - def line(self) -> str: - """ - Return the line of text where the exception occurred. - """ - return line(self.loc, self.pstr) - - @property - def lineno(self) -> int: - """ - Return the 1-based line number of text where the exception occurred. - """ - return lineno(self.loc, self.pstr) - - @property - def col(self) -> int: - """ - Return the 1-based column on the line of text where the exception occurred. - """ - return col(self.loc, self.pstr) - - @property - def column(self) -> int: - """ - Return the 1-based column on the line of text where the exception occurred. - """ - return col(self.loc, self.pstr) - - # pre-PEP8 compatibility - @property - def parserElement(self): - return self.parser_element - - @parserElement.setter - def parserElement(self, elem): - self.parser_element = elem - - def __str__(self) -> str: - if self.pstr: - if self.loc >= len(self.pstr): - foundstr = ", found end of text" - else: - # pull out next word at error location - found_match = _exception_word_extractor.match(self.pstr, self.loc) - if found_match is not None: - found = found_match.group(0) - else: - found = self.pstr[self.loc : self.loc + 1] - foundstr = (", found %r" % found).replace(r"\\", "\\") - else: - foundstr = "" - return f"{self.msg}{foundstr} (at char {self.loc}), (line:{self.lineno}, col:{self.column})" - - def __repr__(self): - return str(self) - - def mark_input_line( - self, marker_string: typing.Optional[str] = None, *, markerString: str = ">!<" - ) -> str: - """ - Extracts the exception line from the input string, and marks - the location of the exception with a special symbol. - """ - markerString = marker_string if marker_string is not None else markerString - line_str = self.line - line_column = self.column - 1 - if markerString: - line_str = "".join( - (line_str[:line_column], markerString, line_str[line_column:]) - ) - return line_str.strip() - - def explain(self, depth=16) -> str: - """ - Method to translate the Python internal traceback into a list - of the pyparsing expressions that caused the exception to be raised. 
- - Parameters: - - - depth (default=16) - number of levels back in the stack trace to list expression - and function names; if None, the full stack trace names will be listed; if 0, only - the failing input line, marker, and exception string will be shown - - Returns a multi-line string listing the ParserElements and/or function names in the - exception's stack trace. - - Example:: - - expr = pp.Word(pp.nums) * 3 - try: - expr.parse_string("123 456 A789") - except pp.ParseException as pe: - print(pe.explain(depth=0)) - - prints:: - - 123 456 A789 - ^ - ParseException: Expected W:(0-9), found 'A' (at char 8), (line:1, col:9) - - Note: the diagnostic output will include string representations of the expressions - that failed to parse. These representations will be more helpful if you use `set_name` to - give identifiable names to your expressions. Otherwise they will use the default string - forms, which may be cryptic to read. - - Note: pyparsing's default truncation of exception tracebacks may also truncate the - stack of expressions that are displayed in the ``explain`` output. To get the full listing - of parser expressions, you may have to set ``ParserElement.verbose_stacktrace = True`` - """ - return self.explain_exception(self, depth) - - # fmt: off - @replaced_by_pep8(mark_input_line) - def markInputline(self): ... - # fmt: on - - -class ParseException(ParseBaseException): - """ - Exception thrown when a parse expression doesn't match the input string - - Example:: - - try: - Word(nums).set_name("integer").parse_string("ABC") - except ParseException as pe: - print(pe) - print("column: {}".format(pe.column)) - - prints:: - - Expected integer (at char 0), (line:1, col:1) - column: 1 - - """ - - -class ParseFatalException(ParseBaseException): - """ - User-throwable exception thrown when inconsistent parse content - is found; stops all parsing immediately - """ - - -class ParseSyntaxException(ParseFatalException): - """ - Just like :class:`ParseFatalException`, but thrown internally - when an :class:`ErrorStop` ('-' operator) indicates - that parsing is to stop immediately because an unbacktrackable - syntax error has been found. 
- """ - - -class RecursiveGrammarException(Exception): - """ - Exception thrown by :class:`ParserElement.validate` if the - grammar could be left-recursive; parser may need to enable - left recursion using :class:`ParserElement.enable_left_recursion` - """ - - def __init__(self, parseElementList): - self.parseElementTrace = parseElementList - - def __str__(self) -> str: - return f"RecursiveGrammarException: {self.parseElementTrace}" diff --git a/spaces/pysunny/gradio-pysunny/README.md b/spaces/pysunny/gradio-pysunny/README.md deleted file mode 100644 index 1bb9a9a609058140694c24146fa61c67c5a19c53..0000000000000000000000000000000000000000 --- a/spaces/pysunny/gradio-pysunny/README.md +++ /dev/null @@ -1,12 +0,0 @@ ---- -title: Gradio Pysunny -emoji: 🐢 -colorFrom: blue -colorTo: purple -sdk: gradio -sdk_version: 3.19.1 -app_file: app.py -pinned: false ---- - -Check out the configuration reference at https://huggingface.co/docs/hub/spaces-config-reference diff --git a/spaces/qingxu98/gpt-academic/crazy_functions/test_project/latex/attention/model_architecture.tex b/spaces/qingxu98/gpt-academic/crazy_functions/test_project/latex/attention/model_architecture.tex deleted file mode 100644 index c82be6242cc9d26203360e90d3ac9184ef6ad842..0000000000000000000000000000000000000000 --- a/spaces/qingxu98/gpt-academic/crazy_functions/test_project/latex/attention/model_architecture.tex +++ /dev/null @@ -1,155 +0,0 @@ - -\begin{figure} - \centering - \includegraphics[scale=0.6]{Figures/ModalNet-21} - \caption{The Transformer - model architecture.} - \label{fig:model-arch} -\end{figure} - -% Although the primary workhorse of our model is attention, -%Our model maintains the encoder-decoder structure that is common to many so-called sequence-to-sequence models \citep{bahdanau2014neural,sutskever14}. As in all such architectures, the encoder computes a representation of the input sequence, and the decoder consumes these representations along with the output tokens to autoregressively produce the output sequence. Where, traditionally, the encoder and decoder contain stacks of recurrent or convolutional layers, our encoder and decoder stacks are composed of attention layers and position-wise feed-forward layers (Figure~\ref{fig:model-arch}). The following sections describe the gross architecture and these particular components in detail. - -Most competitive neural sequence transduction models have an encoder-decoder structure \citep{cho2014learning,bahdanau2014neural,sutskever14}. Here, the encoder maps an input sequence of symbol representations $(x_1, ..., x_n)$ to a sequence of continuous representations $\mathbf{z} = (z_1, ..., z_n)$. Given $\mathbf{z}$, the decoder then generates an output sequence $(y_1,...,y_m)$ of symbols one element at a time. At each step the model is auto-regressive \citep{graves2013generating}, consuming the previously generated symbols as additional input when generating the next. - -The Transformer follows this overall architecture using stacked self-attention and point-wise, fully connected layers for both the encoder and decoder, shown in the left and right halves of Figure~\ref{fig:model-arch}, respectively. - -\subsection{Encoder and Decoder Stacks} - -\paragraph{Encoder:}The encoder is composed of a stack of $N=6$ identical layers. Each layer has two sub-layers. The first is a multi-head self-attention mechanism, and the second is a simple, position-wise fully connected feed-forward network. 
We employ a residual connection \citep{he2016deep} around each of the two sub-layers, followed by layer normalization \cite{layernorm2016}. That is, the output of each sub-layer is $\mathrm{LayerNorm}(x + \mathrm{Sublayer}(x))$, where $\mathrm{Sublayer}(x)$ is the function implemented by the sub-layer itself. To facilitate these residual connections, all sub-layers in the model, as well as the embedding layers, produce outputs of dimension $\dmodel=512$. - -\paragraph{Decoder:}The decoder is also composed of a stack of $N=6$ identical layers. In addition to the two sub-layers in each encoder layer, the decoder inserts a third sub-layer, which performs multi-head attention over the output of the encoder stack. Similar to the encoder, we employ residual connections around each of the sub-layers, followed by layer normalization. We also modify the self-attention sub-layer in the decoder stack to prevent positions from attending to subsequent positions. This masking, combined with fact that the output embeddings are offset by one position, ensures that the predictions for position $i$ can depend only on the known outputs at positions less than $i$. - -% In our model (Figure~\ref{fig:model-arch}), the encoder and decoder are composed of stacks of alternating self-attention layers (for cross-positional communication) and position-wise feed-forward layers (for in-place computation). In addition, the decoder stack contains encoder-decoder attention layers. Since attention is agnostic to the distances between words, our model requires a "positional encoding" to be added to the encoder and decoder input. The following sections describe all of these components in detail. - -\subsection{Attention} \label{sec:attention} -An attention function can be described as mapping a query and a set of key-value pairs to an output, where the query, keys, values, and output are all vectors. The output is computed as a weighted sum of the values, where the weight assigned to each value is computed by a compatibility function of the query with the corresponding key. - -\subsubsection{Scaled Dot-Product Attention} \label{sec:scaled-dot-prod} - -% \begin{figure} -% \centering -% \includegraphics[scale=0.6]{Figures/ModalNet-19} -% \caption{Scaled Dot-Product Attention.} -% \label{fig:multi-head-att} -% \end{figure} - -We call our particular attention "Scaled Dot-Product Attention" (Figure~\ref{fig:multi-head-att}). The input consists of queries and keys of dimension $d_k$, and values of dimension $d_v$. We compute the dot products of the query with all keys, divide each by $\sqrt{d_k}$, and apply a softmax function to obtain the weights on the values. - -In practice, we compute the attention function on a set of queries simultaneously, packed together into a matrix $Q$. The keys and values are also packed together into matrices $K$ and $V$. We compute the matrix of outputs as: - -\begin{equation} - \mathrm{Attention}(Q, K, V) = \mathrm{softmax}(\frac{QK^T}{\sqrt{d_k}})V -\end{equation} - -The two most commonly used attention functions are additive attention \citep{bahdanau2014neural}, and dot-product (multiplicative) attention. Dot-product attention is identical to our algorithm, except for the scaling factor of $\frac{1}{\sqrt{d_k}}$. Additive attention computes the compatibility function using a feed-forward network with a single hidden layer. 
While the two are similar in theoretical complexity, dot-product attention is much faster and more space-efficient in practice, since it can be implemented using highly optimized matrix multiplication code. - -%We scale the dot products by $1/\sqrt{d_k}$ to limit the magnitude of the dot products, which works well in practice. Otherwise, we found applying the softmax to often result in weights very close to 0 or 1, and hence minuscule gradients. - -% Already described in the subsequent section -%When used as part of decoder self-attention, an optional mask function is applied just before the softmax to prevent positions from attending to subsequent positions. This mask simply sets the logits corresponding to all illegal connections (those outside of the lower triangle) to $-\infty$. - -%\paragraph{Comparison to Additive Attention: } We choose dot product attention over additive attention \citep{bahdanau2014neural} since it can be computed using highly optimized matrix multiplication code. This optimization is particularly important to us, as we employ many attention layers in our model. - -While for small values of $d_k$ the two mechanisms perform similarly, additive attention outperforms dot product attention without scaling for larger values of $d_k$ \citep{DBLP:journals/corr/BritzGLL17}. We suspect that for large values of $d_k$, the dot products grow large in magnitude, pushing the softmax function into regions where it has extremely small gradients \footnote{To illustrate why the dot products get large, assume that the components of $q$ and $k$ are independent random variables with mean $0$ and variance $1$. Then their dot product, $q \cdot k = \sum_{i=1}^{d_k} q_ik_i$, has mean $0$ and variance $d_k$.}. To counteract this effect, we scale the dot products by $\frac{1}{\sqrt{d_k}}$. - - -%We suspect this to be caused by the dot products growing too large in magnitude to result in useful gradients after applying the softmax function. To counteract this, we scale the dot product by $1/\sqrt{d_k}$. - - -\subsubsection{Multi-Head Attention} \label{sec:multihead} - -\begin{figure} -\begin{minipage}[t]{0.5\textwidth} - \centering - Scaled Dot-Product Attention \\ - \vspace{0.5cm} - \includegraphics[scale=0.6]{Figures/ModalNet-19} -\end{minipage} -\begin{minipage}[t]{0.5\textwidth} - \centering - Multi-Head Attention \\ - \vspace{0.1cm} - \includegraphics[scale=0.6]{Figures/ModalNet-20} -\end{minipage} - - - % \centering - - \caption{(left) Scaled Dot-Product Attention. (right) Multi-Head Attention consists of several attention layers running in parallel.} - \label{fig:multi-head-att} -\end{figure} - -Instead of performing a single attention function with $\dmodel$-dimensional keys, values and queries, we found it beneficial to linearly project the queries, keys and values $h$ times with different, learned linear projections to $d_k$, $d_k$ and $d_v$ dimensions, respectively. -On each of these projected versions of queries, keys and values we then perform the attention function in parallel, yielding $d_v$-dimensional output values. These are concatenated and once again projected, resulting in the final values, as depicted in Figure~\ref{fig:multi-head-att}. - -Multi-head attention allows the model to jointly attend to information from different representation subspaces at different positions. With a single attention head, averaging inhibits this. 
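The equations in the next block give the formal definition. As an informal companion, here is a minimal NumPy sketch of scaled dot-product attention and a naive multi-head wrapper; the input shapes, the 0.02-scaled random projections, and the head-split layout are illustrative assumptions, not the paper's reference implementation:

```python
import numpy as np

def scaled_dot_product_attention(Q, K, V):
    """Attention(Q, K, V) = softmax(Q K^T / sqrt(d_k)) V, softmax over the key axis."""
    d_k = Q.shape[-1]
    scores = Q @ K.swapaxes(-2, -1) / np.sqrt(d_k)   # (..., n_q, n_k)
    scores -= scores.max(axis=-1, keepdims=True)     # subtract max for numerical stability
    weights = np.exp(scores)
    weights /= weights.sum(axis=-1, keepdims=True)   # softmax weights on the values
    return weights @ V                               # (..., n_q, d_v)

def multi_head_attention(x, h=8, d_model=512, seed=0):
    """Naive self-attention over one sequence, with h heads of size d_k = d_v = d_model / h."""
    rng = np.random.default_rng(seed)
    d_k = d_model // h  # 64 for the paper's h=8, d_model=512
    # One (d_model, d_model) matrix per projection is equivalent to stacking
    # the paper's h per-head (d_model, d_k) matrices side by side.
    W_q, W_k, W_v, W_o = (0.02 * rng.standard_normal((d_model, d_model)) for _ in range(4))
    n = x.shape[0]
    def split_heads(W):
        # Project, then view the result as h heads of width d_k: (h, n, d_k).
        return (x @ W).reshape(n, h, d_k).transpose(1, 0, 2)
    heads = scaled_dot_product_attention(split_heads(W_q), split_heads(W_k), split_heads(W_v))
    concat = heads.transpose(1, 0, 2).reshape(n, d_model)  # Concat(head_1, ..., head_h)
    return concat @ W_o                                    # final output projection W^O

# Example: 10 positions of a 512-dimensional sequence attend to each other.
assert multi_head_attention(np.zeros((10, 512))).shape == (10, 512)
```
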
- -\begin{align*} - \mathrm{MultiHead}(Q, K, V) &= \mathrm{Concat}(\mathrm{head_1}, ..., \mathrm{head_h})W^O\\ -% \mathrm{where} \mathrm{head_i} &= \mathrm{Attention}(QW_Q_i^{\dmodel \times d_q}, KW_K_i^{\dmodel \times d_k}, VW^V_i^{\dmodel \times d_v})\\ - \text{where}~\mathrm{head_i} &= \mathrm{Attention}(QW^Q_i, KW^K_i, VW^V_i)\\ -\end{align*} - -Where the projections are parameter matrices $W^Q_i \in \mathbb{R}^{\dmodel \times d_k}$, $W^K_i \in \mathbb{R}^{\dmodel \times d_k}$, $W^V_i \in \mathbb{R}^{\dmodel \times d_v}$ and $W^O \in \mathbb{R}^{hd_v \times \dmodel}$. - - -%find it better (and no more expensive) to have multiple parallel attention layers (each over the full set of positions) with proportionally lower-dimensional keys, values and queries. We call this "Multi-Head Attention" (Figure~\ref{fig:multi-head-att}). The keys, values, and queries for each of these parallel attention layers are computed by learned linear transformations of the inputs to the multi-head attention. We use different linear transformations across different parallel attention layers. The output of the parallel attention layers are concatenated, and then passed through a final learned linear transformation. - -In this work we employ $h=8$ parallel attention layers, or heads. For each of these we use $d_k=d_v=\dmodel/h=64$. -Due to the reduced dimension of each head, the total computational cost is similar to that of single-head attention with full dimensionality. - -\subsubsection{Applications of Attention in our Model} - -The Transformer uses multi-head attention in three different ways: -\begin{itemize} - \item In "encoder-decoder attention" layers, the queries come from the previous decoder layer, and the memory keys and values come from the output of the encoder. This allows every position in the decoder to attend over all positions in the input sequence. This mimics the typical encoder-decoder attention mechanisms in sequence-to-sequence models such as \citep{wu2016google, bahdanau2014neural,JonasFaceNet2017}. - - \item The encoder contains self-attention layers. In a self-attention layer all of the keys, values and queries come from the same place, in this case, the output of the previous layer in the encoder. Each position in the encoder can attend to all positions in the previous layer of the encoder. - - \item Similarly, self-attention layers in the decoder allow each position in the decoder to attend to all positions in the decoder up to and including that position. We need to prevent leftward information flow in the decoder to preserve the auto-regressive property. We implement this inside of scaled dot-product attention by masking out (setting to $-\infty$) all values in the input of the softmax which correspond to illegal connections. See Figure~\ref{fig:multi-head-att}. - -\end{itemize} - -\subsection{Position-wise Feed-Forward Networks}\label{sec:ffn} - -In addition to attention sub-layers, each of the layers in our encoder and decoder contains a fully connected feed-forward network, which is applied to each position separately and identically. This consists of two linear transformations with a ReLU activation in between. - -\begin{equation} - \mathrm{FFN}(x)=\max(0, xW_1 + b_1) W_2 + b_2 -\end{equation} - -While the linear transformations are the same across different positions, they use different parameters from layer to layer. Another way of describing this is as two convolutions with kernel size 1. 
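A minimal sketch of this position-wise feed-forward computation (the layer sizes match the dimensions stated just below; the random initialization is an assumption made for illustration):

```python
import numpy as np

def position_wise_ffn(x, d_ff=2048, seed=0):
    """FFN(x) = max(0, x W1 + b1) W2 + b2, applied identically at every position."""
    d_model = x.shape[-1]
    rng = np.random.default_rng(seed)
    W1 = 0.02 * rng.standard_normal((d_model, d_ff))
    b1 = np.zeros(d_ff)
    W2 = 0.02 * rng.standard_normal((d_ff, d_model))
    b2 = np.zeros(d_model)
    return np.maximum(0.0, x @ W1 + b1) @ W2 + b2  # ReLU between two linear maps

# The same (W1, b1, W2, b2) are shared across positions; only the layer index changes them.
assert position_wise_ffn(np.zeros((10, 512))).shape == (10, 512)
```
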
The dimensionality of input and output is $\dmodel=512$, and the inner-layer has dimensionality $d_{ff}=2048$. - - - -%In the appendix, we describe how the position-wise feed-forward network can also be seen as a form of attention. - -%from Jakob: The number of operations required for the model to relate signals from two arbitrary input or output positions grows in the distance between positions in input or output, linearly for ConvS2S and logarithmically for ByteNet, making it harder to learn dependencies between these positions \citep{hochreiter2001gradient}. In the transformer this is reduced to a constant number of operations, albeit at the cost of effective resolution caused by averaging attention-weighted positions, an effect we aim to counteract with multi-headed attention. - - -%Figure~\ref{fig:simple-att} presents a simple attention function, $A$, with a single head, that forms the basis of our multi-head attention. $A$ takes a query key vector $\kq$, matrices of memory keys $\km$ and memory values $\vm$ ,and produces a query value vector $\vq$ as -%\begin{equation*} \label{eq:attention} -% A(\kq, \km, \vm) = {\vm}^T (Softmax(\km \kq). -%\end{equation*} -%We linearly transform $\kq,\,\km$, and $\vm$ with learned matrices ${\Wkq \text{,} \, \Wkm}$, and ${\Wvm}$ before calling the attention function, and transform the output query with $\Wvq$ before handing it to the feed forward layer. Each attention layer has it's own set of transformation matrices, which are shared across all query positions. $A$ is applied in parallel for each query position, and is implemented very efficiently as a batch of matrix multiplies. The self-attention and encoder-decoder attention layers use $A$, but with different arguments. For example, in encdoder self-attention, queries in encoder layer $i$ attention to memories in encoder layer $i-1$. To ensure that decoder self-attention layers do not look at future words, we add $- \inf$ to the softmax logits in positions $j+1$ to query length for query position $l$. - -%In simple attention, the query value is a weighted combination of the memory values where the attention weights sum to one. Although this function performs well in practice, the constraint on attention weights can restrict the amount of information that flows from memories to queries because the query cannot focus on multiple memory positions at once, which might be desirable when translating long sequences. \marginpar{@usz, could you think of an example of this ?} We remedy this by maintaining multiple attention heads at each query position that attend to all memory positions in parallel, with a different set of parameters per attention head $h$. -%\marginpar{} - -\subsection{Embeddings and Softmax} -Similarly to other sequence transduction models, we use learned embeddings to convert the input tokens and output tokens to vectors of dimension $\dmodel$. We also use the usual learned linear transformation and softmax function to convert the decoder output to predicted next-token probabilities. In our model, we share the same weight matrix between the two embedding layers and the pre-softmax linear transformation, similar to \citep{press2016using}. In the embedding layers, we multiply those weights by $\sqrt{\dmodel}$. - - -\subsection{Positional Encoding} -Since our model contains no recurrence and no convolution, in order for the model to make use of the order of the sequence, we must inject some information about the relative or absolute position of the tokens in the sequence. 
To this end, we add "positional encodings" to the input embeddings at the bottoms of the encoder and decoder stacks. The positional encodings have the same dimension $\dmodel$ as the embeddings, so that the two can be summed. There are many choices of positional encodings, learned and fixed \citep{JonasFaceNet2017}. - -In this work, we use sine and cosine functions of different frequencies: - -\begin{align*} - PE_{(pos,2i)} = sin(pos / 10000^{2i/\dmodel}) \\ - PE_{(pos,2i+1)} = cos(pos / 10000^{2i/\dmodel}) -\end{align*} - -where $pos$ is the position and $i$ is the dimension. That is, each dimension of the positional encoding corresponds to a sinusoid. The wavelengths form a geometric progression from $2\pi$ to $10000 \cdot 2\pi$. We chose this function because we hypothesized it would allow the model to easily learn to attend by relative positions, since for any fixed offset $k$, $PE_{pos+k}$ can be represented as a linear function of $PE_{pos}$. - -We also experimented with using learned positional embeddings \citep{JonasFaceNet2017} instead, and found that the two versions produced nearly identical results (see Table~\ref{tab:variations} row (E)). We chose the sinusoidal version because it may allow the model to extrapolate to sequence lengths longer than the ones encountered during training. diff --git a/spaces/quidiaMuxgu/Expedit-SAM/ABCD Any Body Can Dance 3 Hindi Song !NEW! Free Download.md b/spaces/quidiaMuxgu/Expedit-SAM/ABCD Any Body Can Dance 3 Hindi Song !NEW! Free Download.md deleted file mode 100644 index 8c7e97389b7439faa3ba234c90b63564d7127d78..0000000000000000000000000000000000000000 --- a/spaces/quidiaMuxgu/Expedit-SAM/ABCD Any Body Can Dance 3 Hindi Song !NEW! Free Download.md +++ /dev/null @@ -1,6 +0,0 @@ -

      ABCD Any Body Can Dance 3 Hindi Song Free Download


      DOWNLOAD ••• https://geags.com/2uCrqv



      - -ABCD - Any Body Can Dance Full Movie Download Utorrent Free Download abcd ... songs download, abcd any body dance movie songs, abcd any body dance ... ABCD 3 (2018) Hindi Movie Title Teaser HD BDMusic25 run . 1fdad05405
      -
      -
      -

      diff --git a/spaces/quidiaMuxgu/Expedit-SAM/Atomic Email Verifier Keygen ((FULL)) 83.md b/spaces/quidiaMuxgu/Expedit-SAM/Atomic Email Verifier Keygen ((FULL)) 83.md deleted file mode 100644 index 248ca60eb9a8b9a3b0e05d2d72920c7b8458225a..0000000000000000000000000000000000000000 --- a/spaces/quidiaMuxgu/Expedit-SAM/Atomic Email Verifier Keygen ((FULL)) 83.md +++ /dev/null @@ -1,6 +0,0 @@ -

      atomic email verifier keygen 83


      DOWNLOADhttps://geags.com/2uCsTm



      - -Development (OCOD), 800-835-4709 or 240-402-8010, or email ... 83. 84. Repackager is defined in section 581(16) of the FD&C Act as “a ... identifier (composed of the NDC and a unique alphanumeric serial number), lot number, and ... agreement with such Commission under section 274 of the Atomic Energy Act of 1954 ... 4d29de3e1b
      -
      -
      -

      diff --git a/spaces/quidiaMuxgu/Expedit-SAM/Heroes Of The Pacific Pc Game Free Download [NEW].md b/spaces/quidiaMuxgu/Expedit-SAM/Heroes Of The Pacific Pc Game Free Download [NEW].md deleted file mode 100644 index 39970ad6144108c4fbf9023cf30e88fd38e9c335..0000000000000000000000000000000000000000 --- a/spaces/quidiaMuxgu/Expedit-SAM/Heroes Of The Pacific Pc Game Free Download [NEW].md +++ /dev/null @@ -1,6 +0,0 @@ -

      heroes of the pacific pc game free download


      DOWNLOADhttps://geags.com/2uCrEf



      - -Jump Force brings some of the most iconic heroes and villains from Weekly ... Dyess AFB Airmen arrive in Indo-Pacific for Bomber Task Force, integrate with ... Jump Force Free Download 2019 Anime PC Game Fitgirl Repack For Mac OS X ... 4d29de3e1b
      -
      -
      -

      diff --git a/spaces/quidiaMuxgu/Expedit-SAM/IDM 6.35 CRACK BUILD 8 With ACTIVATION CODE.md b/spaces/quidiaMuxgu/Expedit-SAM/IDM 6.35 CRACK BUILD 8 With ACTIVATION CODE.md deleted file mode 100644 index 5ae94a14bd2661fac183fdf6bc9f869c02f575f0..0000000000000000000000000000000000000000 --- a/spaces/quidiaMuxgu/Expedit-SAM/IDM 6.35 CRACK BUILD 8 With ACTIVATION CODE.md +++ /dev/null @@ -1,25 +0,0 @@ -
-

      How to Download and Install IDM 6.35 Crack Build 8 with Activation Code

      -

IDM (Internet Download Manager) is a popular program that allows you to download files from the internet faster and more efficiently. However, IDM is not free, and you need to purchase a license to use it. If you don't want to spend money on IDM, you can try using IDM 6.35 Crack Build 8 with Activation Code, which is a modified version of IDM that bypasses the registration process and unlocks all of its features.

      -

      IDM 6.35 CRACK BUILD 8 With ACTIVATION CODE


      Download > https://geags.com/2uCqym



      -

      In this article, we will show you how to download and install IDM 6.35 Crack Build 8 with Activation Code on your Windows PC. Follow the steps below carefully and enjoy using IDM for free.

      -

      Step 1: Download IDM 6.35 Crack Build 8 with Activation Code

      -

      The first step is to download the IDM 6.35 Crack Build 8 with Activation Code file from a reliable source. You can use the link below to download it:

      -Download IDM 6.35 Crack Build 8 with Activation Code -

      Make sure you have a good antivirus program on your PC before downloading any files from the internet. Scan the downloaded file for any viruses or malware before opening it.

      -

      Step 2: Extract the ZIP file

      -

      The next step is to extract the ZIP file that contains the IDM 6.35 Crack Build 8 with Activation Code files. You can use any software that can extract ZIP files, such as WinRAR or 7-Zip. Right-click on the downloaded file and select "Extract Here" or "Extract to idm-6-35-crack-build-8-with-activation-code". You will see a folder named "idm-6-35-crack-build-8-with-activation-code" in the same location as the ZIP file.

      -

      -

      Step 3: Install IDM

      -

      The third step is to install IDM on your PC. Open the folder "idm-6-35-crack-build-8-with-activation-code" and double-click on the file "idman635build8.exe". This will launch the IDM setup wizard. Follow the instructions on the screen and complete the installation process. Do not launch IDM after the installation is finished.

      -

      Step 4: Copy and Paste the Crack Files

      -

      The fourth step is to copy and paste the crack files into the IDM installation folder. Go back to the folder "idm-6-35-crack-build-8-with-activation-code" and open the folder "Crack". You will see two files: "IDMan.exe" and "IDMGrHlp.exe". Copy these two files and go to the IDM installation folder, which is usually located at "C:\Program Files (x86)\Internet Download Manager". Paste the two files in this folder and replace the existing files when prompted.

      -

      Step 5: Run the Activation Code Generator

      -

      The final step is to run the activation code generator and get your activation code for IDM. Go back to the folder "idm-6-35-crack-build-8-with-activation-code" and open the folder "Activation Code Generator". You will see a file named "IDM Activation Code Generator.exe". Double-click on this file and wait for it to generate an activation code for you. Copy this activation code and keep it somewhere safe.

      -

      Step 6: Activate IDM

      -

      Now you are ready to activate IDM and use it for free. Launch IDM from your desktop or start menu and you will see a registration window pop up. Enter your name, email address, and paste the activation code that you copied from the previous step. Click on "OK" and you will see a message saying that your IDM has been registered successfully.

      -

      Congratulations! You have successfully downloaded and installed IDM 6.35 Crack Build 8 with Activation Code on your PC. You can now enjoy downloading files from the internet faster and more efficiently with IDM.

-

      d5da3c52bf
      -
      -
      \ No newline at end of file diff --git a/spaces/quidiaMuxgu/Expedit-SAM/Lord Rings Conquest Full Version Download ((LINK)).md b/spaces/quidiaMuxgu/Expedit-SAM/Lord Rings Conquest Full Version Download ((LINK)).md deleted file mode 100644 index 0cc2abcf324b8ff12a86b92d88608153d45ba6d2..0000000000000000000000000000000000000000 --- a/spaces/quidiaMuxgu/Expedit-SAM/Lord Rings Conquest Full Version Download ((LINK)).md +++ /dev/null @@ -1,26 +0,0 @@ -
      -

      How to Download and Play The Lord of the Rings: Conquest on PC

      -

      If you are a fan of The Lord of the Rings movies and want to experience the epic battles from both sides of the conflict, you might be interested in The Lord of the Rings: Conquest, a 2009 action game developed by Pandemic Studios and published by Electronic Arts. In this game, you can choose to fight for the forces of good or evil, using a variety of characters and weapons from the films. You can also play online with up to 16 players, or offline with up to 4 players in split-screen mode.

      -

      lord rings conquest full version download


      Download ✵✵✵ https://geags.com/2uCrN8



      -

      However, if you don't have a console or a disc to play the game, you might be wondering how to download and play The Lord of the Rings: Conquest on PC. Fortunately, there are some websites that offer the game for free download, as well as some steps you need to follow to install and run it. Here is a guide on how to do it:

      -
        -
      1. Go to https://oldgamesdownload.com/the-lord-of-the-rings-conquest/, a website that provides old games for free download[^1^]. Scroll down and click on the green button that says "Download The Lord of the Rings: Conquest". This will start downloading a zip file that contains the game files.
      2. -
      3. Once the download is complete, extract the zip file using a program like WinRAR or 7-Zip. You will see a folder called "Game Files" that contains an ISO file named "OGD_The_Lord_of_the_Rings_Conquest.iso". This is a disc image of the game that you need to mount on your PC.
      4. -
      5. To mount the ISO file, you can use a program like Daemon Tools or Virtual CloneDrive. Right-click on the ISO file and select "Mount" from the menu. This will create a virtual drive on your PC that acts like a CD-ROM.
      6. -
      7. Open the virtual drive and run "setup.exe" to install the game. Follow the instructions on the screen and choose a destination folder for the game. The installation may take some time depending on your PC's specifications.
      8. -
      9. Once the installation is done, you can play the game by running "lotrconquest.exe" from the destination folder. You may need to adjust some settings like resolution, graphics quality, and sound options before playing.
      10. -
      -

      Congratulations! You have successfully downloaded and installed The Lord of the Rings: Conquest on your PC. Enjoy playing this exciting action game and relive the epic battles from The Lord of the Rings movies.

      - -

      If you want to learn more about The Lord of the Rings: Conquest, here are some additional information and tips:

      -

      -
        -
      • The game has two campaigns: one for the forces of good and one for the forces of evil. The good campaign follows the events of the movies, while the evil campaign imagines what would happen if Sauron had recovered the One Ring and conquered Middle-earth. You can play each campaign in any order, and switch between different characters at any time.
      • -
      • The game features four classes of characters: Warrior, Archer, Mage, and Scout. Each class has its own abilities and weapons, and can be customized with different skins and upgrades. You can also play as some of the heroes and villains from the movies, such as Aragorn, Gandalf, Legolas, Gimli, Saruman, Witch-king, Balrog, and more.
      • -
      • The game has several modes of gameplay, including instant action, conquest, capture the ring, hero deathmatch, and hero team deathmatch. You can play these modes online with up to 16 players, or offline with up to 4 players in split-screen. You can also play a co-op campaign with another player online or offline.
      • -
      • The game has a variety of maps and locations from the movies, such as Helm's Deep, Minas Tirith, Moria, Pelennor Fields, Weathertop, Isengard, Mount Doom, and more. Each map has its own objectives and challenges, as well as environmental hazards and creatures like wargs, ents, oliphaunts, cave-trolls, etc.
      • -
      • The game has a lot of references and Easter eggs from the movies and the books by J.R.R. Tolkien. For example, you can find Tom Bombadil's house in the Old Forest map, or hear Gollum's voice in the Mines of Moria map. You can also unlock some bonus content by completing certain achievements or finding hidden items.
      • -
      -

      The Lord of the Rings: Conquest is a fun and immersive game that lets you experience the world of The Lord of the Rings in a new way. Whether you want to fight for good or evil, you will find plenty of action and adventure in this game. Download it today and enjoy!

      d5da3c52bf
      -
      -
      \ No newline at end of file diff --git a/spaces/quidiaMuxgu/Expedit-SAM/MLA Movie Download Hd Kickass.md b/spaces/quidiaMuxgu/Expedit-SAM/MLA Movie Download Hd Kickass.md deleted file mode 100644 index 8277579ea274f25fa0dbc93f0282d103bc81a08b..0000000000000000000000000000000000000000 --- a/spaces/quidiaMuxgu/Expedit-SAM/MLA Movie Download Hd Kickass.md +++ /dev/null @@ -1,24 +0,0 @@ -
      -

      How to Download MLA Movie in HD Quality from Kickass Torrents

      -

      MLA is a 2018 Telugu political drama film starring Nandamuri Kalyan Ram and Kajal Aggarwal in the lead roles. The film was directed by Upendra Madhav and produced by Kiran Reddy and Bharath Chowdary. The film received mixed reviews from critics and audiences, but was praised for its performances and music.

      -

      MLA Movie Download Hd Kickass


      Download Zip https://geags.com/2uCr2R



      -

      If you missed the chance to watch MLA in theatres or want to enjoy it again in high definition, you can download it from Kickass Torrents, one of the most popular and reliable torrent sites on the internet. Here are the steps to download MLA movie in HD quality from Kickass Torrents:

      -
        -
      1. Visit the official website of Kickass Torrents or use a proxy or VPN service if the site is blocked in your region.
      2. -
      3. Search for "MLA movie" in the search bar and filter the results by quality, size, seeds, peers, etc.
      4. -
      5. Select the torrent file that has the highest number of seeds and peers and matches your preferred quality and size.
      6. -
      7. Download the torrent file or copy the magnet link and paste it in your torrent client.
      8. -
      9. Wait for the download to complete and enjoy watching MLA movie in HD quality.
      10. -
      -

      Note: Downloading movies from torrent sites may be illegal in some countries and may expose you to malware and viruses. Use a VPN service and antivirus software to protect your privacy and security. We do not endorse or promote piracy in any way.

      - -

MLA is a political drama that revolves around Kalyan Ram's character, a young and honest MLA who fights corruption and injustice in his constituency. He falls in love with Kajal Aggarwal's character, the daughter of a corrupt minister who is his political rival. How he overcomes the obstacles and wins her heart forms the crux of the story.

      -

The film has a runtime of 2 hours and 7 minutes and was released on March 23, 2018. It was rated U/A by the Central Board of Film Certification and made on a budget of ₹10 crore. It collected ₹25 crore at the worldwide box office and was declared a hit.

      -

      The film has a rating of 5.6 out of 10 on IMDb and 2.5 out of 5 on Times of India. The film was praised for its action sequences, comedy scenes, and songs, but criticized for its predictable plot, weak screenplay, and lack of originality. The film was also compared to other Telugu films with similar themes and characters.

      - -

      The film has a soundtrack composed by Mani Sharma, who returned to Telugu cinema after a gap of four years. The soundtrack consists of five songs, written by Ramajogayya Sastry and Kasarla Shyam. The songs were well received by the listeners and became chartbusters. The song "Most Wanted Abbayi" was especially popular and was sung by Mamta Sharma and Mani Sharma.

      -

      -

      The film was shot in various locations in India and abroad, including Hyderabad, Vikarabad, Kurnool, Bengaluru, and Portugal. The film was shot by Prasad Murella, who used a Red Epic camera for the cinematography. The film was edited by Thammi Raju and had a production design by Kiran Kumar Manne.

      -

      The film was distributed by Blue Planet Entertainments and People Media Factory. The film had a grand pre-release event on March 21, 2018, at Hyderabad, where the trailer and songs were launched. The film had a worldwide release on March 23, 2018, in over 1000 screens. The film received mixed reviews from critics and audiences, but performed well at the box office.

      d5da3c52bf
      -
      -
      \ No newline at end of file diff --git a/spaces/r3gm/Aesthetic_RVC_Inference_HF/lib/infer/infer_libs/uvr5_pack/lib_v5/layers_123812KB .py b/spaces/r3gm/Aesthetic_RVC_Inference_HF/lib/infer/infer_libs/uvr5_pack/lib_v5/layers_123812KB .py deleted file mode 100644 index 4fc1b5cb85a3327f60cbb9f5deffbeeaaac516ad..0000000000000000000000000000000000000000 --- a/spaces/r3gm/Aesthetic_RVC_Inference_HF/lib/infer/infer_libs/uvr5_pack/lib_v5/layers_123812KB .py +++ /dev/null @@ -1,118 +0,0 @@ -import torch -import torch.nn.functional as F -from torch import nn - -from . import spec_utils - - -class Conv2DBNActiv(nn.Module): - def __init__(self, nin, nout, ksize=3, stride=1, pad=1, dilation=1, activ=nn.ReLU): - super(Conv2DBNActiv, self).__init__() - self.conv = nn.Sequential( - nn.Conv2d( - nin, - nout, - kernel_size=ksize, - stride=stride, - padding=pad, - dilation=dilation, - bias=False, - ), - nn.BatchNorm2d(nout), - activ(), - ) - - def __call__(self, x): - return self.conv(x) - - -class SeperableConv2DBNActiv(nn.Module): - def __init__(self, nin, nout, ksize=3, stride=1, pad=1, dilation=1, activ=nn.ReLU): - super(SeperableConv2DBNActiv, self).__init__() - self.conv = nn.Sequential( - nn.Conv2d( - nin, - nin, - kernel_size=ksize, - stride=stride, - padding=pad, - dilation=dilation, - groups=nin, - bias=False, - ), - nn.Conv2d(nin, nout, kernel_size=1, bias=False), - nn.BatchNorm2d(nout), - activ(), - ) - - def __call__(self, x): - return self.conv(x) - - -class Encoder(nn.Module): - def __init__(self, nin, nout, ksize=3, stride=1, pad=1, activ=nn.LeakyReLU): - super(Encoder, self).__init__() - self.conv1 = Conv2DBNActiv(nin, nout, ksize, 1, pad, activ=activ) - self.conv2 = Conv2DBNActiv(nout, nout, ksize, stride, pad, activ=activ) - - def __call__(self, x): - skip = self.conv1(x) - h = self.conv2(skip) - - return h, skip - - -class Decoder(nn.Module): - def __init__( - self, nin, nout, ksize=3, stride=1, pad=1, activ=nn.ReLU, dropout=False - ): - super(Decoder, self).__init__() - self.conv = Conv2DBNActiv(nin, nout, ksize, 1, pad, activ=activ) - self.dropout = nn.Dropout2d(0.1) if dropout else None - - def __call__(self, x, skip=None): - x = F.interpolate(x, scale_factor=2, mode="bilinear", align_corners=True) - if skip is not None: - skip = spec_utils.crop_center(skip, x) - x = torch.cat([x, skip], dim=1) - h = self.conv(x) - - if self.dropout is not None: - h = self.dropout(h) - - return h - - -class ASPPModule(nn.Module): - def __init__(self, nin, nout, dilations=(4, 8, 16), activ=nn.ReLU): - super(ASPPModule, self).__init__() - self.conv1 = nn.Sequential( - nn.AdaptiveAvgPool2d((1, None)), - Conv2DBNActiv(nin, nin, 1, 1, 0, activ=activ), - ) - self.conv2 = Conv2DBNActiv(nin, nin, 1, 1, 0, activ=activ) - self.conv3 = SeperableConv2DBNActiv( - nin, nin, 3, 1, dilations[0], dilations[0], activ=activ - ) - self.conv4 = SeperableConv2DBNActiv( - nin, nin, 3, 1, dilations[1], dilations[1], activ=activ - ) - self.conv5 = SeperableConv2DBNActiv( - nin, nin, 3, 1, dilations[2], dilations[2], activ=activ - ) - self.bottleneck = nn.Sequential( - Conv2DBNActiv(nin * 5, nout, 1, 1, 0, activ=activ), nn.Dropout2d(0.1) - ) - - def forward(self, x): - _, _, h, w = x.size() - feat1 = F.interpolate( - self.conv1(x), size=(h, w), mode="bilinear", align_corners=True - ) - feat2 = self.conv2(x) - feat3 = self.conv3(x) - feat4 = self.conv4(x) - feat5 = self.conv5(x) - out = torch.cat((feat1, feat2, feat3, feat4, feat5), dim=1) - bottle = self.bottleneck(out) - return bottle diff --git 
a/spaces/radames/UserControllableLT-Latent-Transformer/interface/pixel2style2pixel/models/mtcnn/mtcnn_pytorch/src/box_utils.py b/spaces/radames/UserControllableLT-Latent-Transformer/interface/pixel2style2pixel/models/mtcnn/mtcnn_pytorch/src/box_utils.py deleted file mode 100644 index 1e8081b73639a7d70e4391b3d45417569550ddc6..0000000000000000000000000000000000000000 --- a/spaces/radames/UserControllableLT-Latent-Transformer/interface/pixel2style2pixel/models/mtcnn/mtcnn_pytorch/src/box_utils.py +++ /dev/null @@ -1,238 +0,0 @@ -import numpy as np -from PIL import Image - - -def nms(boxes, overlap_threshold=0.5, mode='union'): - """Non-maximum suppression. - - Arguments: - boxes: a float numpy array of shape [n, 5], - where each row is (xmin, ymin, xmax, ymax, score). - overlap_threshold: a float number. - mode: 'union' or 'min'. - - Returns: - list with indices of the selected boxes - """ - - # if there are no boxes, return the empty list - if len(boxes) == 0: - return [] - - # list of picked indices - pick = [] - - # grab the coordinates of the bounding boxes - x1, y1, x2, y2, score = [boxes[:, i] for i in range(5)] - - area = (x2 - x1 + 1.0) * (y2 - y1 + 1.0) - ids = np.argsort(score) # in increasing order - - while len(ids) > 0: - - # grab index of the largest value - last = len(ids) - 1 - i = ids[last] - pick.append(i) - - # compute intersections - # of the box with the largest score - # with the rest of boxes - - # left top corner of intersection boxes - ix1 = np.maximum(x1[i], x1[ids[:last]]) - iy1 = np.maximum(y1[i], y1[ids[:last]]) - - # right bottom corner of intersection boxes - ix2 = np.minimum(x2[i], x2[ids[:last]]) - iy2 = np.minimum(y2[i], y2[ids[:last]]) - - # width and height of intersection boxes - w = np.maximum(0.0, ix2 - ix1 + 1.0) - h = np.maximum(0.0, iy2 - iy1 + 1.0) - - # intersections' areas - inter = w * h - if mode == 'min': - overlap = inter / np.minimum(area[i], area[ids[:last]]) - elif mode == 'union': - # intersection over union (IoU) - overlap = inter / (area[i] + area[ids[:last]] - inter) - - # delete all boxes where overlap is too big - ids = np.delete( - ids, - np.concatenate([[last], np.where(overlap > overlap_threshold)[0]]) - ) - - return pick - - -def convert_to_square(bboxes): - """Convert bounding boxes to a square form. - - Arguments: - bboxes: a float numpy array of shape [n, 5]. - - Returns: - a float numpy array of shape [n, 5], - squared bounding boxes. - """ - - square_bboxes = np.zeros_like(bboxes) - x1, y1, x2, y2 = [bboxes[:, i] for i in range(4)] - h = y2 - y1 + 1.0 - w = x2 - x1 + 1.0 - max_side = np.maximum(h, w) - square_bboxes[:, 0] = x1 + w * 0.5 - max_side * 0.5 - square_bboxes[:, 1] = y1 + h * 0.5 - max_side * 0.5 - square_bboxes[:, 2] = square_bboxes[:, 0] + max_side - 1.0 - square_bboxes[:, 3] = square_bboxes[:, 1] + max_side - 1.0 - return square_bboxes - - -def calibrate_box(bboxes, offsets): - """Transform bounding boxes to be more like true bounding boxes. - 'offsets' is one of the outputs of the nets. - - Arguments: - bboxes: a float numpy array of shape [n, 5]. - offsets: a float numpy array of shape [n, 4]. - - Returns: - a float numpy array of shape [n, 5]. 
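-
-    Example (illustrative values, an editor's addition): each corner is
-    shifted by offset * box size, so a 41x41 box with offsets
-    (0.1, 0.1, -0.1, -0.1) becomes:
-        calibrate_box(np.array([[10., 10., 50., 50., 0.9]]),
-                      np.array([[0.1, 0.1, -0.1, -0.1]]))
-        # -> [[14.1, 14.1, 45.9, 45.9, 0.9]]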
- """ - x1, y1, x2, y2 = [bboxes[:, i] for i in range(4)] - w = x2 - x1 + 1.0 - h = y2 - y1 + 1.0 - w = np.expand_dims(w, 1) - h = np.expand_dims(h, 1) - - # this is what happening here: - # tx1, ty1, tx2, ty2 = [offsets[:, i] for i in range(4)] - # x1_true = x1 + tx1*w - # y1_true = y1 + ty1*h - # x2_true = x2 + tx2*w - # y2_true = y2 + ty2*h - # below is just more compact form of this - - # are offsets always such that - # x1 < x2 and y1 < y2 ? - - translation = np.hstack([w, h, w, h]) * offsets - bboxes[:, 0:4] = bboxes[:, 0:4] + translation - return bboxes - - -def get_image_boxes(bounding_boxes, img, size=24): - """Cut out boxes from the image. - - Arguments: - bounding_boxes: a float numpy array of shape [n, 5]. - img: an instance of PIL.Image. - size: an integer, size of cutouts. - - Returns: - a float numpy array of shape [n, 3, size, size]. - """ - - num_boxes = len(bounding_boxes) - width, height = img.size - - [dy, edy, dx, edx, y, ey, x, ex, w, h] = correct_bboxes(bounding_boxes, width, height) - img_boxes = np.zeros((num_boxes, 3, size, size), 'float32') - - for i in range(num_boxes): - img_box = np.zeros((h[i], w[i], 3), 'uint8') - - img_array = np.asarray(img, 'uint8') - img_box[dy[i]:(edy[i] + 1), dx[i]:(edx[i] + 1), :] = \ - img_array[y[i]:(ey[i] + 1), x[i]:(ex[i] + 1), :] - - # resize - img_box = Image.fromarray(img_box) - img_box = img_box.resize((size, size), Image.BILINEAR) - img_box = np.asarray(img_box, 'float32') - - img_boxes[i, :, :, :] = _preprocess(img_box) - - return img_boxes - - -def correct_bboxes(bboxes, width, height): - """Crop boxes that are too big and get coordinates - with respect to cutouts. - - Arguments: - bboxes: a float numpy array of shape [n, 5], - where each row is (xmin, ymin, xmax, ymax, score). - width: a float number. - height: a float number. - - Returns: - dy, dx, edy, edx: a int numpy arrays of shape [n], - coordinates of the boxes with respect to the cutouts. - y, x, ey, ex: a int numpy arrays of shape [n], - corrected ymin, xmin, ymax, xmax. - h, w: a int numpy arrays of shape [n], - just heights and widths of boxes. - - in the following order: - [dy, edy, dx, edx, y, ey, x, ex, w, h]. - """ - - x1, y1, x2, y2 = [bboxes[:, i] for i in range(4)] - w, h = x2 - x1 + 1.0, y2 - y1 + 1.0 - num_boxes = bboxes.shape[0] - - # 'e' stands for end - # (x, y) -> (ex, ey) - x, y, ex, ey = x1, y1, x2, y2 - - # we need to cut out a box from the image. - # (x, y, ex, ey) are corrected coordinates of the box - # in the image. - # (dx, dy, edx, edy) are coordinates of the box in the cutout - # from the image. - dx, dy = np.zeros((num_boxes,)), np.zeros((num_boxes,)) - edx, edy = w.copy() - 1.0, h.copy() - 1.0 - - # if box's bottom right corner is too far right - ind = np.where(ex > width - 1.0)[0] - edx[ind] = w[ind] + width - 2.0 - ex[ind] - ex[ind] = width - 1.0 - - # if box's bottom right corner is too low - ind = np.where(ey > height - 1.0)[0] - edy[ind] = h[ind] + height - 2.0 - ey[ind] - ey[ind] = height - 1.0 - - # if box's top left corner is too far left - ind = np.where(x < 0.0)[0] - dx[ind] = 0.0 - x[ind] - x[ind] = 0.0 - - # if box's top left corner is too high - ind = np.where(y < 0.0)[0] - dy[ind] = 0.0 - y[ind] - y[ind] = 0.0 - - return_list = [dy, edy, dx, edx, y, ey, x, ex, w, h] - return_list = [i.astype('int32') for i in return_list] - - return return_list - - -def _preprocess(img): - """Preprocessing step before feeding the network. - - Arguments: - img: a float numpy array of shape [h, w, c]. 
- - Returns: - a float numpy array of shape [1, c, h, w]. - """ - img = img.transpose((2, 0, 1)) - img = np.expand_dims(img, 0) - img = (img - 127.5) * 0.0078125 - return img diff --git a/spaces/radames/diffusers-classifier-labeling/README.md b/spaces/radames/diffusers-classifier-labeling/README.md deleted file mode 100644 index 6930fc46e27e80268232940384a5b19673667bce..0000000000000000000000000000000000000000 --- a/spaces/radames/diffusers-classifier-labeling/README.md +++ /dev/null @@ -1,12 +0,0 @@ ---- -title: Diffusers Classifier Labeling -emoji: 🌖 -colorFrom: indigo -colorTo: blue -sdk: gradio -sdk_version: 3.20.0 -app_file: app.py -pinned: false ---- - -Check out the configuration reference at https://huggingface.co/docs/hub/spaces-config-reference diff --git a/spaces/recenWmenso/ChatGPT-with-Voice-Cloning-for-All/datasets/Black Hawk Down Full Movie In Hindi Dubbed.md b/spaces/recenWmenso/ChatGPT-with-Voice-Cloning-for-All/datasets/Black Hawk Down Full Movie In Hindi Dubbed.md deleted file mode 100644 index bc4ff7584cacec88fe7d6f3e556f2930055fc8d7..0000000000000000000000000000000000000000 --- a/spaces/recenWmenso/ChatGPT-with-Voice-Cloning-for-All/datasets/Black Hawk Down Full Movie In Hindi Dubbed.md +++ /dev/null @@ -1,6 +0,0 @@ -

      Black Hawk Down Full Movie In Hindi Dubbed


      Download Zip ⇒⇒⇒ https://urlgoal.com/2uCJT9



- -Black Hawk Down (2001) - Hollywood - watch the HD movie, newly available and worth watching, streaming online for free. Geo Urdu Movies. 1fdad05405
      -
      -
      -

      diff --git a/spaces/recenWmenso/ChatGPT-with-Voice-Cloning-for-All/datasets/Download Simpack Dll Skyrim Dlc.md b/spaces/recenWmenso/ChatGPT-with-Voice-Cloning-for-All/datasets/Download Simpack Dll Skyrim Dlc.md deleted file mode 100644 index 7d9d341e172f0577049acd08732f8e3d720f701a..0000000000000000000000000000000000000000 --- a/spaces/recenWmenso/ChatGPT-with-Voice-Cloning-for-All/datasets/Download Simpack Dll Skyrim Dlc.md +++ /dev/null @@ -1,11 +0,0 @@ -
      -

Skyrim is an epic fantasy role-playing video game developed by Bethesda Game Studios and published by Bethesda Softworks. For the first time in the history of the series, Skyrim was developed for next-generation consoles. It was released on Xbox 360 on November 11, 2011, and the PC version was released by Bethesda Softworks on the same day.

      -

Medievalcloaks is one of the most beautiful medieval games you can find. It is free and you do not need to download anything. If you feel that you should download the demo version, you can do it by typing this address:. What this demo version shows is a game screen without any background and a very low black

      -

      download simpack dll skyrim dlc


      Downloadhttps://urlgoal.com/2uCKvo



      -

An exceptional and famous app named Free Game Downloads offers you the opportunity to get a free download of Diablo 3. This free game is neither a demo version nor a crack; it is a full, legitimate version you can download from this site. The game is just waiting to appear on your screen. Just download it from here.

      -

The present invention relates to an apparatus for mounting an electrical component, in particular the transducer in a vibration- or acoustic-vibration-damping supporting part of a motor vehicle, and to an associated method.

      -

To encourage people to log off from the internet when they have finished using it, it has been proposed in EP-A1-1 288 640 to write an information message on a display screen at the exit from the internet. This provides a confirmation when a user has really exited.

      -

      -

Along with the login screen, the computer also creates a system log for the session, and this is saved on a file server. The system log includes data about the user, the internet sites accessed, and any programs and attachments that were used.
Once the user logs out of the computer, this system log is transferred to a secure cloud drive to be preserved.

      899543212b
      -
      -
      \ No newline at end of file diff --git a/spaces/recenWmenso/ChatGPT-with-Voice-Cloning-for-All/datasets/Gta Iv 0.1.0.0 Crack HOT!.md b/spaces/recenWmenso/ChatGPT-with-Voice-Cloning-for-All/datasets/Gta Iv 0.1.0.0 Crack HOT!.md deleted file mode 100644 index 2e110e2a3c157cae220dd8a8912d02b2845b154d..0000000000000000000000000000000000000000 --- a/spaces/recenWmenso/ChatGPT-with-Voice-Cloning-for-All/datasets/Gta Iv 0.1.0.0 Crack HOT!.md +++ /dev/null @@ -1,6 +0,0 @@ -

      gta iv 0.1.0.0 crack


      DOWNLOAD ☆☆☆ https://urlgoal.com/2uCMJg



      - - 4fefd39f24
      -
      -
      -

      diff --git a/spaces/remzicam/voicebot_german/app.py b/spaces/remzicam/voicebot_german/app.py deleted file mode 100644 index a37fe8b702535e2dbbe6d87840fcc5010b686d28..0000000000000000000000000000000000000000 --- a/spaces/remzicam/voicebot_german/app.py +++ /dev/null @@ -1,57 +0,0 @@ -"""Deploying AI Voice Chatbot Gradio App.""" -from gradio import Audio, Interface, Textbox -from typing import Tuple - -from utils import (TextGenerationPipeline, from_en_translation, - html_audio_autoplay, stt, to_en_translation, tts, - tts_to_bytesio) - -max_answer_length = 100 -desired_language = "de" -response_generator_pipe = TextGenerationPipeline(max_length=max_answer_length) - - -def main(audio: object) -> Tuple[str, str, str, object]: - """Calls functions for deploying gradio app. - - It responds both verbally and in text - by taking voice input from user. - - Args: - audio (object): recorded speech of user - - Returns: - tuple containing - - - user_speech_text (str) : recognized speech - - bot_response_de (str) : translated answer of bot - - bot_response_en (str) : bot's original answer - - html (object) : autoplayer for bot's speech - """ - user_speech_text = stt(audio, desired_language) - tranlated_text = to_en_translation(user_speech_text, desired_language) - bot_response_en = response_generator_pipe(tranlated_text) - bot_response_de = from_en_translation(bot_response_en, desired_language) - bot_voice = tts(bot_response_de, desired_language) - bot_voice_bytes = tts_to_bytesio(bot_voice) - html = html_audio_autoplay(bot_voice_bytes) - return user_speech_text, bot_response_de, bot_response_en, html - - -Interface( - fn=main, - inputs=[ - Audio( - source="microphone", - type="filepath", - ), - ], - outputs=[ - Textbox(label="You said: "), - Textbox(label="AI said: "), - Textbox(label="AI said (English): "), - "html", - ], - live=True, - allow_flagging="never", -).launch(debug=True) diff --git a/spaces/riccorl/relik-entity-linking/relik/reader/pytorch_modules/optim/adamw_with_warmup.py b/spaces/riccorl/relik-entity-linking/relik/reader/pytorch_modules/optim/adamw_with_warmup.py deleted file mode 100644 index dfaecc4ca3d1c366f25962db4d0024a5b986fd50..0000000000000000000000000000000000000000 --- a/spaces/riccorl/relik-entity-linking/relik/reader/pytorch_modules/optim/adamw_with_warmup.py +++ /dev/null @@ -1,66 +0,0 @@ -from typing import List - -import torch -import transformers -from torch.optim import AdamW - - -class AdamWWithWarmupOptimizer: - def __init__( - self, - lr: float, - warmup_steps: int, - total_steps: int, - weight_decay: float, - no_decay_params: List[str], - ): - self.lr = lr - self.warmup_steps = warmup_steps - self.total_steps = total_steps - self.weight_decay = weight_decay - self.no_decay_params = no_decay_params - - def group_params(self, module: torch.nn.Module) -> list: - if self.no_decay_params is not None: - optimizer_grouped_parameters = [ - { - "params": [ - p - for n, p in module.named_parameters() - if not any(nd in n for nd in self.no_decay_params) - ], - "weight_decay": self.weight_decay, - }, - { - "params": [ - p - for n, p in module.named_parameters() - if any(nd in n for nd in self.no_decay_params) - ], - "weight_decay": 0.0, - }, - ] - - else: - optimizer_grouped_parameters = [ - {"params": module.parameters(), "weight_decay": self.weight_decay} - ] - - return optimizer_grouped_parameters - - def __call__(self, module: torch.nn.Module): - optimizer_grouped_parameters = self.group_params(module) - optimizer = AdamW( - optimizer_grouped_parameters, lr=self.lr, 
weight_decay=self.weight_decay - ) - scheduler = transformers.get_linear_schedule_with_warmup( - optimizer, self.warmup_steps, self.total_steps - ) - return { - "optimizer": optimizer, - "lr_scheduler": { - "scheduler": scheduler, - "interval": "step", - "frequency": 1, - }, - } diff --git a/spaces/robin0307/MMOCR/configs/textrecog/nrtr/nrtr_r31_1by16_1by8_academic.py b/spaces/robin0307/MMOCR/configs/textrecog/nrtr/nrtr_r31_1by16_1by8_academic.py deleted file mode 100644 index b7adc0d30cda5e5556821ff941d6e00dcd3b4ba7..0000000000000000000000000000000000000000 --- a/spaces/robin0307/MMOCR/configs/textrecog/nrtr/nrtr_r31_1by16_1by8_academic.py +++ /dev/null @@ -1,48 +0,0 @@ -_base_ = [ - '../../_base_/default_runtime.py', - '../../_base_/schedules/schedule_adam_step_6e.py', - '../../_base_/recog_pipelines/nrtr_pipeline.py', - '../../_base_/recog_datasets/ST_MJ_train.py', - '../../_base_/recog_datasets/academic_test.py' -] - -train_list = {{_base_.train_list}} -test_list = {{_base_.test_list}} - -train_pipeline = {{_base_.train_pipeline}} -test_pipeline = {{_base_.test_pipeline}} - -label_convertor = dict( - type='AttnConvertor', dict_type='DICT90', with_unknown=True) - -model = dict( - type='NRTR', - backbone=dict( - type='ResNet31OCR', - layers=[1, 2, 5, 3], - channels=[32, 64, 128, 256, 512, 512], - stage4_pool_cfg=dict(kernel_size=(2, 1), stride=(2, 1)), - last_stage_pool=True), - encoder=dict(type='NRTREncoder'), - decoder=dict(type='NRTRDecoder'), - loss=dict(type='TFLoss'), - label_convertor=label_convertor, - max_seq_len=40) - -data = dict( - samples_per_gpu=128, - workers_per_gpu=4, - train=dict( - type='UniformConcatDataset', - datasets=train_list, - pipeline=train_pipeline), - val=dict( - type='UniformConcatDataset', - datasets=test_list, - pipeline=test_pipeline), - test=dict( - type='UniformConcatDataset', - datasets=test_list, - pipeline=test_pipeline)) - -evaluation = dict(interval=1, metric='acc') diff --git a/spaces/rockeycoss/Prompt-Segment-Anything-Demo/mmdet/models/detectors/scnet.py b/spaces/rockeycoss/Prompt-Segment-Anything-Demo/mmdet/models/detectors/scnet.py deleted file mode 100644 index a361d81c3aa62de0ff98b303cb5e0b838b8045fa..0000000000000000000000000000000000000000 --- a/spaces/rockeycoss/Prompt-Segment-Anything-Demo/mmdet/models/detectors/scnet.py +++ /dev/null @@ -1,11 +0,0 @@ -# Copyright (c) OpenMMLab. All rights reserved. 
-from ..builder import DETECTORS -from .cascade_rcnn import CascadeRCNN - - -@DETECTORS.register_module() -class SCNet(CascadeRCNN): - """Implementation of `SCNet `_""" - - def __init__(self, **kwargs): - super(SCNet, self).__init__(**kwargs) diff --git a/spaces/rohanshaw/Bard/README.md b/spaces/rohanshaw/Bard/README.md deleted file mode 100644 index 1d4ee71d771a8d6356c9103e7354b5d0c756b601..0000000000000000000000000000000000000000 --- a/spaces/rohanshaw/Bard/README.md +++ /dev/null @@ -1,13 +0,0 @@ ---- -title: Bard -emoji: 💬 -colorFrom: red -colorTo: red -sdk: gradio -sdk_version: 3.34.0 -app_file: app.py -pinned: true -license: mit ---- - -Check out the configuration reference at https://huggingface.co/docs/hub/spaces-config-reference \ No newline at end of file diff --git a/spaces/rorallitri/biomedical-language-models/logs/Apowersoft ApowerCompress 3.1.0 Activation Download.md b/spaces/rorallitri/biomedical-language-models/logs/Apowersoft ApowerCompress 3.1.0 Activation Download.md deleted file mode 100644 index 3397fe4925bca0cda86e0d0ad22c8f399eb7d6d6..0000000000000000000000000000000000000000 --- a/spaces/rorallitri/biomedical-language-models/logs/Apowersoft ApowerCompress 3.1.0 Activation Download.md +++ /dev/null @@ -1,120 +0,0 @@ -
      -

      How to Download and Activate Apowersoft ApowerCompress 3.1.0

      - -

If you are looking for file compression software that is easy to use, fast, and reliable, you should try Apowersoft ApowerCompress 3.1.0. This software can compress pictures, videos, and PDFs in one click, with a high compression rate and quality. It also supports various formats, resolutions, and compression types. In this article, we will show you how to download and activate Apowersoft ApowerCompress 3.1.0 for free.

      - -

      Step 1: Download Apowersoft ApowerCompress 3.1.0

      - -

      The first step is to download the software from the official website of Apowersoft. You can click on the link below to go to the download page.

      -

      Apowersoft ApowerCompress 3.1.0 Activation Download


      Download Zip ★★★★★ https://tinurll.com/2uznH2



      - -Download Apowersoft ApowerCompress 3.1.0 - -

      Once you are on the download page, you will see a button that says "Download Windows". Click on it and the software will start downloading automatically.

      - -

      The file size is about 51 MB, so it should not take long to download. After the download is complete, you can find the file in your default download folder.

      - -

      Step 2: Install Apowersoft ApowerCompress 3.1.0

      - -

      The next step is to install the software on your computer. To do that, you need to double-click on the downloaded file and follow the instructions on the screen.

      - -

      You will see a window that asks you to choose a language for the installation. You can select your preferred language from the drop-down menu and click "OK".

      - -

      Then, you will see another window that asks you to agree to the terms and conditions of the software. You need to check the box that says "I accept the agreement" and click "Next".

      - -

      After that, you will see a window that asks you to choose a destination folder for the software. You can keep the default folder or change it by clicking on "Browse". Then, click "Next".

      - -

      Finally, you will see a window that asks you to confirm the installation settings. You can review them and click "Install" to start the installation process.

      - -

      The installation process should take a few minutes. When it is done, you will see a window that says "Completing the ApowerCompress Setup Wizard". You can check the box that says "Launch ApowerCompress" and click "Finish" to open the software.

      -

      - -

      Step 3: Activate Apowersoft ApowerCompress 3.1.0

      - -

      The last step is to activate the software with a crack file. To do that, you need to download the crack file from the link below.

      - -Download Apowersoft ApowerCompress 3.1.0 Crack - -

      The crack file is a zip file that contains two files: apowercompress.exe and apowercompress.dll. You need to extract them to your desktop or any other folder.

      - -

      Then, you need to copy both files and paste them into the installation folder of Apowersoft ApowerCompress 3.1.0. The default installation folder is C:\Program Files (x86)\Apowersoft\ApowerCompress.

      - -

      When you paste the files, you will see a window that asks you to confirm replacing the existing files. You need to click "Replace the files in the destination" to overwrite them.

      - -

      After that, you can launch the software from your desktop or start menu. You will see a window that says "ApowerCompress has been activated successfully". You can click "OK" and enjoy using the software without any limitations.

      - -

      Conclusion

      - -

      Apowersoft ApowerCompress 3.1.0 is a powerful and easy-to-use file compressor that can help you reduce file size and save disk space. It supports compressing pictures, videos, and PDFs in one click, with high compression rate and quality. It also supports various formats, resolutions, and compression types.

      - -

      In this article, we have shown you how to download and activate Apowersoft ApowerCompress 3.1.0 for free with a crack file. We hope this article was helpful for you and you can enjoy using this software without any problems.

      -

      Step 4: Use Apowersoft ApowerCompress 3.1.0 to Compress Files

      - -

      Now that you have downloaded and activated Apowersoft ApowerCompress 3.1.0, you can start using it to compress your files in one click. Here are some simple steps to follow:

      - -
        -
      • Launch the software from your desktop or start menu.
      • -
      • Select the type of file you want to compress: image, video, or PDF.
      • -
      • Add the files you want to compress by clicking on the "Add file" or "Add folder" button, or by dragging and dropping them into the software.
      • -
      • Choose the compression type: size, normal, or quality. You can also customize the output format, resolution, width, height, frame rate, etc.
      • -
      • Click on the "Compress" button and wait for the process to finish.
      • -
      • Find the compressed files in the output folder and check their size and quality.
      • -
      - -

A video guide showing how to use Apowersoft ApowerCompress 3.1.0 is also available on the Apowersoft website.

      - - - -

      Benefits of Using Apowersoft ApowerCompress 3.1.0

      - -

Apowersoft ApowerCompress 3.1.0 is not just file compression software but also a file optimization tool that can help you save disk space, reduce upload and download times, and improve the file-sharing experience. Here are some of the benefits of using this software:

      - -
        -
      • It supports compressing images, videos, and PDFs in one click, with high compression rate and quality.
      • -
      • It supports various formats, resolutions, and compression types.
      • -
      • It allows you to adjust video output resolution, crop video, select output format, batch compress files, and so on.
      • -
      • It adopts the most advanced compression technology to ensure fast and stable performance.
      • -
      • It is easy to use, with a beautiful and intuitive user-interface.
      • -
      • It is free to download and activate with a crack file.
      • -
      - -

      Conclusion

      - -

      In this article, we have shown you how to download and activate Apowersoft ApowerCompress 3.1.0 for free with a crack file. We have also shown you how to use it to compress your files in one click, with high compression rate and quality. We have also listed some of the benefits of using this software.

      - -

      Apowersoft ApowerCompress 3.1.0 is a powerful and easy-to-use file compressor that can help you reduce file size and save disk space. It supports compressing pictures, videos, and PDFs in one click, with high compression rate and quality. It also supports various formats, resolutions, and compression types.

      - -

If you are looking for file compression software that is easy to use, fast, and reliable, you should try Apowersoft ApowerCompress 3.1.0. You can download it from the links below and activate it with a crack file.

      - -Download Apowersoft ApowerCompress 3.1.0 - -Download Apowersoft ApowerCompress 3.1.0 Crack - -

      We hope this article was helpful for you and you can enjoy using this software without any problems.

      -

      What Users Say About Apowersoft ApowerCompress 3.1.0

      - -

      Apowersoft ApowerCompress 3.1.0 has received many positive reviews from users who have tried it and found it useful and effective. Here are some of the testimonials from real users:

      - -
      -

      "I have used more than twenty kinds of compression tools. They have different advantages and disadvantages. ApowerCompress is the best one for me. I like its simple interface, design, and functions. Thumbs up!" - Lily

      -
      - -
      -

      "I intended to share my video but the size exceeded the limitation. Luckily, I can now use this free file optimizer to reduce the size of my video. It is really very convenient to use. Thanks a lot for developing such an excellent tool!" - Peter

      -
      - -
      -

      "It is free to use, has no adware and I even don't need to sign up for an account. It saved me much time. I really want to say thank-you to the ApowerCompress team. Keep on improving your program! LOL." - Gloria

      -
      - -

      You can read more user reviews on Trustpilot, TechRadar, CNET Download, and other websites.

      - -

      How to Get Help and Support for Apowersoft ApowerCompress 3.1.0

      - -

      If you have any questions or problems with Apowersoft ApowerCompress 3.1.0, you can get help and support from the official website of Apowersoft. You can find a video guide, a user manual, a FAQ page, and a forum on the website.

      - -

      You can also contact the customer service team by email or live chat. The email address is support@apowersoft.com and the live chat is available on the website www.apowersoft.com.

      - -

      The customer service team is friendly and professional, and they will try their best to solve your issues as soon as possible.

      3cee63e6c2
      -
      -
      \ No newline at end of file diff --git a/spaces/rorallitri/biomedical-language-models/logs/Glenn Gould On Television The Complete CBC Broadcasts 1954 To 1977 DVD 1 10.md b/spaces/rorallitri/biomedical-language-models/logs/Glenn Gould On Television The Complete CBC Broadcasts 1954 To 1977 DVD 1 10.md deleted file mode 100644 index 1a9b9d0f61c17813a8a924f1850ed8a01d78d9a6..0000000000000000000000000000000000000000 --- a/spaces/rorallitri/biomedical-language-models/logs/Glenn Gould On Television The Complete CBC Broadcasts 1954 To 1977 DVD 1 10.md +++ /dev/null @@ -1,5 +0,0 @@ - -

Sony has gathered together almost 20 hours of documentary, performance, interview, spoken word, short illustrated lectures, and music-making featuring the Canadian pianist Glenn Gould (1932-1982), as he appeared on CBC television (the Canadian Broadcasting Corporation) between 1954 and 1977. The ten DVDs are priced so reasonably (at not much over $10 each) that the collection represents a real bargain. It will appeal to listeners familiar with those broadcasts, to those curious about Gould, and to anyone willing to set aside the exaggerations and the cult surrounding the pianist in order to experience highly significant and compelling music-making.

      -

      Glenn Gould on Television The Complete CBC Broadcasts 1954 to 1977 DVD 1 10


      Download File - https://tinurll.com/2uzmC4



      aaccfb2cb3
      -
      -
      \ No newline at end of file diff --git a/spaces/russel0719/deepfake_detector/training/__init__.py b/spaces/russel0719/deepfake_detector/training/__init__.py deleted file mode 100644 index e69de29bb2d1d6434b8b29ae775ad8c2e48c5391..0000000000000000000000000000000000000000 diff --git a/spaces/safi842/FashionGen/models/stylegan2/stylegan2-pytorch/dataset.py b/spaces/safi842/FashionGen/models/stylegan2/stylegan2-pytorch/dataset.py deleted file mode 100644 index 7713ea2f8bc94d202d2dfbe830af3cb96b1e803d..0000000000000000000000000000000000000000 --- a/spaces/safi842/FashionGen/models/stylegan2/stylegan2-pytorch/dataset.py +++ /dev/null @@ -1,40 +0,0 @@ -from io import BytesIO - -import lmdb -from PIL import Image -from torch.utils.data import Dataset - - -class MultiResolutionDataset(Dataset): - def __init__(self, path, transform, resolution=256): - self.env = lmdb.open( - path, - max_readers=32, - readonly=True, - lock=False, - readahead=False, - meminit=False, - ) - - if not self.env: - raise IOError('Cannot open lmdb dataset', path) - - with self.env.begin(write=False) as txn: - self.length = int(txn.get('length'.encode('utf-8')).decode('utf-8')) - - self.resolution = resolution - self.transform = transform - - def __len__(self): - return self.length - - def __getitem__(self, index): - with self.env.begin(write=False) as txn: - key = f'{self.resolution}-{str(index).zfill(5)}'.encode('utf-8') - img_bytes = txn.get(key) - - buffer = BytesIO(img_bytes) - img = Image.open(buffer) - img = self.transform(img) - - return img diff --git a/spaces/sarthakrw/web-query/app.py b/spaces/sarthakrw/web-query/app.py deleted file mode 100644 index dc5ba0f4bfa0a757bcb914a508f2ce29163675cc..0000000000000000000000000000000000000000 --- a/spaces/sarthakrw/web-query/app.py +++ /dev/null @@ -1,61 +0,0 @@ -from ctransformers import AutoModelForCausalLM - -model = AutoModelForCausalLM.from_pretrained( - model_path_or_repo_id="TheBloke/Llama-2-7B-chat-GGML", - max_new_tokens=512, - temperature=0.6, - top_p=0.95, - repetition_penalty=1.15 -) - -system_message = """ -You are a helpful, respectful and honest assistant. Your job is to answer the users query as best as possible given the Web Page Content. Your answers should not include any harmful, unethical, racist, sexist, toxic, dangerous, or illegal content. If you DO NOT KNOW THE ANSWER DO NOT SHARE FALSE INFORMATION. -You have been given scraped text content of a webpage under the section called "Web Page Content". Using this information answer the users query. However, if the webpage DOES NOT contain the answer to the query, you CAN answer based on your existing knowledge IF you are sure of the answer, but ALWAYS let the user know when doing so. 
-""" - -def generate_prompt(system_message, context, prompt): - prompt=f'''[INST] <> -{system_message} -<> - -Web Page Content: -``` -{context} -``` - -{prompt} [/INST]''' - - return prompt - -import requests -from bs4 import BeautifulSoup -import re - -def scraper(url): - req = requests.get(url) - soup = BeautifulSoup(req.content, "html.parser") - context = soup.get_text() - relevant_text = soup.get_text() - cleaned_text = re.sub(r'\s+', ' ', relevant_text).strip() - - return cleaned_text - -def run(url, input): - context = scraper(url) - response = model(generate_prompt(system_message=system_message, context=context, prompt=input)) - - return response - -import gradio as gr - -# Create a Gradio interface -iface = gr.Interface( - fn=run, - inputs=["text","text"], - outputs="text", - title="Web Query App", - description="Enter the webpage url and your query\nIMPORTANT: Larger webpages are likely to cause error due to lack of computational resources" -) - -# Launch the interface -iface.launch(inline=False) \ No newline at end of file diff --git a/spaces/scedlatioru/img-to-music/example/Alcpt Form 63 PORTABLE.md b/spaces/scedlatioru/img-to-music/example/Alcpt Form 63 PORTABLE.md deleted file mode 100644 index 61991c83c75c50a6291e3c0cd5d858dc5137d2eb..0000000000000000000000000000000000000000 --- a/spaces/scedlatioru/img-to-music/example/Alcpt Form 63 PORTABLE.md +++ /dev/null @@ -1,26 +0,0 @@ - -

      What is ALCPT Form 63 and How to Prepare for It?

      -

      ALCPT Form 63 is an English language test designed to measure English ability levels through listening and reading. It is part of the American Language Course Placement Test (ALCPT) series, which are used by the Defense Language Institute English Language Center (DLIELC) to place students in appropriate courses and programs.

      -

      alcpt form 63


      Download File >>> https://gohhs.com/2uEzZH



      -

      The test consists of two parts: Part I - Listening and Part II - Reading. Part I has 56 multiple-choice questions that test the comprehension of spoken English in various contexts, such as conversations, announcements, instructions, and descriptions. Part II has 54 multiple-choice questions that test the comprehension of written English in various formats, such as letters, forms, charts, and passages.

      -

      The test has a time limit of 26 minutes for Part I and 30 minutes for Part II. The test is scored on a scale of 10 to 100, with higher scores indicating higher levels of proficiency. The test results are used to determine the appropriate level of instruction for each student, ranging from level 10 (beginner) to level 100 (advanced).

      -

      To prepare for ALCPT Form 63, students should practice their listening and reading skills in English as much as possible. They should also familiarize themselves with the format and types of questions on the test. There are many online resources that offer practice tests and sample questions for ALCPT Form 63, such as YouTube videos[^1^], PDF documents[^2^] [^3^], and websites. Students should also review the vocabulary and grammar topics that are covered on the test, such as verb forms, prepositions, pronouns, conjunctions, and modifiers.

      -

      By taking ALCPT Form 63, students can assess their current level of English proficiency and identify their strengths and weaknesses. The test can also help them set realistic goals and plan their learning strategies accordingly. ALCPT Form 63 is a useful tool for students who want to improve their English skills and achieve their academic and professional objectives.

      - -

      How to Improve Your English Speaking Skills

      -

      Speaking is often the hardest of the four language skills, but as soon as you can speak a little English there are lots of ways to improve quickly and have tons of fun. Here are some tips to help you improve your English speaking skills and communicate more confidently and effectively.

      -

      -
        -
• Expand your vocabulary. Learning new words every day is a good way to widen your vocabulary and express yourself more clearly. You can use dictionaries, flashcards, word games, and online resources to learn new words and review them regularly. You should also study the grammar, pronunciation, and usage of the words you learn.
      • -
      • Improve your pronunciation. Pronouncing words correctly can help you avoid misunderstandings and sound more natural. You can use online tools, such as dictionaries, videos, and apps, to listen to how words are pronounced by native speakers and practice them yourself. You can also record yourself and compare your pronunciation with the original.
      • -
      • Learn the natural flow of English. English has its own rhythm, intonation, and stress patterns that make it sound different from other languages. To improve your speaking skills, you need to learn how to speak English in a way that sounds natural and fluent. You can do this by listening to authentic English materials, such as podcasts, movies, songs, and news, and imitating how the speakers talk. You can also use speech shadowing techniques, which involve repeating what you hear as closely as possible.
      • -
      -

      How to Improve Your English Reading Skills

      -

Reading is a mental process that takes time to develop: your mind has to attach meaning to the words, phrases, and expressions represented by symbols, and it also has to understand the grammar and structure of the language used in the passage. Here are some tips to help you improve your English reading skills and enjoy reading more.

      -
        -
      • Read more in English. The more you read, the more you expose yourself to different types of texts, vocabulary, grammar, and styles of writing. You can choose materials that interest you, such as books, magazines, blogs, articles, or comics. You can also read materials that are related to your goals or needs, such as textbooks, manuals, or reports.
      • -
      • Learn to read for specific purposes. Depending on what you want to achieve from reading, you can use different strategies and skills to help you understand the text better. For example, if you want to get the main idea of a text, you can skim it quickly and look for keywords or headings. If you want to find specific information in a text, you can scan it for relevant words or numbers. If you want to analyze a text in depth, you can read it slowly and carefully and take notes or highlight important points.
      • -
      • Learn from model texts. Reading texts that are well-written and structured can help you improve your own writing skills as well as your reading skills. You can learn how to organize your ideas, use transitions, support your arguments, and use appropriate language and tone for different audiences and purposes. You can also learn new vocabulary and expressions by looking up unfamiliar words or guessing their meaning from context.
      • -

      -
      -
      \ No newline at end of file diff --git a/spaces/sczhou/ProPainter/RAFT/datasets.py b/spaces/sczhou/ProPainter/RAFT/datasets.py deleted file mode 100644 index 3411fdacfb900024005e8997d07c600e963a95ca..0000000000000000000000000000000000000000 --- a/spaces/sczhou/ProPainter/RAFT/datasets.py +++ /dev/null @@ -1,235 +0,0 @@ -# Data loading based on https://github.com/NVIDIA/flownet2-pytorch - -import numpy as np -import torch -import torch.utils.data as data -import torch.nn.functional as F - -import os -import math -import random -from glob import glob -import os.path as osp - -from utils import frame_utils -from utils.augmentor import FlowAugmentor, SparseFlowAugmentor - - -class FlowDataset(data.Dataset): - def __init__(self, aug_params=None, sparse=False): - self.augmentor = None - self.sparse = sparse - if aug_params is not None: - if sparse: - self.augmentor = SparseFlowAugmentor(**aug_params) - else: - self.augmentor = FlowAugmentor(**aug_params) - - self.is_test = False - self.init_seed = False - self.flow_list = [] - self.image_list = [] - self.extra_info = [] - - def __getitem__(self, index): - - if self.is_test: - img1 = frame_utils.read_gen(self.image_list[index][0]) - img2 = frame_utils.read_gen(self.image_list[index][1]) - img1 = np.array(img1).astype(np.uint8)[..., :3] - img2 = np.array(img2).astype(np.uint8)[..., :3] - img1 = torch.from_numpy(img1).permute(2, 0, 1).float() - img2 = torch.from_numpy(img2).permute(2, 0, 1).float() - return img1, img2, self.extra_info[index] - - if not self.init_seed: - worker_info = torch.utils.data.get_worker_info() - if worker_info is not None: - torch.manual_seed(worker_info.id) - np.random.seed(worker_info.id) - random.seed(worker_info.id) - self.init_seed = True - - index = index % len(self.image_list) - valid = None - if self.sparse: - flow, valid = frame_utils.readFlowKITTI(self.flow_list[index]) - else: - flow = frame_utils.read_gen(self.flow_list[index]) - - img1 = frame_utils.read_gen(self.image_list[index][0]) - img2 = frame_utils.read_gen(self.image_list[index][1]) - - flow = np.array(flow).astype(np.float32) - img1 = np.array(img1).astype(np.uint8) - img2 = np.array(img2).astype(np.uint8) - - # grayscale images - if len(img1.shape) == 2: - img1 = np.tile(img1[...,None], (1, 1, 3)) - img2 = np.tile(img2[...,None], (1, 1, 3)) - else: - img1 = img1[..., :3] - img2 = img2[..., :3] - - if self.augmentor is not None: - if self.sparse: - img1, img2, flow, valid = self.augmentor(img1, img2, flow, valid) - else: - img1, img2, flow = self.augmentor(img1, img2, flow) - - img1 = torch.from_numpy(img1).permute(2, 0, 1).float() - img2 = torch.from_numpy(img2).permute(2, 0, 1).float() - flow = torch.from_numpy(flow).permute(2, 0, 1).float() - - if valid is not None: - valid = torch.from_numpy(valid) - else: - valid = (flow[0].abs() < 1000) & (flow[1].abs() < 1000) - - return img1, img2, flow, valid.float() - - - def __rmul__(self, v): - self.flow_list = v * self.flow_list - self.image_list = v * self.image_list - return self - - def __len__(self): - return len(self.image_list) - - -class MpiSintel(FlowDataset): - def __init__(self, aug_params=None, split='training', root='datasets/Sintel', dstype='clean'): - super(MpiSintel, self).__init__(aug_params) - flow_root = osp.join(root, split, 'flow') - image_root = osp.join(root, split, dstype) - - if split == 'test': - self.is_test = True - - for scene in os.listdir(image_root): - image_list = sorted(glob(osp.join(image_root, scene, '*.png'))) - for i in range(len(image_list)-1): - self.image_list 
+= [ [image_list[i], image_list[i+1]] ] - self.extra_info += [ (scene, i) ] # scene and frame_id - - if split != 'test': - self.flow_list += sorted(glob(osp.join(flow_root, scene, '*.flo'))) - - -class FlyingChairs(FlowDataset): - def __init__(self, aug_params=None, split='train', root='datasets/FlyingChairs_release/data'): - super(FlyingChairs, self).__init__(aug_params) - - images = sorted(glob(osp.join(root, '*.ppm'))) - flows = sorted(glob(osp.join(root, '*.flo'))) - assert (len(images)//2 == len(flows)) - - split_list = np.loadtxt('chairs_split.txt', dtype=np.int32) - for i in range(len(flows)): - xid = split_list[i] - if (split=='training' and xid==1) or (split=='validation' and xid==2): - self.flow_list += [ flows[i] ] - self.image_list += [ [images[2*i], images[2*i+1]] ] - - -class FlyingThings3D(FlowDataset): - def __init__(self, aug_params=None, root='datasets/FlyingThings3D', dstype='frames_cleanpass'): - super(FlyingThings3D, self).__init__(aug_params) - - for cam in ['left']: - for direction in ['into_future', 'into_past']: - image_dirs = sorted(glob(osp.join(root, dstype, 'TRAIN/*/*'))) - image_dirs = sorted([osp.join(f, cam) for f in image_dirs]) - - flow_dirs = sorted(glob(osp.join(root, 'optical_flow/TRAIN/*/*'))) - flow_dirs = sorted([osp.join(f, direction, cam) for f in flow_dirs]) - - for idir, fdir in zip(image_dirs, flow_dirs): - images = sorted(glob(osp.join(idir, '*.png')) ) - flows = sorted(glob(osp.join(fdir, '*.pfm')) ) - for i in range(len(flows)-1): - if direction == 'into_future': - self.image_list += [ [images[i], images[i+1]] ] - self.flow_list += [ flows[i] ] - elif direction == 'into_past': - self.image_list += [ [images[i+1], images[i]] ] - self.flow_list += [ flows[i+1] ] - - -class KITTI(FlowDataset): - def __init__(self, aug_params=None, split='training', root='datasets/KITTI'): - super(KITTI, self).__init__(aug_params, sparse=True) - if split == 'testing': - self.is_test = True - - root = osp.join(root, split) - images1 = sorted(glob(osp.join(root, 'image_2/*_10.png'))) - images2 = sorted(glob(osp.join(root, 'image_2/*_11.png'))) - - for img1, img2 in zip(images1, images2): - frame_id = img1.split('/')[-1] - self.extra_info += [ [frame_id] ] - self.image_list += [ [img1, img2] ] - - if split == 'training': - self.flow_list = sorted(glob(osp.join(root, 'flow_occ/*_10.png'))) - - -class HD1K(FlowDataset): - def __init__(self, aug_params=None, root='datasets/HD1k'): - super(HD1K, self).__init__(aug_params, sparse=True) - - seq_ix = 0 - while 1: - flows = sorted(glob(os.path.join(root, 'hd1k_flow_gt', 'flow_occ/%06d_*.png' % seq_ix))) - images = sorted(glob(os.path.join(root, 'hd1k_input', 'image_2/%06d_*.png' % seq_ix))) - - if len(flows) == 0: - break - - for i in range(len(flows)-1): - self.flow_list += [flows[i]] - self.image_list += [ [images[i], images[i+1]] ] - - seq_ix += 1 - - -def fetch_dataloader(args, TRAIN_DS='C+T+K+S+H'): - """ Create the data loader for the corresponding trainign set """ - - if args.stage == 'chairs': - aug_params = {'crop_size': args.image_size, 'min_scale': -0.1, 'max_scale': 1.0, 'do_flip': True} - train_dataset = FlyingChairs(aug_params, split='training') - - elif args.stage == 'things': - aug_params = {'crop_size': args.image_size, 'min_scale': -0.4, 'max_scale': 0.8, 'do_flip': True} - clean_dataset = FlyingThings3D(aug_params, dstype='frames_cleanpass') - final_dataset = FlyingThings3D(aug_params, dstype='frames_finalpass') - train_dataset = clean_dataset + final_dataset - - elif args.stage == 'sintel': - aug_params 
= {'crop_size': args.image_size, 'min_scale': -0.2, 'max_scale': 0.6, 'do_flip': True} - things = FlyingThings3D(aug_params, dstype='frames_cleanpass') - sintel_clean = MpiSintel(aug_params, split='training', dstype='clean') - sintel_final = MpiSintel(aug_params, split='training', dstype='final') - - if TRAIN_DS == 'C+T+K+S+H': - kitti = KITTI({'crop_size': args.image_size, 'min_scale': -0.3, 'max_scale': 0.5, 'do_flip': True}) - hd1k = HD1K({'crop_size': args.image_size, 'min_scale': -0.5, 'max_scale': 0.2, 'do_flip': True}) - train_dataset = 100*sintel_clean + 100*sintel_final + 200*kitti + 5*hd1k + things - - elif TRAIN_DS == 'C+T+K/S': - train_dataset = 100*sintel_clean + 100*sintel_final + things - - elif args.stage == 'kitti': - aug_params = {'crop_size': args.image_size, 'min_scale': -0.2, 'max_scale': 0.4, 'do_flip': False} - train_dataset = KITTI(aug_params, split='training') - - train_loader = data.DataLoader(train_dataset, batch_size=args.batch_size, - pin_memory=False, shuffle=True, num_workers=4, drop_last=True) - - print('Training with %d image pairs' % len(train_dataset)) - return train_loader - diff --git a/spaces/sczhou/ProPainter/web-demos/hugging_face/tracker/utils/__init__.py b/spaces/sczhou/ProPainter/web-demos/hugging_face/tracker/utils/__init__.py deleted file mode 100644 index e69de29bb2d1d6434b8b29ae775ad8c2e48c5391..0000000000000000000000000000000000000000 diff --git a/spaces/segments-tobias/conex/espnet2/tts/tacotron2.py b/spaces/segments-tobias/conex/espnet2/tts/tacotron2.py deleted file mode 100644 index d5c8b3cc71482dbd7a4357a120a1b43c21115d69..0000000000000000000000000000000000000000 --- a/spaces/segments-tobias/conex/espnet2/tts/tacotron2.py +++ /dev/null @@ -1,463 +0,0 @@ -# Copyright 2020 Nagoya University (Tomoki Hayashi) -# Apache 2.0 (http://www.apache.org/licenses/LICENSE-2.0) - -"""Tacotron 2 related modules for ESPnet2.""" - -import logging -from typing import Dict -from typing import Sequence -from typing import Tuple - -import torch -import torch.nn.functional as F -from typeguard import check_argument_types - -from espnet.nets.pytorch_backend.e2e_tts_tacotron2 import GuidedAttentionLoss -from espnet.nets.pytorch_backend.e2e_tts_tacotron2 import Tacotron2Loss -from espnet.nets.pytorch_backend.nets_utils import make_pad_mask -from espnet.nets.pytorch_backend.rnn.attentions import AttForward -from espnet.nets.pytorch_backend.rnn.attentions import AttForwardTA -from espnet.nets.pytorch_backend.rnn.attentions import AttLoc -from espnet.nets.pytorch_backend.tacotron2.decoder import Decoder -from espnet.nets.pytorch_backend.tacotron2.encoder import Encoder -from espnet2.torch_utils.device_funcs import force_gatherable -from espnet2.tts.abs_tts import AbsTTS -from espnet2.tts.gst.style_encoder import StyleEncoder - - -class Tacotron2(AbsTTS): - """Tacotron2 module for end-to-end text-to-speech. - - This is a module of Spectrogram prediction network in Tacotron2 described - in `Natural TTS Synthesis by Conditioning WaveNet on Mel Spectrogram Predictions`_, - which converts the sequence of characters into the sequence of Mel-filterbanks. - - .. _`Natural TTS Synthesis by Conditioning WaveNet on Mel Spectrogram Predictions`: - https://arxiv.org/abs/1712.05884 - - Args: - idim (int): Dimension of the inputs. - odim: (int) Dimension of the outputs. - spk_embed_dim (int, optional): Dimension of the speaker embedding. - embed_dim (int, optional): Dimension of character embedding. - elayers (int, optional): The number of encoder blstm layers. 
- eunits (int, optional): The number of encoder blstm units. - econv_layers (int, optional): The number of encoder conv layers. - econv_filts (int, optional): Kernel size of the encoder conv filters. - econv_chans (int, optional): The number of encoder conv filter channels. - dlayers (int, optional): The number of decoder lstm layers. - dunits (int, optional): The number of decoder lstm units. - prenet_layers (int, optional): The number of prenet layers. - prenet_units (int, optional): The number of prenet units. - postnet_layers (int, optional): The number of postnet layers. - postnet_filts (int, optional): Kernel size of the postnet filters. - postnet_chans (int, optional): The number of postnet filter channels. - output_activation (str, optional): The name of the activation function for outputs. - adim (int, optional): Dimension of the mlp in attention. - aconv_chans (int, optional): The number of attention conv filter channels. - aconv_filts (int, optional): Kernel size of the attention conv filters. - cumulate_att_w (bool, optional): Whether to cumulate previous attention weight. - use_batch_norm (bool, optional): Whether to use batch normalization. - use_concate (bool, optional): Whether to concatenate encoder embedding with - decoder lstm outputs. - reduction_factor (int, optional): Reduction factor. - spk_embed_dim (int, optional): Number of speaker embedding dimensions. - spk_embed_integration_type (str, optional): How to integrate speaker embedding. - use_gst (bool, optional): Whether to use global style token. - gst_tokens (int, optional): The number of GST embeddings. - gst_heads (int, optional): The number of heads in GST multihead attention. - gst_conv_layers (int, optional): The number of conv layers in GST. - gst_conv_chans_list (Sequence[int], optional): - List of the number of channels of conv layers in GST. - gst_conv_kernel_size (int, optional): Kernel size of conv layers in GST. - gst_conv_stride (int, optional): Stride size of conv layers in GST. - gst_gru_layers (int, optional): The number of GRU layers in GST. - gst_gru_units (int, optional): The number of GRU units in GST. - dropout_rate (float, optional): Dropout rate. - zoneout_rate (float, optional): Zoneout rate. - use_masking (bool, optional): Whether to mask padded part in loss calculation. - use_weighted_masking (bool, optional): Whether to apply weighted masking in - loss calculation. - bce_pos_weight (float, optional): Weight of positive sample of stop token - (only for use_masking=True). - loss_type (str, optional): How to calculate loss. - use_guided_attn_loss (bool, optional): Whether to use guided attention loss. - guided_attn_loss_sigma (float, optional): Sigma in guided attention loss. - guided_attn_loss_lambda (float, optional): Lambda in guided attention loss.
- - """ - - def __init__( - self, - # network structure related - idim: int, - odim: int, - embed_dim: int = 512, - elayers: int = 1, - eunits: int = 512, - econv_layers: int = 3, - econv_chans: int = 512, - econv_filts: int = 5, - atype: str = "location", - adim: int = 512, - aconv_chans: int = 32, - aconv_filts: int = 15, - cumulate_att_w: bool = True, - dlayers: int = 2, - dunits: int = 1024, - prenet_layers: int = 2, - prenet_units: int = 256, - postnet_layers: int = 5, - postnet_chans: int = 512, - postnet_filts: int = 5, - output_activation: str = None, - use_batch_norm: bool = True, - use_concate: bool = True, - use_residual: bool = False, - reduction_factor: int = 1, - spk_embed_dim: int = None, - spk_embed_integration_type: str = "concat", - use_gst: bool = False, - gst_tokens: int = 10, - gst_heads: int = 4, - gst_conv_layers: int = 6, - gst_conv_chans_list: Sequence[int] = (32, 32, 64, 64, 128, 128), - gst_conv_kernel_size: int = 3, - gst_conv_stride: int = 2, - gst_gru_layers: int = 1, - gst_gru_units: int = 128, - # training related - dropout_rate: float = 0.5, - zoneout_rate: float = 0.1, - use_masking: bool = True, - use_weighted_masking: bool = False, - bce_pos_weight: float = 5.0, - loss_type: str = "L1+L2", - use_guided_attn_loss: bool = True, - guided_attn_loss_sigma: float = 0.4, - guided_attn_loss_lambda: float = 1.0, - ): - """Initialize Tacotron2 module.""" - assert check_argument_types() - super().__init__() - - # store hyperparameters - self.idim = idim - self.odim = odim - self.eos = idim - 1 - self.spk_embed_dim = spk_embed_dim - self.cumulate_att_w = cumulate_att_w - self.reduction_factor = reduction_factor - self.use_gst = use_gst - self.use_guided_attn_loss = use_guided_attn_loss - self.loss_type = loss_type - if self.spk_embed_dim is not None: - self.spk_embed_integration_type = spk_embed_integration_type - - # define activation function for the final output - if output_activation is None: - self.output_activation_fn = None - elif hasattr(F, output_activation): - self.output_activation_fn = getattr(F, output_activation) - else: - raise ValueError( - f"there is no such an activation function. 
" f"({output_activation})" - ) - - # set padding idx - padding_idx = 0 - self.padding_idx = padding_idx - - # define network modules - self.enc = Encoder( - idim=idim, - embed_dim=embed_dim, - elayers=elayers, - eunits=eunits, - econv_layers=econv_layers, - econv_chans=econv_chans, - econv_filts=econv_filts, - use_batch_norm=use_batch_norm, - use_residual=use_residual, - dropout_rate=dropout_rate, - padding_idx=padding_idx, - ) - - if self.use_gst: - self.gst = StyleEncoder( - idim=odim, # the input is mel-spectrogram - gst_tokens=gst_tokens, - gst_token_dim=eunits, - gst_heads=gst_heads, - conv_layers=gst_conv_layers, - conv_chans_list=gst_conv_chans_list, - conv_kernel_size=gst_conv_kernel_size, - conv_stride=gst_conv_stride, - gru_layers=gst_gru_layers, - gru_units=gst_gru_units, - ) - - if spk_embed_dim is None: - dec_idim = eunits - elif spk_embed_integration_type == "concat": - dec_idim = eunits + spk_embed_dim - elif spk_embed_integration_type == "add": - dec_idim = eunits - self.projection = torch.nn.Linear(self.spk_embed_dim, eunits) - else: - raise ValueError(f"{spk_embed_integration_type} is not supported.") - - if atype == "location": - att = AttLoc(dec_idim, dunits, adim, aconv_chans, aconv_filts) - elif atype == "forward": - att = AttForward(dec_idim, dunits, adim, aconv_chans, aconv_filts) - if self.cumulate_att_w: - logging.warning( - "cumulation of attention weights is disabled " - "in forward attention." - ) - self.cumulate_att_w = False - elif atype == "forward_ta": - att = AttForwardTA(dec_idim, dunits, adim, aconv_chans, aconv_filts, odim) - if self.cumulate_att_w: - logging.warning( - "cumulation of attention weights is disabled " - "in forward attention." - ) - self.cumulate_att_w = False - else: - raise NotImplementedError("Support only location or forward") - self.dec = Decoder( - idim=dec_idim, - odim=odim, - att=att, - dlayers=dlayers, - dunits=dunits, - prenet_layers=prenet_layers, - prenet_units=prenet_units, - postnet_layers=postnet_layers, - postnet_chans=postnet_chans, - postnet_filts=postnet_filts, - output_activation_fn=self.output_activation_fn, - cumulate_att_w=self.cumulate_att_w, - use_batch_norm=use_batch_norm, - use_concate=use_concate, - dropout_rate=dropout_rate, - zoneout_rate=zoneout_rate, - reduction_factor=reduction_factor, - ) - self.taco2_loss = Tacotron2Loss( - use_masking=use_masking, - use_weighted_masking=use_weighted_masking, - bce_pos_weight=bce_pos_weight, - ) - if self.use_guided_attn_loss: - self.attn_loss = GuidedAttentionLoss( - sigma=guided_attn_loss_sigma, - alpha=guided_attn_loss_lambda, - ) - - def forward( - self, - text: torch.Tensor, - text_lengths: torch.Tensor, - speech: torch.Tensor, - speech_lengths: torch.Tensor, - spembs: torch.Tensor = None, - ) -> Tuple[torch.Tensor, Dict[str, torch.Tensor], torch.Tensor]: - """Calculate forward propagation. - - Args: - text (LongTensor): Batch of padded character ids (B, Tmax). - text_lengths (LongTensor): Batch of lengths of each input batch (B,). - speech (Tensor): Batch of padded target features (B, Lmax, odim). - speech_lengths (LongTensor): Batch of the lengths of each target (B,). - spembs (Tensor, optional): Batch of speaker embeddings (B, spk_embed_dim). - - Returns: - Tensor: Loss scalar value. - Dict: Statistics to be monitored. - Tensor: Weight value. 
- - """ - text = text[:, : text_lengths.max()] # for data-parallel - speech = speech[:, : speech_lengths.max()] # for data-parallel - - batch_size = text.size(0) - - # Add eos at the last of sequence - xs = F.pad(text, [0, 1], "constant", self.padding_idx) - for i, l in enumerate(text_lengths): - xs[i, l] = self.eos - ilens = text_lengths + 1 - - ys = speech - olens = speech_lengths - - # make labels for stop prediction - labels = make_pad_mask(olens - 1).to(ys.device, ys.dtype) - labels = F.pad(labels, [0, 1], "constant", 1.0) - - # calculate tacotron2 outputs - after_outs, before_outs, logits, att_ws = self._forward( - xs, ilens, ys, olens, spembs - ) - - # modify mod part of groundtruth - if self.reduction_factor > 1: - olens = olens.new([olen - olen % self.reduction_factor for olen in olens]) - max_out = max(olens) - ys = ys[:, :max_out] - labels = labels[:, :max_out] - labels[:, -1] = 1.0 # make sure at least one frame has 1 - - # calculate taco2 loss - l1_loss, mse_loss, bce_loss = self.taco2_loss( - after_outs, before_outs, logits, ys, labels, olens - ) - if self.loss_type == "L1+L2": - loss = l1_loss + mse_loss + bce_loss - elif self.loss_type == "L1": - loss = l1_loss + bce_loss - elif self.loss_type == "L2": - loss = mse_loss + bce_loss - else: - raise ValueError(f"unknown --loss-type {self.loss_type}") - - stats = dict( - l1_loss=l1_loss.item(), - mse_loss=mse_loss.item(), - bce_loss=bce_loss.item(), - ) - - # calculate attention loss - if self.use_guided_attn_loss: - # NOTE(kan-bayashi): length of output for auto-regressive - # input will be changed when r > 1 - if self.reduction_factor > 1: - olens_in = olens.new([olen // self.reduction_factor for olen in olens]) - else: - olens_in = olens - attn_loss = self.attn_loss(att_ws, ilens, olens_in) - loss = loss + attn_loss - stats.update(attn_loss=attn_loss.item()) - - stats.update(loss=loss.item()) - - loss, stats, weight = force_gatherable((loss, stats, batch_size), loss.device) - return loss, stats, weight - - def _forward( - self, - xs: torch.Tensor, - ilens: torch.Tensor, - ys: torch.Tensor, - olens: torch.Tensor, - spembs: torch.Tensor, - ) -> Tuple[torch.Tensor, torch.Tensor, torch.Tensor]: - hs, hlens = self.enc(xs, ilens) - if self.use_gst: - style_embs = self.gst(ys) - hs = hs + style_embs.unsqueeze(1) - if self.spk_embed_dim is not None: - hs = self._integrate_with_spk_embed(hs, spembs) - return self.dec(hs, hlens, ys) - - def inference( - self, - text: torch.Tensor, - speech: torch.Tensor = None, - spembs: torch.Tensor = None, - threshold: float = 0.5, - minlenratio: float = 0.0, - maxlenratio: float = 10.0, - use_att_constraint: bool = False, - backward_window: int = 1, - forward_window: int = 3, - use_teacher_forcing: bool = False, - ) -> Tuple[torch.Tensor, torch.Tensor, torch.Tensor]: - """Generate the sequence of features given the sequences of characters. - - Args: - text (LongTensor): Input sequence of characters (T,). - speech (Tensor, optional): Feature sequence to extract style (N, idim). - spembs (Tensor, optional): Speaker embedding vector (spk_embed_dim,). - threshold (float, optional): Threshold in inference. - minlenratio (float, optional): Minimum length ratio in inference. - maxlenratio (float, optional): Maximum length ratio in inference. - use_att_constraint (bool, optional): Whether to apply attention constraint. - backward_window (int, optional): Backward window in attention constraint. - forward_window (int, optional): Forward window in attention constraint. 
- use_teacher_forcing (bool, optional): Whether to use teacher forcing. - - Returns: - Tensor: Output sequence of features (L, odim). - Tensor: Output sequence of stop probabilities (L,). - Tensor: Attention weights (L, T). - - """ - x = text - y = speech - spemb = spembs - - # add eos at the last of sequence - x = F.pad(x, [0, 1], "constant", self.eos) - - # inference with teacher forcing - if use_teacher_forcing: - assert speech is not None, "speech must be provided with teacher forcing." - - xs, ys = x.unsqueeze(0), y.unsqueeze(0) - spembs = None if spemb is None else spemb.unsqueeze(0) - ilens = x.new_tensor([xs.size(1)]).long() - olens = y.new_tensor([ys.size(1)]).long() - outs, _, _, att_ws = self._forward(xs, ilens, ys, olens, spembs) - - return outs[0], None, att_ws[0] - - # inference - h = self.enc.inference(x) - if self.use_gst: - style_emb = self.gst(y.unsqueeze(0)) - h = h + style_emb - if self.spk_embed_dim is not None: - hs, spembs = h.unsqueeze(0), spemb.unsqueeze(0) - h = self._integrate_with_spk_embed(hs, spembs)[0] - outs, probs, att_ws = self.dec.inference( - h, - threshold=threshold, - minlenratio=minlenratio, - maxlenratio=maxlenratio, - use_att_constraint=use_att_constraint, - backward_window=backward_window, - forward_window=forward_window, - ) - - return outs, probs, att_ws - - def _integrate_with_spk_embed( - self, hs: torch.Tensor, spembs: torch.Tensor - ) -> torch.Tensor: - """Integrate speaker embedding with hidden states. - - Args: - hs (Tensor): Batch of hidden state sequences (B, Tmax, eunits). - spembs (Tensor): Batch of speaker embeddings (B, spk_embed_dim). - - Returns: - Tensor: Batch of integrated hidden state sequences (B, Tmax, eunits) if - integration_type is "add" else (B, Tmax, eunits + spk_embed_dim). - - """ - if self.spk_embed_integration_type == "add": - # apply projection and then add to hidden states - spembs = self.projection(F.normalize(spembs)) - hs = hs + spembs.unsqueeze(1) - elif self.spk_embed_integration_type == "concat": - # concat hidden states with spk embeds - spembs = F.normalize(spembs).unsqueeze(1).expand(-1, hs.size(1), -1) - hs = torch.cat([hs, spembs], dim=-1) - else: - raise NotImplementedError("support only add or concat.") - - return hs diff --git a/spaces/segments/panoptic-segment-anything-api/GroundingDINO/groundingdino/config/GroundingDINO_SwinT_OGC.py b/spaces/segments/panoptic-segment-anything-api/GroundingDINO/groundingdino/config/GroundingDINO_SwinT_OGC.py deleted file mode 100644 index 9158d5f6260ec74bded95377d382387430d7cd70..0000000000000000000000000000000000000000 --- a/spaces/segments/panoptic-segment-anything-api/GroundingDINO/groundingdino/config/GroundingDINO_SwinT_OGC.py +++ /dev/null @@ -1,43 +0,0 @@ -batch_size = 1 -modelname = "groundingdino" -backbone = "swin_T_224_1k" -position_embedding = "sine" -pe_temperatureH = 20 -pe_temperatureW = 20 -return_interm_indices = [1, 2, 3] -backbone_freeze_keywords = None -enc_layers = 6 -dec_layers = 6 -pre_norm = False -dim_feedforward = 2048 -hidden_dim = 256 -dropout = 0.0 -nheads = 8 -num_queries = 900 -query_dim = 4 -num_patterns = 0 -num_feature_levels = 4 -enc_n_points = 4 -dec_n_points = 4 -two_stage_type = "standard" -two_stage_bbox_embed_share = False -two_stage_class_embed_share = False -transformer_activation = "relu" -dec_pred_bbox_embed_share = True -dn_box_noise_scale = 1.0 -dn_label_noise_ratio = 0.5 -dn_label_coef = 1.0 -dn_bbox_coef = 1.0 -embed_init_tgt = True -dn_labelbook_size = 2000 -max_text_len = 256 -text_encoder_type = "bert-base-uncased" 
-use_text_enhancer = True -use_fusion_layer = True -use_checkpoint = True -use_transformer_ckpt = True -use_text_cross_attention = True -text_dropout = 0.0 -fusion_dropout = 0.0 -fusion_droppath = 0.1 -sub_sentence_present = True diff --git a/spaces/segments/panoptic-segment-anything/GroundingDINO/groundingdino/util/__init__.py b/spaces/segments/panoptic-segment-anything/GroundingDINO/groundingdino/util/__init__.py deleted file mode 100644 index 168f9979a4623806934b0ff1102ac166704e7dec..0000000000000000000000000000000000000000 --- a/spaces/segments/panoptic-segment-anything/GroundingDINO/groundingdino/util/__init__.py +++ /dev/null @@ -1 +0,0 @@ -# Copyright (c) Facebook, Inc. and its affiliates. All Rights Reserved diff --git a/spaces/shasaurabh/bird_forest/README.md b/spaces/shasaurabh/bird_forest/README.md deleted file mode 100644 index f6cdf95c1cbfeb1066527b832d21d157730adf4e..0000000000000000000000000000000000000000 --- a/spaces/shasaurabh/bird_forest/README.md +++ /dev/null @@ -1,13 +0,0 @@ ---- -title: Bird Forest -emoji: 🦀 -colorFrom: indigo -colorTo: green -sdk: gradio -sdk_version: 3.9 -app_file: app.py -pinned: false -license: apache-2.0 ---- - -Check out the configuration reference at https://huggingface.co/docs/hub/spaces-config-reference diff --git a/spaces/shigel/recipe/README.md b/spaces/shigel/recipe/README.md deleted file mode 100644 index 46e6ba37a753c8aec0d644f5c164a8ea5e77f744..0000000000000000000000000000000000000000 --- a/spaces/shigel/recipe/README.md +++ /dev/null @@ -1,13 +0,0 @@ ---- -title: 新レシピ考案AI(β) -emoji: 🌖 -colorFrom: yellow -colorTo: red -sdk: gradio -sdk_version: 3.19.1 -app_file: app.py -pinned: false -duplicated_from: shigel/aiemo ---- - -Check out the configuration reference at https://huggingface.co/docs/hub/spaces-config-reference \ No newline at end of file diff --git a/spaces/shikunl/prismer/prismer/experts/segmentation/mask2former/data/datasets/register_ade20k_instance.py b/spaces/shikunl/prismer/prismer/experts/segmentation/mask2former/data/datasets/register_ade20k_instance.py deleted file mode 100644 index 1ded7095cde756dfa1d94c25b2f7d1d2e5da6313..0000000000000000000000000000000000000000 --- a/spaces/shikunl/prismer/prismer/experts/segmentation/mask2former/data/datasets/register_ade20k_instance.py +++ /dev/null @@ -1,53 +0,0 @@ -# Copyright (c) Facebook, Inc. and its affiliates. 
-import json -import logging -import numpy as np -import os -from PIL import Image - -from detectron2.data import DatasetCatalog, MetadataCatalog -from detectron2.data.datasets.coco import load_coco_json, register_coco_instances -from detectron2.utils.file_io import PathManager - -ADE_CATEGORIES = [{'id': 7, 'name': 'bed'}, {'id': 8, 'name': 'windowpane'}, {'id': 10, 'name': 'cabinet'}, {'id': 12, 'name': 'person'}, {'id': 14, 'name': 'door'}, {'id': 15, 'name': 'table'}, {'id': 18, 'name': 'curtain'}, {'id': 19, 'name': 'chair'}, {'id': 20, 'name': 'car'}, {'id': 22, 'name': 'painting'}, {'id': 23, 'name': 'sofa'}, {'id': 24, 'name': 'shelf'}, {'id': 27, 'name': 'mirror'}, {'id': 30, 'name': 'armchair'}, {'id': 31, 'name': 'seat'}, {'id': 32, 'name': 'fence'}, {'id': 33, 'name': 'desk'}, {'id': 35, 'name': 'wardrobe'}, {'id': 36, 'name': 'lamp'}, {'id': 37, 'name': 'bathtub'}, {'id': 38, 'name': 'railing'}, {'id': 39, 'name': 'cushion'}, {'id': 41, 'name': 'box'}, {'id': 42, 'name': 'column'}, {'id': 43, 'name': 'signboard'}, {'id': 44, 'name': 'chest of drawers'}, {'id': 45, 'name': 'counter'}, {'id': 47, 'name': 'sink'}, {'id': 49, 'name': 'fireplace'}, {'id': 50, 'name': 'refrigerator'}, {'id': 53, 'name': 'stairs'}, {'id': 55, 'name': 'case'}, {'id': 56, 'name': 'pool table'}, {'id': 57, 'name': 'pillow'}, {'id': 58, 'name': 'screen door'}, {'id': 62, 'name': 'bookcase'}, {'id': 64, 'name': 'coffee table'}, {'id': 65, 'name': 'toilet'}, {'id': 66, 'name': 'flower'}, {'id': 67, 'name': 'book'}, {'id': 69, 'name': 'bench'}, {'id': 70, 'name': 'countertop'}, {'id': 71, 'name': 'stove'}, {'id': 72, 'name': 'palm'}, {'id': 73, 'name': 'kitchen island'}, {'id': 74, 'name': 'computer'}, {'id': 75, 'name': 'swivel chair'}, {'id': 76, 'name': 'boat'}, {'id': 78, 'name': 'arcade machine'}, {'id': 80, 'name': 'bus'}, {'id': 81, 'name': 'towel'}, {'id': 82, 'name': 'light'}, {'id': 83, 'name': 'truck'}, {'id': 85, 'name': 'chandelier'}, {'id': 86, 'name': 'awning'}, {'id': 87, 'name': 'streetlight'}, {'id': 88, 'name': 'booth'}, {'id': 89, 'name': 'television receiver'}, {'id': 90, 'name': 'airplane'}, {'id': 92, 'name': 'apparel'}, {'id': 93, 'name': 'pole'}, {'id': 95, 'name': 'bannister'}, {'id': 97, 'name': 'ottoman'}, {'id': 98, 'name': 'bottle'}, {'id': 102, 'name': 'van'}, {'id': 103, 'name': 'ship'}, {'id': 104, 'name': 'fountain'}, {'id': 107, 'name': 'washer'}, {'id': 108, 'name': 'plaything'}, {'id': 110, 'name': 'stool'}, {'id': 111, 'name': 'barrel'}, {'id': 112, 'name': 'basket'}, {'id': 115, 'name': 'bag'}, {'id': 116, 'name': 'minibike'}, {'id': 118, 'name': 'oven'}, {'id': 119, 'name': 'ball'}, {'id': 120, 'name': 'food'}, {'id': 121, 'name': 'step'}, {'id': 123, 'name': 'trade name'}, {'id': 124, 'name': 'microwave'}, {'id': 125, 'name': 'pot'}, {'id': 126, 'name': 'animal'}, {'id': 127, 'name': 'bicycle'}, {'id': 129, 'name': 'dishwasher'}, {'id': 130, 'name': 'screen'}, {'id': 132, 'name': 'sculpture'}, {'id': 133, 'name': 'hood'}, {'id': 134, 'name': 'sconce'}, {'id': 135, 'name': 'vase'}, {'id': 136, 'name': 'traffic light'}, {'id': 137, 'name': 'tray'}, {'id': 138, 'name': 'ashcan'}, {'id': 139, 'name': 'fan'}, {'id': 142, 'name': 'plate'}, {'id': 143, 'name': 'monitor'}, {'id': 144, 'name': 'bulletin board'}, {'id': 146, 'name': 'radiator'}, {'id': 147, 'name': 'glass'}, {'id': 148, 'name': 'clock'}, {'id': 149, 'name': 'flag'}] - - -_PREDEFINED_SPLITS = { - # point annotations without masks - "ade20k_instance_train": ( - "ADEChallengeData2016/images/training", - 
"ADEChallengeData2016/ade20k_instance_train.json", - ), - "ade20k_instance_val": ( - "ADEChallengeData2016/images/validation", - "ADEChallengeData2016/ade20k_instance_val.json", - ), -} - - -def _get_ade_instances_meta(): - thing_ids = [k["id"] for k in ADE_CATEGORIES] - assert len(thing_ids) == 100, len(thing_ids) - # Mapping from the incontiguous ADE category id to an id in [0, 99] - thing_dataset_id_to_contiguous_id = {k: i for i, k in enumerate(thing_ids)} - thing_classes = [k["name"] for k in ADE_CATEGORIES] - ret = { - "thing_dataset_id_to_contiguous_id": thing_dataset_id_to_contiguous_id, - "thing_classes": thing_classes, - } - return ret - - -def register_all_ade20k_instance(root): - for key, (image_root, json_file) in _PREDEFINED_SPLITS.items(): - # Assume pre-defined datasets live in `./datasets`. - register_coco_instances( - key, - _get_ade_instances_meta(), - os.path.join(root, json_file) if "://" not in json_file else json_file, - os.path.join(root, image_root), - ) - - -_root = os.getenv("DETECTRON2_DATASETS", "datasets") -register_all_ade20k_instance(_root) diff --git a/spaces/shikunl/prismer/prismer/experts/segmentation/mask2former/modeling/pixel_decoder/fpn.py b/spaces/shikunl/prismer/prismer/experts/segmentation/mask2former/modeling/pixel_decoder/fpn.py deleted file mode 100644 index 7df65a178ce4a105d5c803ff5aa18aa56c44d374..0000000000000000000000000000000000000000 --- a/spaces/shikunl/prismer/prismer/experts/segmentation/mask2former/modeling/pixel_decoder/fpn.py +++ /dev/null @@ -1,312 +0,0 @@ -# Copyright (c) Facebook, Inc. and its affiliates. -import logging -import numpy as np -from typing import Callable, Dict, List, Optional, Tuple, Union - -import fvcore.nn.weight_init as weight_init -import torch -from torch import nn -from torch.nn import functional as F -from torch.nn.init import xavier_uniform_, constant_, uniform_, normal_ -from torch.cuda.amp import autocast - -from detectron2.config import configurable -from detectron2.layers import Conv2d, DeformConv, ShapeSpec, get_norm -from detectron2.modeling import SEM_SEG_HEADS_REGISTRY - -from ..transformer_decoder.position_encoding import PositionEmbeddingSine -from ..transformer_decoder.transformer import TransformerEncoder, TransformerEncoderLayer, _get_clones, _get_activation_fn - - -def build_pixel_decoder(cfg, input_shape): - """ - Build a pixel decoder from `cfg.MODEL.MASK_FORMER.PIXEL_DECODER_NAME`. - """ - name = cfg.MODEL.SEM_SEG_HEAD.PIXEL_DECODER_NAME - model = SEM_SEG_HEADS_REGISTRY.get(name)(cfg, input_shape) - forward_features = getattr(model, "forward_features", None) - if not callable(forward_features): - raise ValueError( - "Only SEM_SEG_HEADS with forward_features method can be used as pixel decoder. " - f"Please implement forward_features for {name} to only return mask features." - ) - return model - - -# This is a modified FPN decoder. -@SEM_SEG_HEADS_REGISTRY.register() -class BasePixelDecoder(nn.Module): - @configurable - def __init__( - self, - input_shape: Dict[str, ShapeSpec], - *, - conv_dim: int, - mask_dim: int, - norm: Optional[Union[str, Callable]] = None, - ): - """ - NOTE: this interface is experimental. - Args: - input_shape: shapes (channels and stride) of the input features - conv_dims: number of output channels for the intermediate conv layers. - mask_dim: number of output channels for the final conv layer. 
- norm (str or callable): normalization for all conv layers - """ - super().__init__() - - input_shape = sorted(input_shape.items(), key=lambda x: x[1].stride) - self.in_features = [k for k, v in input_shape] # starting from "res2" to "res5" - feature_channels = [v.channels for k, v in input_shape] - - lateral_convs = [] - output_convs = [] - - use_bias = norm == "" - for idx, in_channels in enumerate(feature_channels): - if idx == len(self.in_features) - 1: - output_norm = get_norm(norm, conv_dim) - output_conv = Conv2d( - in_channels, - conv_dim, - kernel_size=3, - stride=1, - padding=1, - bias=use_bias, - norm=output_norm, - activation=F.relu, - ) - weight_init.c2_xavier_fill(output_conv) - self.add_module("layer_{}".format(idx + 1), output_conv) - - lateral_convs.append(None) - output_convs.append(output_conv) - else: - lateral_norm = get_norm(norm, conv_dim) - output_norm = get_norm(norm, conv_dim) - - lateral_conv = Conv2d( - in_channels, conv_dim, kernel_size=1, bias=use_bias, norm=lateral_norm - ) - output_conv = Conv2d( - conv_dim, - conv_dim, - kernel_size=3, - stride=1, - padding=1, - bias=use_bias, - norm=output_norm, - activation=F.relu, - ) - weight_init.c2_xavier_fill(lateral_conv) - weight_init.c2_xavier_fill(output_conv) - self.add_module("adapter_{}".format(idx + 1), lateral_conv) - self.add_module("layer_{}".format(idx + 1), output_conv) - - lateral_convs.append(lateral_conv) - output_convs.append(output_conv) - # Place convs into top-down order (from low to high resolution) - # to make the top-down computation in forward clearer. - self.lateral_convs = lateral_convs[::-1] - self.output_convs = output_convs[::-1] - - self.mask_dim = mask_dim - self.mask_features = Conv2d( - conv_dim, - mask_dim, - kernel_size=3, - stride=1, - padding=1, - ) - weight_init.c2_xavier_fill(self.mask_features) - - self.maskformer_num_feature_levels = 3 # always use 3 scales - - @classmethod - def from_config(cls, cfg, input_shape: Dict[str, ShapeSpec]): - ret = {} - ret["input_shape"] = { - k: v for k, v in input_shape.items() if k in cfg.MODEL.SEM_SEG_HEAD.IN_FEATURES - } - ret["conv_dim"] = cfg.MODEL.SEM_SEG_HEAD.CONVS_DIM - ret["mask_dim"] = cfg.MODEL.SEM_SEG_HEAD.MASK_DIM - ret["norm"] = cfg.MODEL.SEM_SEG_HEAD.NORM - return ret - - def forward_features(self, features): - multi_scale_features = [] - num_cur_levels = 0 - # Reverse feature maps into top-down order (from low to high resolution) - for idx, f in enumerate(self.in_features[::-1]): - x = features[f] - lateral_conv = self.lateral_convs[idx] - output_conv = self.output_convs[idx] - if lateral_conv is None: - y = output_conv(x) - else: - cur_fpn = lateral_conv(x) - # Following FPN implementation, we use nearest upsampling here - y = cur_fpn + F.interpolate(y, size=cur_fpn.shape[-2:], mode="nearest") - y = output_conv(y) - if num_cur_levels < self.maskformer_num_feature_levels: - multi_scale_features.append(y) - num_cur_levels += 1 - return self.mask_features(y), None, multi_scale_features - - def forward(self, features, targets=None): - logger = logging.getLogger(__name__) - logger.warning("Calling forward() may cause unpredicted behavior of PixelDecoder module.") - return self.forward_features(features) - - -class TransformerEncoderOnly(nn.Module): - def __init__( - self, - d_model=512, - nhead=8, - num_encoder_layers=6, - dim_feedforward=2048, - dropout=0.1, - activation="relu", - normalize_before=False, - ): - super().__init__() - - encoder_layer = TransformerEncoderLayer( - d_model, nhead, dim_feedforward, dropout, activation, 
normalize_before - ) - encoder_norm = nn.LayerNorm(d_model) if normalize_before else None - self.encoder = TransformerEncoder(encoder_layer, num_encoder_layers, encoder_norm) - - self._reset_parameters() - - self.d_model = d_model - self.nhead = nhead - - def _reset_parameters(self): - for p in self.parameters(): - if p.dim() > 1: - nn.init.xavier_uniform_(p) - - def forward(self, src, mask, pos_embed): - # flatten NxCxHxW to HWxNxC - bs, c, h, w = src.shape - src = src.flatten(2).permute(2, 0, 1) - pos_embed = pos_embed.flatten(2).permute(2, 0, 1) - if mask is not None: - mask = mask.flatten(1) - - memory = self.encoder(src, src_key_padding_mask=mask, pos=pos_embed) - return memory.permute(1, 2, 0).view(bs, c, h, w) - - -# This is a modified FPN decoder with extra Transformer encoder that processes the lowest-resolution feature map. -@SEM_SEG_HEADS_REGISTRY.register() -class TransformerEncoderPixelDecoder(BasePixelDecoder): - @configurable - def __init__( - self, - input_shape: Dict[str, ShapeSpec], - *, - transformer_dropout: float, - transformer_nheads: int, - transformer_dim_feedforward: int, - transformer_enc_layers: int, - transformer_pre_norm: bool, - conv_dim: int, - mask_dim: int, - norm: Optional[Union[str, Callable]] = None, - ): - """ - NOTE: this interface is experimental. - Args: - input_shape: shapes (channels and stride) of the input features - transformer_dropout: dropout probability in transformer - transformer_nheads: number of heads in transformer - transformer_dim_feedforward: dimension of feedforward network - transformer_enc_layers: number of transformer encoder layers - transformer_pre_norm: whether to use pre-layernorm or not - conv_dims: number of output channels for the intermediate conv layers. - mask_dim: number of output channels for the final conv layer. 
- norm (str or callable): normalization for all conv layers - """ - super().__init__(input_shape, conv_dim=conv_dim, mask_dim=mask_dim, norm=norm) - - input_shape = sorted(input_shape.items(), key=lambda x: x[1].stride) - self.in_features = [k for k, v in input_shape] # starting from "res2" to "res5" - feature_strides = [v.stride for k, v in input_shape] - feature_channels = [v.channels for k, v in input_shape] - - in_channels = feature_channels[len(self.in_features) - 1] - self.input_proj = Conv2d(in_channels, conv_dim, kernel_size=1) - weight_init.c2_xavier_fill(self.input_proj) - self.transformer = TransformerEncoderOnly( - d_model=conv_dim, - dropout=transformer_dropout, - nhead=transformer_nheads, - dim_feedforward=transformer_dim_feedforward, - num_encoder_layers=transformer_enc_layers, - normalize_before=transformer_pre_norm, - ) - N_steps = conv_dim // 2 - self.pe_layer = PositionEmbeddingSine(N_steps, normalize=True) - - # update layer - use_bias = norm == "" - output_norm = get_norm(norm, conv_dim) - output_conv = Conv2d( - conv_dim, - conv_dim, - kernel_size=3, - stride=1, - padding=1, - bias=use_bias, - norm=output_norm, - activation=F.relu, - ) - weight_init.c2_xavier_fill(output_conv) - delattr(self, "layer_{}".format(len(self.in_features))) - self.add_module("layer_{}".format(len(self.in_features)), output_conv) - self.output_convs[0] = output_conv - - @classmethod - def from_config(cls, cfg, input_shape: Dict[str, ShapeSpec]): - ret = super().from_config(cfg, input_shape) - ret["transformer_dropout"] = cfg.MODEL.MASK_FORMER.DROPOUT - ret["transformer_nheads"] = cfg.MODEL.MASK_FORMER.NHEADS - ret["transformer_dim_feedforward"] = cfg.MODEL.MASK_FORMER.DIM_FEEDFORWARD - ret[ - "transformer_enc_layers" - ] = cfg.MODEL.SEM_SEG_HEAD.TRANSFORMER_ENC_LAYERS # a separate config - ret["transformer_pre_norm"] = cfg.MODEL.MASK_FORMER.PRE_NORM - return ret - - def forward_features(self, features): - multi_scale_features = [] - num_cur_levels = 0 - # Reverse feature maps into top-down order (from low to high resolution) - for idx, f in enumerate(self.in_features[::-1]): - x = features[f] - lateral_conv = self.lateral_convs[idx] - output_conv = self.output_convs[idx] - if lateral_conv is None: - transformer = self.input_proj(x) - pos = self.pe_layer(x) - transformer = self.transformer(transformer, None, pos) - y = output_conv(transformer) - # save intermediate feature as input to Transformer decoder - transformer_encoder_features = transformer - else: - cur_fpn = lateral_conv(x) - # Following FPN implementation, we use nearest upsampling here - y = cur_fpn + F.interpolate(y, size=cur_fpn.shape[-2:], mode="nearest") - y = output_conv(y) - if num_cur_levels < self.maskformer_num_feature_levels: - multi_scale_features.append(y) - num_cur_levels += 1 - return self.mask_features(y), transformer_encoder_features, multi_scale_features - - def forward(self, features, targets=None): - logger = logging.getLogger(__name__) - logger.warning("Calling forward() may cause unpredicted behavior of PixelDecoder module.") - return self.forward_features(features) diff --git a/spaces/shiwan10000/CodeFormer/CodeFormer/basicsr/models/__init__.py b/spaces/shiwan10000/CodeFormer/CodeFormer/basicsr/models/__init__.py deleted file mode 100644 index 00bde45f003698a5b15d3517ae47b59ef1d86e0c..0000000000000000000000000000000000000000 --- a/spaces/shiwan10000/CodeFormer/CodeFormer/basicsr/models/__init__.py +++ /dev/null @@ -1,30 +0,0 @@ -import importlib -from copy import deepcopy -from os import path as osp - -from 
basicsr.utils import get_root_logger, scandir -from basicsr.utils.registry import MODEL_REGISTRY - -__all__ = ['build_model'] - -# automatically scan and import model modules for registry -# scan all the files under the 'models' folder and collect files ending with -# '_model.py' -model_folder = osp.dirname(osp.abspath(__file__)) -model_filenames = [osp.splitext(osp.basename(v))[0] for v in scandir(model_folder) if v.endswith('_model.py')] -# import all the model modules -_model_modules = [importlib.import_module(f'basicsr.models.{file_name}') for file_name in model_filenames] - - -def build_model(opt): - """Build model from options. - - Args: - opt (dict): Configuration. It must constain: - model_type (str): Model type. - """ - opt = deepcopy(opt) - model = MODEL_REGISTRY.get(opt['model_type'])(opt) - logger = get_root_logger() - logger.info(f'Model [{model.__class__.__name__}] is created.') - return model diff --git a/spaces/shuvojitkoley007/mrs-shuvojit-koley/README.md b/spaces/shuvojitkoley007/mrs-shuvojit-koley/README.md deleted file mode 100644 index 5f80f8dad2fa400d9e938c681e7f7caa5d01be2f..0000000000000000000000000000000000000000 --- a/spaces/shuvojitkoley007/mrs-shuvojit-koley/README.md +++ /dev/null @@ -1,13 +0,0 @@ ---- -title: Mrs Shuvojit Koley -emoji: 🐢 -colorFrom: pink -colorTo: pink -sdk: streamlit -sdk_version: 1.17.0 -app_file: app.py -pinned: false -license: other ---- - -Check out the configuration reference at https://huggingface.co/docs/hub/spaces-config-reference diff --git a/spaces/silencewing/server/youyou/.history/math_20230613230957.html b/spaces/silencewing/server/youyou/.history/math_20230613230957.html deleted file mode 100644 index 93f7761baf1335623e64126c718498e15c32fb58..0000000000000000000000000000000000000000 --- a/spaces/silencewing/server/youyou/.history/math_20230613230957.html +++ /dev/null @@ -1,226 +0,0 @@ - - - - - - - - - - Document - - - - -
      - - - - - - - - - - - - - - - - - - - - - - - - -
Question
Answer
Correct/Incorrect
Score
      -
      - - - - diff --git a/spaces/simpie28/VITS-Umamusume-voice-synthesizer/utils.py b/spaces/simpie28/VITS-Umamusume-voice-synthesizer/utils.py deleted file mode 100644 index 9794e0fc3463a5e8fad05c037cce64683059a6d3..0000000000000000000000000000000000000000 --- a/spaces/simpie28/VITS-Umamusume-voice-synthesizer/utils.py +++ /dev/null @@ -1,226 +0,0 @@ -import os -import glob -import sys -import argparse -import logging -import json -import subprocess -import numpy as np -from scipy.io.wavfile import read -import torch - -MATPLOTLIB_FLAG = False - -logging.basicConfig(stream=sys.stdout, level=logging.ERROR) -logger = logging - - -def load_checkpoint(checkpoint_path, model, optimizer=None): - assert os.path.isfile(checkpoint_path) - checkpoint_dict = torch.load(checkpoint_path, map_location='cpu') - iteration = checkpoint_dict['iteration'] - learning_rate = checkpoint_dict['learning_rate'] - if optimizer is not None: - optimizer.load_state_dict(checkpoint_dict['optimizer']) - saved_state_dict = checkpoint_dict['model'] - if hasattr(model, 'module'): - state_dict = model.module.state_dict() - else: - state_dict = model.state_dict() - new_state_dict = {} - for k, v in state_dict.items(): - try: - new_state_dict[k] = saved_state_dict[k] - except: - logger.info("%s is not in the checkpoint" % k) - new_state_dict[k] = v - if hasattr(model, 'module'): - model.module.load_state_dict(new_state_dict) - else: - model.load_state_dict(new_state_dict) - logger.info("Loaded checkpoint '{}' (iteration {})".format( - checkpoint_path, iteration)) - return model, optimizer, learning_rate, iteration - - -def plot_spectrogram_to_numpy(spectrogram): - global MATPLOTLIB_FLAG - if not MATPLOTLIB_FLAG: - import matplotlib - matplotlib.use("Agg") - MATPLOTLIB_FLAG = True - mpl_logger = logging.getLogger('matplotlib') - mpl_logger.setLevel(logging.WARNING) - import matplotlib.pylab as plt - import numpy as np - - fig, ax = plt.subplots(figsize=(10, 2)) - im = ax.imshow(spectrogram, aspect="auto", origin="lower", - interpolation='none') - plt.colorbar(im, ax=ax) - plt.xlabel("Frames") - plt.ylabel("Channels") - plt.tight_layout() - - fig.canvas.draw() - data = np.fromstring(fig.canvas.tostring_rgb(), dtype=np.uint8, sep='') - data = data.reshape(fig.canvas.get_width_height()[::-1] + (3,)) - plt.close() - return data - - -def plot_alignment_to_numpy(alignment, info=None): - global MATPLOTLIB_FLAG - if not MATPLOTLIB_FLAG: - import matplotlib - matplotlib.use("Agg") - MATPLOTLIB_FLAG = True - mpl_logger = logging.getLogger('matplotlib') - mpl_logger.setLevel(logging.WARNING) - import matplotlib.pylab as plt - import numpy as np - - fig, ax = plt.subplots(figsize=(6, 4)) - im = ax.imshow(alignment.transpose(), aspect='auto', origin='lower', - interpolation='none') - fig.colorbar(im, ax=ax) - xlabel = 'Decoder timestep' - if info is not None: - xlabel += '\n\n' + info - plt.xlabel(xlabel) - plt.ylabel('Encoder timestep') - plt.tight_layout() - - fig.canvas.draw() - data = np.fromstring(fig.canvas.tostring_rgb(), dtype=np.uint8, sep='') - data = data.reshape(fig.canvas.get_width_height()[::-1] + (3,)) - plt.close() - return data - - -def load_wav_to_torch(full_path): - sampling_rate, data = read(full_path) - return torch.FloatTensor(data.astype(np.float32)), sampling_rate - - -def load_filepaths_and_text(filename, split="|"): - with open(filename, encoding='utf-8') as f: - filepaths_and_text = [line.strip().split(split) for line in f] - return filepaths_and_text - - -def get_hparams(init=True): - parser = 
argparse.ArgumentParser() - parser.add_argument('-c', '--config', type=str, default="./configs/base.json", - help='JSON file for configuration') - parser.add_argument('-m', '--model', type=str, required=True, - help='Model name') - - args = parser.parse_args() - model_dir = os.path.join("./logs", args.model) - - if not os.path.exists(model_dir): - os.makedirs(model_dir) - - config_path = args.config - config_save_path = os.path.join(model_dir, "config.json") - if init: - with open(config_path, "r") as f: - data = f.read() - with open(config_save_path, "w") as f: - f.write(data) - else: - with open(config_save_path, "r") as f: - data = f.read() - config = json.loads(data) - - hparams = HParams(**config) - hparams.model_dir = model_dir - return hparams - - -def get_hparams_from_dir(model_dir): - config_save_path = os.path.join(model_dir, "config.json") - with open(config_save_path, "r") as f: - data = f.read() - config = json.loads(data) - - hparams = HParams(**config) - hparams.model_dir = model_dir - return hparams - - -def get_hparams_from_file(config_path): - with open(config_path, "r", encoding="utf-8") as f: - data = f.read() - config = json.loads(data) - - hparams = HParams(**config) - return hparams - - -def check_git_hash(model_dir): - source_dir = os.path.dirname(os.path.realpath(__file__)) - if not os.path.exists(os.path.join(source_dir, ".git")): - logger.warn("{} is not a git repository, therefore hash value comparison will be ignored.".format( - source_dir - )) - return - - cur_hash = subprocess.getoutput("git rev-parse HEAD") - - path = os.path.join(model_dir, "githash") - if os.path.exists(path): - saved_hash = open(path).read() - if saved_hash != cur_hash: - logger.warn("git hash values are different. {}(saved) != {}(current)".format( - saved_hash[:8], cur_hash[:8])) - else: - open(path, "w").write(cur_hash) - - -def get_logger(model_dir, filename="train.log"): - global logger - logger = logging.getLogger(os.path.basename(model_dir)) - logger.setLevel(logging.DEBUG) - - formatter = logging.Formatter("%(asctime)s\t%(name)s\t%(levelname)s\t%(message)s") - if not os.path.exists(model_dir): - os.makedirs(model_dir) - h = logging.FileHandler(os.path.join(model_dir, filename)) - h.setLevel(logging.DEBUG) - h.setFormatter(formatter) - logger.addHandler(h) - return logger - - -class HParams(): - def __init__(self, **kwargs): - for k, v in kwargs.items(): - if type(v) == dict: - v = HParams(**v) - self[k] = v - - def keys(self): - return self.__dict__.keys() - - def items(self): - return self.__dict__.items() - - def values(self): - return self.__dict__.values() - - def __len__(self): - return len(self.__dict__) - - def __getitem__(self, key): - return getattr(self, key) - - def __setitem__(self, key, value): - return setattr(self, key, value) - - def __contains__(self, key): - return key in self.__dict__ - - def __repr__(self): - return self.__dict__.__repr__() \ No newline at end of file diff --git a/spaces/simple0urra/skops-model-card-creator-2a23515a-d54e-4804-b365-27ed6e938735/example/4x4 Off Road Rally 9 MOD APK Everything You Need to Know.md b/spaces/simple0urra/skops-model-card-creator-2a23515a-d54e-4804-b365-27ed6e938735/example/4x4 Off Road Rally 9 MOD APK Everything You Need to Know.md deleted file mode 100644 index a4585145af3eb51d9a503efae5df12ce85698f9d..0000000000000000000000000000000000000000 --- a/spaces/simple0urra/skops-model-card-creator-2a23515a-d54e-4804-b365-27ed6e938735/example/4x4 Off Road Rally 9 MOD APK Everything You Need to Know.md +++ /dev/null @@ -1,99 
+0,0 @@ - -

      4x4 Off Road Rally 9 Mod APK: A Thrilling Off-Road Racing Game

      -

      If you are a fan of off-road racing games, you will love 4x4 Off Road Rally 9, a realistic and immersive game that will test your driving skills on various terrains and environments. In this game, you will have to overcome mud, water, snow, rocks, and other obstacles as you race against time and other drivers. You will also have to customize and upgrade your 4x4 vehicle to suit your preferences and needs. But what if you want to enjoy the game without any limitations or restrictions? That's where 4x4 Off Road Rally 9 Mod APK comes in handy. In this article, we will tell you what this modded version of the game offers, how to download and install it, and some tips and tricks to help you master the game.

      -

      What is 4x4 Off Road Rally 9?

      -

4x4 Off Road Rally 9 is a racing game developed by Electronic Hand, a studio that specializes in off-road games. The game is available for Android devices and has over 10 million downloads on the Google Play Store. It features stunning graphics, realistic physics, a range of 4x4 vehicles, and a variety of off-road racing challenges.

      -

DOWNLOAD --->>> https://ssurll.com/2uNUhI
      -

      Features of the game

      -

      Some of the features of 4x4 Off Road Rally 9 are:

• Different modes of gameplay, such as career mode, free mode, time trial mode, and multiplayer mode.
• A variety of 4x4 vehicles with different driving characteristics, such as SUVs, trucks, pickups, jeeps, and more.
• A wide range of terrains and environments to explore, such as forests, deserts, mountains, swamps, and more.
• A realistic driving physics system that simulates the effects of mud, water, snow, rocks, and other obstacles on your vehicle.
• An extensive tuning and customization system that allows you to modify your vehicle's engine, suspension, tires, wheels, paint, stickers, and more.
• A simple and convenient in-game map that shows you the route and the checkpoints.
• A real car sound system that enhances the immersion and realism of the game.

      How to play the game

      -

      The gameplay of 4x4 Off Road Rally 9 is simple but challenging. You have to use the on-screen buttons to control your vehicle's steering, acceleration, braking, and gear shifting. You also have to use the camera button to change the view angle and the map button to see the route. Your goal is to reach the finish line as fast as possible without getting stuck or damaged. You can also compete with other players online or offline in multiplayer mode. You can earn coins and rewards by completing races and challenges. You can use these coins to buy new vehicles or upgrade your existing ones.

      -

      Why download 4x4 Off Road Rally 9 Mod APK?

      -

      Although 4x4 Off Road Rally 9 is a fun and addictive game, it also has some drawbacks. For example, some vehicles and features are locked behind a paywall or require a lot of grinding. You also have to watch ads to get extra coins or rewards. Moreover, some levels are too hard or frustrating to complete. That's why many players prefer to download 4x4 Off Road Rally 9 Mod APK instead of the original version.

      -

      Benefits of the modded version

      -

      Some of the benefits of downloading 4x4 Off Road Rally 9 Mod APK are:

• You get unlimited coins and gems to buy and upgrade any vehicle you want.
• You get all the vehicles and features unlocked from the start.
• You get to enjoy the game without any ads or interruptions.
• You get access to some exclusive features and options that are not available in the original version.

      How to download and install the mod APK

      -

      Downloading and installing 4x4 Off Road Rally 9 Mod APK is easy and safe. You just have to follow these steps:

1. Click on the link below to download the mod APK file.
2. Allow your device to install apps from unknown sources in the settings.
3. Locate and tap on the downloaded file to start the installation process (or sideload it from a computer, as sketched after the download link below).
4. Follow the instructions on the screen to complete the installation.
5. Launch the game and enjoy!

      Download 4x4 Off Road Rally 9 Mod APK here
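For readers who are comfortable with a command line, the same APK can also be sideloaded from a computer over USB using adb (the Android Debug Bridge from Google's platform-tools). This is only a minimal sketch: the APK file name below is hypothetical, and it assumes USB debugging is enabled on the phone and platform-tools is installed on the computer.

# Check that the phone is connected and authorized for USB debugging
adb devices
# Install the downloaded APK (file name is hypothetical)
adb install 4x4-off-road-rally-9-mod.apk
# If a previous build is already installed, reinstall in place, keeping app data
adb install -r 4x4-off-road-rally-9-mod.apk

Note that adb install does not bypass Android's signature checks: if the installed app was signed with a different key, the update will be rejected and you would have to uninstall the old version first.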

      -

      Tips and tricks for 4x4 Off Road Rally 9

      -

      If you want to master 4x4 Off Road Rally 9 and become a pro off-road racer, you need to know some tips and tricks that will help you improve your performance and skills. Here are some of them:

      -

      Choose the right vehicle and upgrade it

      -

One of the most important factors that affect your success in the game is your choice of vehicle. Different vehicles have different strengths and weaknesses, such as speed, acceleration, handling, durability, and fuel consumption. You need to choose a vehicle that suits your style and preference, as well as the terrain and environment of each level. For example, an SUV might be good for rough and rocky roads, but a truck might be better for muddy and slippery ones. You also need to upgrade your vehicle regularly to enhance its performance and capabilities. You can upgrade your engine, suspension, tires, wheels, paint, stickers, and more using the coins and gems you earn in the game.

      -


      -

      Use the terrain and obstacles to your advantage

      -

      Another factor that affects your success in the game is your ability to adapt to the terrain and obstacles you encounter. You need to use them to your advantage instead of letting them slow you down or damage your vehicle. For example, you can use the ramps and hills to jump over gaps or obstacles, or use the water and snow to cool down your engine or drift around corners. You also need to avoid hitting rocks, trees, fences, or other vehicles that can damage your vehicle or make you lose control. You can use the camera button to change the view angle and see what's ahead of you.

      -

      Anticipate the challenges and plan your strategy

      -

      The last factor that affects your success in the game is your ability to anticipate the challenges and plan your strategy accordingly. You need to know what to expect in each level and how to deal with it effectively. For example, you need to know how long each level is, how many checkpoints there are, what kind of terrain and obstacles there are, what kind of weather conditions there are, and what kind of opponents there are. You also need to know how to manage your time, fuel, damage, and speed. You can use the map button to see the route and the checkpoints. You can also use the pause button to pause the game and adjust your settings or options.

      -

      Conclusion

      -

      4x4 Off Road Rally 9 is a thrilling off-road racing game that will keep you entertained for hours. You can enjoy realistic graphics, physics, sounds, vehicles, terrains, environments, modes, features, and challenges in this game. You can also download 4x4 Off Road Rally 9 Mod APK to get unlimited coins and gems, unlock all vehicles and features, remove ads, and access exclusive features and options. You can also use some tips and tricks to master the game and become a pro off-road racer. So what are you waiting for? Download 4x4 Off Road Rally 9 Mod APK now and have fun!

      -

      FAQs

      -

      Here are some frequently asked questions about 4x4 Off Road Rally 9 Mod APK:

• Is 4x4 Off Road Rally 9 Mod APK safe?
  Yes, 4x4 Off Road Rally 9 Mod APK is safe to download and install. It does not contain any viruses or malware that can harm your device or data. However, you should always download it from a trusted source like this one.
• Do I need to root my device to install 4x4 Off Road Rally 9 Mod APK?
  No, you do not need to root your device. You can install it on any Android device without any hassle.
• Will 4x4 Off Road Rally 9 Mod APK affect the original version of the game?
  No. You can have both versions installed on your device and play them separately. However, you should not use the same account or data for both versions, as it may cause some issues or conflicts.
• Can I play online with 4x4 Off Road Rally 9 Mod APK?
  Yes. You can join or create online rooms and compete with other players from around the world. However, you should be careful not to use any cheats or hacks that may get you banned or reported by other players.
• How can I update 4x4 Off Road Rally 9 Mod APK?
  You can update it by visiting this page and downloading the latest version of the mod APK file. You can then install it over the existing version without losing your progress or data.

      \ No newline at end of file diff --git a/spaces/simple0urra/skops-model-card-creator-2a23515a-d54e-4804-b365-27ed6e938735/example/Download Clash of Clans and Join the Epic Clan Wars!.md b/spaces/simple0urra/skops-model-card-creator-2a23515a-d54e-4804-b365-27ed6e938735/example/Download Clash of Clans and Join the Epic Clan Wars!.md deleted file mode 100644 index ef37016a20875ad35ec6ff9252a8755f53d680f0..0000000000000000000000000000000000000000 --- a/spaces/simple0urra/skops-model-card-creator-2a23515a-d54e-4804-b365-27ed6e938735/example/Download Clash of Clans and Join the Epic Clan Wars!.md +++ /dev/null @@ -1,114 +0,0 @@ - -

      Download Clash of Clans: A Guide for Beginners

      -

      If you are looking for a fun and addictive game that will keep you entertained for hours, you should download Clash of Clans. Clash of Clans is one of the most popular mobile games in the world, with over 500 million downloads and millions of active players. In this article, we will tell you what Clash of Clans is, why you should download it, how to download it, and how to start playing it.

      -

Download File: https://ssurll.com/2uNRT3
      -

      What is Clash of Clans?

      -

      Clash of Clans is a strategy game that lets you build your own village, raise your own clan, and compete in epic clan wars. You can customize your village with different buildings, defenses, traps, and decorations. You can also train different types of troops, such as barbarians, archers, wizards, dragons, and more. You can use your troops to attack other players' villages and loot their resources, or to defend your own village from enemy attacks.

      -

      A strategy game with millions of players

      -

Clash of Clans is not just a game; it's a community. You can join millions of players from around the world who share your passion for clashing. You can chat with other players, make friends, form alliances, and challenge each other. You can also watch replays of other players' attacks and defenses, learn from their strategies, and improve your own skills.

      -

      A game with different modes and features

      -

      Clash of Clans is a game that never gets boring. It has different modes and features that keep the game fresh and exciting. Some of these modes and features are:

• Clan Wars: This is the main mode of the game, where you can team up with your clanmates and fight against other clans in a two-day war. The clan with the most stars at the end of the war wins.
• Clan War Leagues: This is a competitive mode where you can compete with your clan against seven other clans in a week-long tournament. The clans are divided into different leagues based on their performance. The higher the league, the bigger the rewards.
• Clan Games: This is a cooperative mode where you can work together with your clan to complete different tasks and earn points. The more points you earn, the more rewards you unlock.
• Builder Base: This is a separate mode where you can build a second village on a mysterious island. You can also train different troops, such as raged barbarians, sneaky archers, and boxer giants, and use them to attack other players' builder bases and earn trophies, or to defend your own builder base from enemy attacks.
• Friendly Challenges: This is a casual mode where you can challenge your friends or clanmates to attack your village or builder base. You can use this mode to test your defenses or practice your attacks without losing any resources or trophies.
• Friendly Wars: This is a fun mode where you can arrange custom wars with other clans of your choice. You can set the rules and parameters of the war, such as the number of players, the duration, and the preparation time.

      Why should you download Clash of Clans?

      -

      Clash of Clans is a game that has something for everyone. Whether you are a casual gamer or a hardcore gamer, a solo player or a team player, a beginner or an expert, you will find something to enjoy in Clash of Clans. Here are some of the reasons why you should download Clash of Clans:

      -

      It's free to play and download

      -

      One of the best things about Clash of Clans is that it's completely free to play and download. You don't need to pay anything to download the game or to access its features. You can play as much as you want without any limitations. Of course, if you want to speed up your progress or get some extra perks, you can buy some in-game currency called gems with real money. But this is entirely optional and not necessary to enjoy the game.

      -

      It's fun and challenging

      -

      Another reason why you should download Clash of Clans is that it's fun and challenging. You will never get bored with Clash of Clans, as there is always something new to do or to discover. You will face different challenges and obstacles as you progress in the game, such as stronger enemies, tougher bases, and more complex strategies. You will also have to use your creativity and logic to design your own base and plan your own attacks. You will have to balance your resources, troops, buildings, and defenses to achieve your goals.

      -

      It's social and competitive

      -

      A third reason why you should download Clash of Clans is that it's social and competitive. You can join a clan or create your own clan and interact with other players from around the world. You can chat with them, share tips and tricks, donate troops, request troops, and support each other. You can also compete with them in clan wars, clan war leagues, clan games, and leaderboards. You can show off your skills and achievements and earn respect and recognition from your peers.

      -

      How to download Clash of Clans?

      -

      Downloading Clash of Clans is very easy and simple. All you need is a compatible device and an internet connection. Here are the steps to download Clash of Clans for Android and iOS devices:

      -


      -

      For Android devices

      -

      Step 1: Go to Google Play Store

      -

The first step to download Clash of Clans on an Android device is to open the Google Play Store. You can find it on your home screen or in your app drawer.

      -

      Step 2: Search for Clash of Clans

      -

      The second step is to search for Clash of Clans in the Google Play Store. You can use the search bar at the top of the screen and type "Clash of Clans". You will see a list of results matching your query.

      -

      Step 3: Tap on Install and wait for the download to finish

      -

      The third step is to tap on the Install button next to the Clash of Clans icon. This will start the download process. You will see a progress bar showing how much time is left until the download is complete. Once the download is finished, you will see an Open button instead of the Install button. Tap on it to launch the game.

      -

      For iOS devices

      -

      Step 1: Go to App Store

      -

The first step to download Clash of Clans on an iOS device is to open the App Store. You can find it on your home screen or in your dock.

      -

      Step 2: Search for Clash of Clans

      -

      The second step is to search for Clash of Clans in the App Store. You can use the search bar at the bottom of the screen and type "Clash of Clans". You will see a list of results matching your query.

      -

      Step 3: Tap on Get and wait for the download to finish

      -

      The third step is to tap on the Get button next to the Clash of Clans icon. This will start the download process. You will see a progress circle showing how much time is left until the download is complete. Once the download is finished, you will see an Open button instead of the Get button. Tap on it to launch the game.

      -

      How to start playing Clash of Clans?

      -

      Now that you have downloaded Clash of Clans, you are ready to start playing it. Here are some tips on how to start playing Clash of Clans:

      -

      Create your village and join a clan

      -

The first thing you need to do when you start playing Clash of Clans is to create your village. You will be guided by a tutorial that shows you the basics of the game, such as how to build and upgrade your buildings, how to collect and store your resources, and how to protect your village from enemy attacks. You will also be given some free gems, the game's premium currency, which you can use to speed up your progress or buy special items.

      -

      After you finish the tutorial, you will be able to join a clan or create your own clan. A clan is a group of players who share the same clan name, clan badge, and clan chat. Joining a clan will give you many benefits, such as being able to donate and request troops, participate in clan wars, clan war leagues, and clan games, and access the clan perks and rewards. You can search for a clan that suits your preferences, such as the language, the location, the level, the activity, and the war frequency. You can also invite your friends or family to join your clan or join theirs.

      -

      Build your army and attack other players

      -

      The second thing you need to do when you start playing Clash of Clans is to build your army and attack other players. You can train different types of troops in your barracks, such as barbarians, archers, giants, goblins, wall breakers, balloons, wizards, healers, dragons, and more. You can also unlock and upgrade different types of spells in your spell factory, such as lightning spell, healing spell, rage spell, jump spell, freeze spell, and more. You can also unlock and upgrade different types of heroes in your hero altar, such as the barbarian king, the archer queen, the grand warden, and the royal champion.

      -

      You can use your army to attack other players' villages and loot their resources, such as gold, elixir, dark elixir, and trophies. You can find other players to attack by using the matchmaking system or by using the revenge option. You can also scout their bases before attacking them and plan your strategy accordingly. You can use your spells and heroes to support your troops and boost their performance. You can also use your siege machines to break through their walls and defenses.

      -

      Participate in clan wars and events

      -

      The third thing you need to do when you start playing Clash of Clans is to participate in clan wars and events. Clan wars are the main feature of the game, where you can team up with your clanmates and fight against other clans in a two-day war. The first day is the preparation day, where you can prepare your war base and donate troops to your clan castle. The second day is the battle day, where you can attack two enemy bases and earn stars for your clan. The clan with the most stars at the end of the war wins.

      -

      Clan war leagues are a competitive feature of the game, where you can compete with your clan against seven other clans in a week-long tournament. The clans are divided into different leagues based on their performance. The higher the league, the bigger the rewards. You can earn league medals that you can use to buy exclusive items in the league shop.

      -

      Clan games are a cooperative feature of the game, where you can work together with your clan to complete different tasks and earn points. The more points you earn, the more rewards you unlock. You can choose from different rewards, such as resources, gems, magic items, and more.

      -

      Events are special occasions that happen regularly in the game, where you can enjoy various bonuses and discounts. For example, you can get reduced training time and cost for certain troops or spells, or increased loot and star bonus for certain attacks. You can also earn event points by completing event challenges and exchange them for event rewards.

      -

      Conclusion

      -

      Clash of Clans is a game that will keep you hooked for hours. It is a game that combines strategy, creativity, and fun. It is a game that lets you build your own village, raise your own clan, and compete in epic clan wars. It is a game that has millions of players from around the world who share your passion for clashing. It is a game that is free to play and download, but also offers optional in-app purchases for extra convenience and enjoyment.

      -

      If you are looking for a game that will challenge your mind, entertain your senses, and connect you with others, you should download Clash of Clans today. You will not regret it.

      -

      FAQs

      -

      Here are some of the frequently asked questions about Clash of Clans:

      -
• Q: How can I save my progress in Clash of Clans?
  A: You can save your progress by linking your game account to a Google Play account (for Android devices) or an Apple ID account (for iOS devices). This also lets you access your game from different devices or restore it if you lose your device or uninstall the game.
• Q: How can I contact the support team of Clash of Clans?
  A: Tap on the settings icon in the game and then tap on the help and support button. You can also visit the official Clash of Clans website or forums for more information and assistance.
• Q: How can I get more gems in Clash of Clans?
  A: You can get more gems by completing achievements, removing obstacles, participating in clan games, winning clan war leagues, or buying them with real money.
• Q: How can I upgrade my town hall in Clash of Clans?
  A: Collect enough gold and tap on the upgrade button on your town hall. Upgrading your town hall unlocks new buildings, troops, spells, and features, but also increases the difficulty of the game.
• Q: How can I join a clan in Clash of Clans?
  A: Tap on the clan icon in the game and then tap on the find a clan button. You can search for a clan that suits your preferences or browse the recommended clans. You can also create your own clan or join a clan that invites you.

      \ No newline at end of file diff --git a/spaces/simple0urra/skops-model-card-creator-2a23515a-d54e-4804-b365-27ed6e938735/example/Enjoy a Magical Adventure with Pony World Craft MOD APK 1.3.6 for Android.md b/spaces/simple0urra/skops-model-card-creator-2a23515a-d54e-4804-b365-27ed6e938735/example/Enjoy a Magical Adventure with Pony World Craft MOD APK 1.3.6 for Android.md deleted file mode 100644 index 866eaa0e66b561f479fbb5c8f1bdb77c1979c7fd..0000000000000000000000000000000000000000 --- a/spaces/simple0urra/skops-model-card-creator-2a23515a-d54e-4804-b365-27ed6e938735/example/Enjoy a Magical Adventure with Pony World Craft MOD APK 1.3.6 for Android.md +++ /dev/null @@ -1,99 +0,0 @@ -
      -

      Pony World Craft Mod APK An1: A Fun and Creative Game for Pony Lovers

      -

      Do you love ponies? Do you want to create your own pony world with unlimited resources and possibilities? If yes, then you should try Pony World Craft Mod APK An1, a modified version of the popular game Pony World Craft. In this article, we will tell you everything you need to know about this game, including what it is, why you should play it, and how to download and install it on your device.

      -

      What is Pony World Craft Mod APK An1?

      -

      Pony World Craft Mod APK An1 is a game that lets you explore, build, and customize your own pony world. You can choose from different types of ponies, such as unicorns, pegasus, or alicorns, and dress them up with various accessories. You can also create your own buildings, farms, gardens, castles, and more with different blocks and materials. You can play in different modes, such as survival, creative, or adventure, and interact with other ponies and animals in the game.

      -

DOWNLOAD » https://ssurll.com/2uNTGi
      -

      The original game: Pony World Craft

      -

The original game, Pony World Craft, was developed by Candy Room Games Rabbitco, a studio that specializes in creating games for kids and families. Released in 2020, it has received positive reviews from players and critics, and it is available for free on the Google Play Store, where it has over 10 million downloads.

      -

      The modded version: Pony World Craft Mod APK An1

      -

      The modded version, Pony World Craft Mod APK An1, is a modified version of the original game that offers some extra features and benefits for the players. The modded version was created by an unknown developer and is not affiliated with the original developer. The modded version is not available on Google Play Store and has to be downloaded from a third-party source.

      -

      -

      Why should you play Pony World Craft Mod APK An1?

      -

      There are many reasons why you should play Pony World Craft Mod APK An1 instead of the original game. Here are some of them:

      -

      Unlimited money, free purchase, and free craft

      -

      One of the main advantages of playing the modded version is that you get unlimited money in the game. You can use this money to buy anything you want in the game store without worrying about the cost. You can also get free purchase and free craft features that allow you to get any item or block in the game without spending any resources or materials.

      -

      Cute and colorful graphics

      -

      Another reason why you should play the modded version is that it has cute and colorful graphics that will appeal to anyone who loves ponies. The game has a bright and cheerful atmosphere that will make you feel happy and relaxed. The game also has smooth animations and sound effects that enhance the gameplay experience.

      -

      Various modes and activities

      -

      A third reason why you should play the modded version is that it has various modes and activities that will keep you entertained for hours. You can play in survival mode where you have to gather resources, craft items, and fight enemies. You can play in creative mode where you have unlimited resources and can build anything you want. You can play in adventure mode where you can explore different maps and quests. You can also play with your friends online or offline and chat with them in the game.

      -

      How to download and install Pony World Craft Mod APK An1?

      -

      If you want to play Pony World Craft Mod APK An1, you need to download and install it on your device. Here are the steps you need to follow:

      -

      Download the mod apk file from a trusted source

      -

The first step is to download the mod apk file from a trusted source. You can search for the file on the internet or use the link below to download it directly. The file size is about 40 MB and the latest version is 1.0.0. Once the download finishes, it is worth verifying the file before installing it, as sketched after the link below.

      -

      Download Pony World Craft Mod APK An1
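Because the file comes from a third-party source, it is sensible to check that what you received is byte-for-byte the file you meant to download before installing it. Below is a minimal sketch using sha256sum on a computer; the file name and the reference digest are hypothetical, since the download page does not publish a checksum, so you would need to obtain a trusted hash from whoever hosts the file.

# Compute the SHA-256 digest of the downloaded file (file name is hypothetical)
sha256sum pony-world-craft-mod-an1.apk
# Compare against a published reference digest (the value here is a placeholder)
echo "0000000000000000000000000000000000000000000000000000000000000000  pony-world-craft-mod-an1.apk" | sha256sum -c -

If the check prints OK, the file matches the reference; any other output means the download was corrupted or altered and should not be installed.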

      -

      Enable unknown sources on your device

      -

      The second step is to enable unknown sources on your device. This is necessary because the mod apk file is not from Google Play Store and your device may block its installation. To enable unknown sources, go to your device settings, security, and toggle on the option that says "allow installation of apps from unknown sources".

      -

      Install the mod apk file and enjoy the game

      -

      The third and final step is to install the mod apk file and enjoy the game. To install the file, locate it in your device storage and tap on it. Follow the instructions on the screen and wait for the installation to complete. Once done, you can launch the game from your app drawer or home screen and start playing.

      -

      Conclusion

      -

      Pony World Craft Mod APK An1 is a fun and creative game for pony lovers who want to create their own pony world with unlimited resources and possibilities. The game has cute and colorful graphics, various modes and activities, and unlimited money, free purchase, and free craft features. The game is easy to download and install on your device with a few simple steps. If you are looking for a game that will make you happy and relaxed, you should try Pony World Craft Mod APK An1 today.

      -

      FAQs

      -

      Here are some frequently asked questions about Pony World Craft Mod APK An1:

• Is Pony World Craft Mod APK An1 safe to play?
  Yes, it is safe to play as long as you download it from a trusted source. However, you should be careful about granting permissions to the app and avoid sharing any personal information in the game.
• Is Pony World Craft Mod APK An1 compatible with my device?
  It is compatible with most Android devices that have Android 4.4 or higher. However, some devices may experience lag or crashes due to low specifications or compatibility issues.
• Can I play Pony World Craft Mod APK An1 offline?
  Yes, you can play it offline without an internet connection. However, some features, such as online multiplayer mode or chat, may not work offline.
• Can I update Pony World Craft Mod APK An1?
  No, you cannot update it from the Google Play Store or any other source. If you want to update the game, you need to uninstall the modded version and install the latest version of the original game or another modded version.
• Can I play Pony World Craft Mod APK An1 with my friends?
  Yes, you can play it with your friends online or offline. You can join or create a server in the game and invite your friends to join you. You can also chat with them in the game using text or voice messages.
      \ No newline at end of file diff --git a/spaces/skf15963/summary/fengshen/examples/classification/finetune_classification_bert-3.9B_cmnli.sh b/spaces/skf15963/summary/fengshen/examples/classification/finetune_classification_bert-3.9B_cmnli.sh deleted file mode 100644 index da10752cff77be9462d17cbb45882543a5e0ed48..0000000000000000000000000000000000000000 --- a/spaces/skf15963/summary/fengshen/examples/classification/finetune_classification_bert-3.9B_cmnli.sh +++ /dev/null @@ -1,161 +0,0 @@ -#!/bin/bash -#SBATCH --job-name=slurm-test # create a short name for your job -#SBATCH --nodes=1 # node count -#SBATCH --ntasks=2 # total number of tasks across all nodes -#SBATCH --cpus-per-task=16 # cpu-cores per task (>1 if multi-threaded tasks) -#SBATCH --mem-per-cpu=8G # memory per cpu-core (4G is default) -#SBATCH --gres=gpu:2 # number of gpus per node -#SBATCH --mail-type=ALL # send email when job begins, ends or failed etc. - - -export TORCH_EXTENSIONS_DIR=/cognitive_comp/yangping/cache/torch_extendsions - -BERT_NAME=bert-3.9B - -TASK=cmnli -TEXTA_NAME=sentence1 -TEXTB_NAME=sentence2 -LABEL_NAME=label -ID_NAME=id - - -BATCH_SIZE=16 -VAL_BATCH_SIZE=56 -ZERO_STAGE=2 - - -ROOT_PATH=cognitive_comp -DATA_DIR=/$ROOT_PATH/yangping/data/ChineseCLUE_DATA/${TASK}_public/ -PRETRAINED_MODEL_PATH=/$ROOT_PATH/yangping/pretrained_model/$BERT_NAME/ - - -CHECKPOINT_PATH=/$ROOT_PATH/yangping/checkpoints/fengshen-finetune/$TASK/ -DEFAULT_ROOT_DIR=/cognitive_comp/yangping/nlp/fengshen/fengshen/scripts/log/$TASK/$BERT_NAME/ -OUTPUT_PATH=/$ROOT_PATH/yangping/nlp/modelevaluation/output/${TASK}_predict.json - - -config_json="./ds_config.json" -# Deepspeed figures out GAS dynamically from dynamic GBS via set_train_batch_size() -# reduce_bucket_size: hidden_size*hidden_size -# stage3_prefetch_bucket_size: 0.9 * hidden_size * hidden_size -# stage3_param_persistence_threshold: 10 * hidden_size - -cat < $config_json -{ - "train_micro_batch_size_per_gpu": $BATCH_SIZE, - "steps_per_print": 100, - "gradient_clipping": 1.0, - "zero_optimization": { - "stage": 3, - "offload_optimizer": { - "device": "cpu", - "pin_memory": true - }, - "offload_param": { - "device": "cpu", - "pin_memory": true - }, - "overlap_comm": true, - "contiguous_gradients": true, - "sub_group_size": 1e9, - "reduce_bucket_size": 6553600, - "stage3_prefetch_bucket_size": 5898240, - "stage3_param_persistence_threshold": 25600, - "stage3_max_live_parameters": 1e9, - "stage3_max_reuse_distance": 1e9, - "stage3_gather_fp16_weights_on_model_save": true - }, - "optimizer": { - "type": "Adam", - "params": { - "lr": 1e-6, - "betas": [ - 0.9, - 0.95 - ], - "eps": 1e-8, - "weight_decay": 1e-3 - } - }, - "scheduler": { - "type": "WarmupLR", - "params":{ - "warmup_min_lr": 5e-8, - "warmup_max_lr": 1e-6 - } - }, - "zero_allow_untested_optimizer": false, - "fp16": { - "enabled": true, - "loss_scale": 0, - "loss_scale_window": 1000, - "hysteresis": 2, - "min_loss_scale": 1 - }, - "activation_checkpointing": { - "partition_activations": false, - "contiguous_memory_optimization": false - }, - "wall_clock_breakdown": false -} -EOT - -export PL_DEEPSPEED_CONFIG_PATH=$config_json - - -DATA_ARGS="\ - --data_dir $DATA_DIR \ - --train_data train.json \ - --valid_data dev.json \ - --test_data test.json \ - --train_batchsize $BATCH_SIZE \ - --valid_batchsize $VAL_BATCH_SIZE \ - --max_length 128 \ - --texta_name $TEXTA_NAME \ - --textb_name $TEXTB_NAME \ - --label_name $LABEL_NAME \ - --id_name $ID_NAME \ - " - -MODEL_ARGS="\ - --learning_rate 0.000001 \ - --weight_decay 0.001 \ 
- --warmup 0.001 \ - --num_labels 3 \ - " - -MODEL_CHECKPOINT_ARGS="\ - --monitor val_acc \ - --save_top_k 3 \ - --mode max \ - --every_n_train_steps 100 \ - --save_weights_only True \ - --dirpath $CHECKPOINT_PATH \ - --filename model-{epoch:02d}-{val_acc:.4f} \ - " -TRAINER_ARGS="\ - --max_epochs 7 \ - --gpus 2 \ - --strategy deepspeed_stage_3 \ - --precision 16 \ - --gradient_clip_val 0.1 \ - --check_val_every_n_epoch 1 \ - --val_check_interval 100 \ - --default_root_dir $DEFAULT_ROOT_DIR \ - " - -options=" \ - --pretrained_model_path $PRETRAINED_MODEL_PATH \ - --output_save_path $OUTPUT_PATH \ - $DATA_ARGS \ - $MODEL_ARGS \ - $MODEL_CHECKPOINT_ARGS \ - $TRAINER_ARGS \ - " - -DOCKER_PATH=/$ROOT_PATH/yangping/containers/pytorch21_06_py3_docker_image.sif -SCRIPT_PATH=/$ROOT_PATH/yangping/nlp/fengshen/fengshen/examples/finetune_classification.py - -# python3 $SCRIPT_PATH $options -srun singularity exec --nv -B /cognitive_comp/:/cognitive_comp/ $DOCKER_PATH python3 $SCRIPT_PATH $options - diff --git a/spaces/sklearn-docs/Comparison_K_Means_MiniBatchKMeans/README.md b/spaces/sklearn-docs/Comparison_K_Means_MiniBatchKMeans/README.md deleted file mode 100644 index 4792b5b20777ead22e2608e672787f0078dd99ba..0000000000000000000000000000000000000000 --- a/spaces/sklearn-docs/Comparison_K_Means_MiniBatchKMeans/README.md +++ /dev/null @@ -1,13 +0,0 @@ ---- -title: Comparison K Means MiniBatchKMeans -emoji: 🔥 -colorFrom: red -colorTo: yellow -sdk: gradio -sdk_version: 3.24.1 -app_file: app.py -pinned: false -license: creativeml-openrail-m ---- - -Check out the configuration reference at https://huggingface.co/docs/hub/spaces-config-reference diff --git a/spaces/spock74/whisper-webui/app-network.py b/spaces/spock74/whisper-webui/app-network.py deleted file mode 100644 index 7605c4b126dfc7dac188dce38551ca8ae84d67db..0000000000000000000000000000000000000000 --- a/spaces/spock74/whisper-webui/app-network.py +++ /dev/null @@ -1,3 +0,0 @@ -# Run the app with no audio file restrictions, and make it available on the network -from app import create_ui -create_ui(-1, server_name="0.0.0.0") \ No newline at end of file diff --git a/spaces/sriramelango/Social_Classification_Public/fairseq/examples/criss/mining/mine.py b/spaces/sriramelango/Social_Classification_Public/fairseq/examples/criss/mining/mine.py deleted file mode 100644 index c872da196fe0df776622365748ad7963fee1f0a0..0000000000000000000000000000000000000000 --- a/spaces/sriramelango/Social_Classification_Public/fairseq/examples/criss/mining/mine.py +++ /dev/null @@ -1,240 +0,0 @@ -#!/usr/bin/env python3 -u -# Copyright (c) Facebook, Inc. and its affiliates. -# -# This source code is licensed under the MIT license found in the -# LICENSE file in the root directory of this source tree. 
-import argparse -import glob -from subprocess import check_call - -try: - import faiss - - has_faiss = True -except ImportError: - has_faiss = False -import numpy as np - - -GB = 1024 * 1024 * 1024 - - -def call(cmd): - print(cmd) - check_call(cmd, shell=True) - - -def get_batches(directory, lang, prefix="all_avg_pool"): - print(f"Finding in {directory}/{prefix}.{lang}*") - files = glob.glob(f"{directory}/{prefix}.{lang}*") - emb_files = [] - txt_files = [] - for emb_fi in files: - emb_files.append(emb_fi) - txt_fi = emb_fi.replace(prefix, "sentences") - txt_files.append(txt_fi) - return emb_files, txt_files - - -def load_batch(emb_file, dim): - embeddings = np.fromfile(emb_file, dtype=np.float32) - num_rows = int(embeddings.shape[0] / dim) - embeddings = embeddings.reshape((num_rows, dim)) - faiss.normalize_L2(embeddings) - return embeddings - - -def knnGPU_sharded(x_batches_f, y_batches_f, dim, k, direction="x2y"): - if not has_faiss: - raise ImportError("Please install Faiss") - sims = [] - inds = [] - xfrom = 0 - xto = 0 - for x_batch_f in x_batches_f: - yfrom = 0 - yto = 0 - x_batch = load_batch(x_batch_f, dim) - xto = xfrom + x_batch.shape[0] - bsims, binds = [], [] - for y_batch_f in y_batches_f: - y_batch = load_batch(y_batch_f, dim) - neighbor_size = min(k, y_batch.shape[0]) - yto = yfrom + y_batch.shape[0] - print("{}-{} -> {}-{}".format(xfrom, xto, yfrom, yto)) - idx = faiss.IndexFlatIP(dim) - idx = faiss.index_cpu_to_all_gpus(idx) - idx.add(y_batch) - bsim, bind = idx.search(x_batch, neighbor_size) - - bsims.append(bsim) - binds.append(bind + yfrom) - yfrom += y_batch.shape[0] - del idx - del y_batch - bsims = np.concatenate(bsims, axis=1) - binds = np.concatenate(binds, axis=1) - aux = np.argsort(-bsims, axis=1) - sim_batch = np.zeros((x_batch.shape[0], k), dtype=np.float32) - ind_batch = np.zeros((x_batch.shape[0], k), dtype=np.int64) - for i in range(x_batch.shape[0]): - for j in range(k): - sim_batch[i, j] = bsims[i, aux[i, j]] - ind_batch[i, j] = binds[i, aux[i, j]] - sims.append(sim_batch) - inds.append(ind_batch) - xfrom += x_batch.shape[0] - del x_batch - sim = np.concatenate(sims, axis=0) - ind = np.concatenate(inds, axis=0) - return sim, ind - - -def score(sim, fwd_mean, bwd_mean, margin): - return margin(sim, (fwd_mean + bwd_mean) / 2) - - -def score_candidates( - sim_mat, candidate_inds, fwd_mean, bwd_mean, margin, verbose=False -): - print(" - scoring {:d} candidates".format(sim_mat.shape[0])) - scores = np.zeros(candidate_inds.shape) - for i in range(scores.shape[0]): - for j in range(scores.shape[1]): - k = int(candidate_inds[i, j]) - scores[i, j] = score(sim_mat[i, j], fwd_mean[i], bwd_mean[k], margin) - return scores - - -def load_text(files): - all_sentences = [] - for fi in files: - with open(fi) as sentence_fi: - for line in sentence_fi: - all_sentences.append(line.strip()) - print(f"Read {len(all_sentences)} sentences") - return all_sentences - - -if __name__ == "__main__": - parser = argparse.ArgumentParser(description="Mine bitext") - parser.add_argument("--src-lang", help="Source language") - parser.add_argument("--tgt-lang", help="Target language") - parser.add_argument( - "--dict-path", help="Path to dictionary file", default="dict.txt" - ) - parser.add_argument( - "--spm-path", help="Path to SPM model file", default="sentence.bpe.model" - ) - parser.add_argument("--dim", type=int, default=1024, help="Embedding dimension") - parser.add_argument("--mem", type=int, default=5, help="Memory in GB") - parser.add_argument("--src-dir", help="Source 
directory") - parser.add_argument("--tgt-dir", help="Target directory") - parser.add_argument("--output", help="Output path") - parser.add_argument( - "--neighborhood", type=int, default=4, help="Embedding dimension" - ) - parser.add_argument( - "--threshold", type=float, default=1.06, help="Threshold on mined bitext" - ) - parser.add_argument( - "--valid-size", - type=int, - default=2000, - help="Number of sentences used for validation set", - ) - parser.add_argument( - "--min-count", - type=int, - default=50000, - help="Min num sentences used for each language", - ) - args = parser.parse_args() - - x_batches_f, x_sents_f = get_batches(args.src_dir, args.src_lang) - y_batches_f, y_sents_f = get_batches(args.tgt_dir, args.tgt_lang) - margin = lambda a, b: a / b - y2x_sim, y2x_ind = knnGPU_sharded( - y_batches_f, x_batches_f, args.dim, args.neighborhood, direction="y2x" - ) - x2y_sim, x2y_ind = knnGPU_sharded( - x_batches_f, y_batches_f, args.dim, args.neighborhood, direction="x2y" - ) - - x2y_mean = x2y_sim.mean(axis=1) - y2x_mean = y2x_sim.mean(axis=1) - fwd_scores = score_candidates(x2y_sim, x2y_ind, x2y_mean, y2x_mean, margin) - bwd_scores = score_candidates(y2x_sim, y2x_ind, y2x_mean, x2y_mean, margin) - fwd_best = x2y_ind[np.arange(x2y_sim.shape[0]), fwd_scores.argmax(axis=1)] - bwd_best = y2x_ind[np.arange(y2x_sim.shape[0]), bwd_scores.argmax(axis=1)] - indices = np.stack( - ( - np.concatenate((np.arange(x2y_ind.shape[0]), bwd_best)), - np.concatenate((fwd_best, np.arange(y2x_ind.shape[0]))), - ), - axis=1, - ) - scores = np.concatenate((fwd_scores.max(axis=1), bwd_scores.max(axis=1))) - - x_sentences = load_text(x_sents_f) - y_sentences = load_text(y_sents_f) - - threshold = args.threshold - min_count = args.min_count - seen_src, seen_trg = set(), set() - directory = args.output - call(f"mkdir -p {directory}") - src_out = open( - f"{directory}/all.{args.src_lang}", - mode="w", - encoding="utf-8", - errors="surrogateescape", - ) - tgt_out = open( - f"{directory}/all.{args.tgt_lang}", - mode="w", - encoding="utf-8", - errors="surrogateescape", - ) - scores_out = open( - f"{directory}/all.scores", mode="w", encoding="utf-8", errors="surrogateescape" - ) - count = 0 - for i in np.argsort(-scores): - src_ind, trg_ind = indices[i] - if src_ind not in seen_src and trg_ind not in seen_trg: - seen_src.add(src_ind) - seen_trg.add(trg_ind) - if scores[i] > threshold or count < min_count: - if x_sentences[src_ind]: - print(scores[i], file=scores_out) - print(x_sentences[src_ind], file=src_out) - print(y_sentences[trg_ind], file=tgt_out) - count += 1 - else: - print(f"Ignoring sentence: {x_sentences[src_ind]}") - src_out.close() - tgt_out.close() - scores_out.close() - - print(f"Found {count} pairs for threshold={threshold}") - with open(f"{directory}/all.{args.src_lang}") as all_s, open( - f"{directory}/all.{args.tgt_lang}" - ) as all_t, open(f"{directory}/valid.{args.src_lang}", "w") as valid_s, open( - f"{directory}/valid.{args.tgt_lang}", "w" - ) as valid_t, open( - f"{directory}/train.{args.src_lang}", "w" - ) as train_s, open( - f"{directory}/train.{args.tgt_lang}", "w" - ) as train_t: - count = 0 - for s_line, t_line in zip(all_s, all_t): - s_line = s_line.split("\t")[1] - t_line = t_line.split("\t")[1] - if count >= args.valid_size: - train_s.write(s_line) - train_t.write(t_line) - else: - valid_s.write(s_line) - valid_t.write(t_line) - count += 1 diff --git a/spaces/sriramelango/Social_Classification_Public/fairseq/tests/test_iterators.py 
b/spaces/sriramelango/Social_Classification_Public/fairseq/tests/test_iterators.py deleted file mode 100644 index 7b3dd4848553357e5e8326ed3a31cf5d68ceea94..0000000000000000000000000000000000000000 --- a/spaces/sriramelango/Social_Classification_Public/fairseq/tests/test_iterators.py +++ /dev/null @@ -1,137 +0,0 @@ -# Copyright (c) Facebook, Inc. and its affiliates. -# -# This source code is licensed under the MIT license found in the -# LICENSE file in the root directory of this source tree. - -import unittest - -from fairseq.data import iterators - - -class TestIterators(unittest.TestCase): - def test_counting_iterator_index(self, ref=None, itr=None): - # Test the indexing functionality of CountingIterator - if ref is None: - assert itr is None - ref = list(range(10)) - itr = iterators.CountingIterator(ref) - else: - assert len(ref) == 10 - assert itr is not None - - self.assertTrue(itr.has_next()) - self.assertEqual(itr.n, 0) - self.assertEqual(next(itr), ref[0]) - self.assertEqual(itr.n, 1) - self.assertEqual(next(itr), ref[1]) - self.assertEqual(itr.n, 2) - itr.skip(3) - self.assertEqual(itr.n, 5) - self.assertEqual(next(itr), ref[5]) - itr.skip(2) - self.assertEqual(itr.n, 8) - self.assertEqual(list(itr), [ref[8], ref[9]]) - self.assertFalse(itr.has_next()) - - def test_counting_iterator_length_mismatch(self): - ref = list(range(10)) - # When the underlying iterable is longer than the CountingIterator, - # the remaining items in the iterable should be ignored - itr = iterators.CountingIterator(ref, total=8) - self.assertEqual(list(itr), ref[:8]) - # When the underlying iterable is shorter than the CountingIterator, - # raise an IndexError when the underlying iterable is exhausted - itr = iterators.CountingIterator(ref, total=12) - self.assertRaises(IndexError, list, itr) - - def test_counting_iterator_take(self): - # Test the "take" method of CountingIterator - ref = list(range(10)) - itr = iterators.CountingIterator(ref) - itr.take(5) - self.assertEqual(len(itr), len(list(iter(itr)))) - self.assertEqual(len(itr), 5) - - itr = iterators.CountingIterator(ref) - itr.take(5) - self.assertEqual(next(itr), ref[0]) - self.assertEqual(next(itr), ref[1]) - itr.skip(2) - self.assertEqual(next(itr), ref[4]) - self.assertFalse(itr.has_next()) - - def test_grouped_iterator(self): - # test correctness - x = list(range(10)) - itr = iterators.GroupedIterator(x, 1) - self.assertEqual(list(itr), [[0], [1], [2], [3], [4], [5], [6], [7], [8], [9]]) - itr = iterators.GroupedIterator(x, 4) - self.assertEqual(list(itr), [[0, 1, 2, 3], [4, 5, 6, 7], [8, 9]]) - itr = iterators.GroupedIterator(x, 5) - self.assertEqual(list(itr), [[0, 1, 2, 3, 4], [5, 6, 7, 8, 9]]) - - # test the GroupIterator also works correctly as a CountingIterator - x = list(range(30)) - ref = list(iterators.GroupedIterator(x, 3)) - itr = iterators.GroupedIterator(x, 3) - self.test_counting_iterator_index(ref, itr) - - def test_sharded_iterator(self): - # test correctness - x = list(range(10)) - itr = iterators.ShardedIterator(x, num_shards=1, shard_id=0) - self.assertEqual(list(itr), x) - itr = iterators.ShardedIterator(x, num_shards=2, shard_id=0) - self.assertEqual(list(itr), [0, 2, 4, 6, 8]) - itr = iterators.ShardedIterator(x, num_shards=2, shard_id=1) - self.assertEqual(list(itr), [1, 3, 5, 7, 9]) - itr = iterators.ShardedIterator(x, num_shards=3, shard_id=0) - self.assertEqual(list(itr), [0, 3, 6, 9]) - itr = iterators.ShardedIterator(x, num_shards=3, shard_id=1) - self.assertEqual(list(itr), [1, 4, 7, None]) - itr = 
iterators.ShardedIterator(x, num_shards=3, shard_id=2) - self.assertEqual(list(itr), [2, 5, 8, None]) - - # test CountingIterator functionality - x = list(range(30)) - ref = list(iterators.ShardedIterator(x, num_shards=3, shard_id=0)) - itr = iterators.ShardedIterator(x, num_shards=3, shard_id=0) - self.test_counting_iterator_index(ref, itr) - - def test_counting_iterator_buffered_iterator_take(self): - ref = list(range(10)) - buffered_itr = iterators.BufferedIterator(2, ref) - itr = iterators.CountingIterator(buffered_itr) - itr.take(5) - self.assertEqual(len(itr), len(list(iter(itr)))) - self.assertEqual(len(itr), 5) - - buffered_itr = iterators.BufferedIterator(2, ref) - itr = iterators.CountingIterator(buffered_itr) - itr.take(5) - self.assertEqual(len(buffered_itr), 5) - self.assertEqual(len(list(iter(buffered_itr))), 5) - - buffered_itr = iterators.BufferedIterator(2, ref) - itr = iterators.CountingIterator(buffered_itr) - itr.take(5) - self.assertEqual(next(itr), ref[0]) - self.assertEqual(next(itr), ref[1]) - itr.skip(2) - self.assertEqual(next(itr), ref[4]) - self.assertFalse(itr.has_next()) - self.assertRaises(StopIteration, next, buffered_itr) - - ref = list(range(4, 10)) - buffered_itr = iterators.BufferedIterator(2, ref) - itr = iterators.CountingIterator(buffered_itr, start=4) - itr.take(5) - self.assertEqual(len(itr), 5) - self.assertEqual(len(buffered_itr), 1) - self.assertEqual(next(itr), ref[0]) - self.assertFalse(itr.has_next()) - self.assertRaises(StopIteration, next, buffered_itr) - - -if __name__ == "__main__": - unittest.main() diff --git a/spaces/starlit7/KorPoliticsTTS/models.py b/spaces/starlit7/KorPoliticsTTS/models.py deleted file mode 100644 index fe004e94bbe9074ec736f14325268f4515a53420..0000000000000000000000000000000000000000 --- a/spaces/starlit7/KorPoliticsTTS/models.py +++ /dev/null @@ -1,540 +0,0 @@ -import math -import torch -from torch import nn -from torch.nn import functional as F - -import commons -import modules -import attentions -import monotonic_align - -from torch.nn import Conv1d, ConvTranspose1d, Conv2d -from torch.nn.utils import weight_norm, remove_weight_norm, spectral_norm -from commons import init_weights, get_padding - - -class StochasticDurationPredictor(nn.Module): - def __init__(self, in_channels, filter_channels, kernel_size, p_dropout, n_flows=4, gin_channels=0): - super().__init__() - filter_channels = in_channels # it needs to be removed from future version. 
- self.in_channels = in_channels - self.filter_channels = filter_channels - self.kernel_size = kernel_size - self.p_dropout = p_dropout - self.n_flows = n_flows - self.gin_channels = gin_channels - - self.log_flow = modules.Log() - self.flows = nn.ModuleList() - self.flows.append(modules.ElementwiseAffine(2)) - for i in range(n_flows): - self.flows.append(modules.ConvFlow(2, filter_channels, kernel_size, n_layers=3)) - self.flows.append(modules.Flip()) - - self.post_pre = nn.Conv1d(1, filter_channels, 1) - self.post_proj = nn.Conv1d(filter_channels, filter_channels, 1) - self.post_convs = modules.DDSConv(filter_channels, kernel_size, n_layers=3, p_dropout=p_dropout) - self.post_flows = nn.ModuleList() - self.post_flows.append(modules.ElementwiseAffine(2)) - for i in range(4): - self.post_flows.append(modules.ConvFlow(2, filter_channels, kernel_size, n_layers=3)) - self.post_flows.append(modules.Flip()) - - self.pre = nn.Conv1d(in_channels, filter_channels, 1) - self.proj = nn.Conv1d(filter_channels, filter_channels, 1) - self.convs = modules.DDSConv(filter_channels, kernel_size, n_layers=3, p_dropout=p_dropout) - if gin_channels != 0: - self.cond = nn.Conv1d(gin_channels, filter_channels, 1) - - def forward(self, x, x_mask, w=None, g=None, reverse=False, noise_scale=1.0): - x = torch.detach(x) - x = self.pre(x) - if g is not None: - g = torch.detach(g) - x = x + self.cond(g) - x = self.convs(x, x_mask) - x = self.proj(x) * x_mask - - if not reverse: - flows = self.flows - assert w is not None - - logdet_tot_q = 0 - h_w = self.post_pre(w) - h_w = self.post_convs(h_w, x_mask) - h_w = self.post_proj(h_w) * x_mask - e_q = torch.randn(w.size(0), 2, w.size(2)).to(device=x.device, dtype=x.dtype) * x_mask - z_q = e_q - for flow in self.post_flows: - z_q, logdet_q = flow(z_q, x_mask, g=(x + h_w)) - logdet_tot_q += logdet_q - z_u, z1 = torch.split(z_q, [1, 1], 1) - u = torch.sigmoid(z_u) * x_mask - z0 = (w - u) * x_mask - logdet_tot_q += torch.sum((F.logsigmoid(z_u) + F.logsigmoid(-z_u)) * x_mask, [1, 2]) - logq = torch.sum(-0.5 * (math.log(2 * math.pi) + (e_q ** 2)) * x_mask, [1, 2]) - logdet_tot_q - - logdet_tot = 0 - z0, logdet = self.log_flow(z0, x_mask) - logdet_tot += logdet - z = torch.cat([z0, z1], 1) - for flow in flows: - z, logdet = flow(z, x_mask, g=x, reverse=reverse) - logdet_tot = logdet_tot + logdet - nll = torch.sum(0.5 * (math.log(2 * math.pi) + (z ** 2)) * x_mask, [1, 2]) - logdet_tot - return nll + logq # [b] - else: - flows = list(reversed(self.flows)) - flows = flows[:-2] + [flows[-1]] # remove a useless vflow - z = torch.randn(x.size(0), 2, x.size(2)).to(device=x.device, dtype=x.dtype) * noise_scale - for flow in flows: - z = flow(z, x_mask, g=x, reverse=reverse) - z0, z1 = torch.split(z, [1, 1], 1) - logw = z0 - return logw - - -class DurationPredictor(nn.Module): - def __init__(self, in_channels, filter_channels, kernel_size, p_dropout, gin_channels=0): - super().__init__() - - self.in_channels = in_channels - self.filter_channels = filter_channels - self.kernel_size = kernel_size - self.p_dropout = p_dropout - self.gin_channels = gin_channels - - self.drop = nn.Dropout(p_dropout) - self.conv_1 = nn.Conv1d(in_channels, filter_channels, kernel_size, padding=kernel_size // 2) - self.norm_1 = modules.LayerNorm(filter_channels) - self.conv_2 = nn.Conv1d(filter_channels, filter_channels, kernel_size, padding=kernel_size // 2) - self.norm_2 = modules.LayerNorm(filter_channels) - self.proj = nn.Conv1d(filter_channels, 1, 1) - - if gin_channels != 0: - self.cond = 
nn.Conv1d(gin_channels, in_channels, 1) - - def forward(self, x, x_mask, g=None): - x = torch.detach(x) - if g is not None: - g = torch.detach(g) - x = x + self.cond(g) - x = self.conv_1(x * x_mask) - x = torch.relu(x) - x = self.norm_1(x) - x = self.drop(x) - x = self.conv_2(x * x_mask) - x = torch.relu(x) - x = self.norm_2(x) - x = self.drop(x) - x = self.proj(x * x_mask) - return x * x_mask - - -class TextEncoder(nn.Module): - def __init__(self, - n_vocab, - out_channels, - hidden_channels, - filter_channels, - n_heads, - n_layers, - kernel_size, - p_dropout): - super().__init__() - self.n_vocab = n_vocab - self.out_channels = out_channels - self.hidden_channels = hidden_channels - self.filter_channels = filter_channels - self.n_heads = n_heads - self.n_layers = n_layers - self.kernel_size = kernel_size - self.p_dropout = p_dropout - - if self.n_vocab != 0: - self.emb = nn.Embedding(n_vocab, hidden_channels) - nn.init.normal_(self.emb.weight, 0.0, hidden_channels ** -0.5) - - self.encoder = attentions.Encoder( - hidden_channels, - filter_channels, - n_heads, - n_layers, - kernel_size, - p_dropout) - self.proj = nn.Conv1d(hidden_channels, out_channels * 2, 1) - - def forward(self, x, x_lengths): - if self.n_vocab != 0: - x = self.emb(x) * math.sqrt(self.hidden_channels) # [b, t, h] - x = torch.transpose(x, 1, -1) # [b, h, t] - x_mask = torch.unsqueeze(commons.sequence_mask(x_lengths, x.size(2)), 1).to(x.dtype) - - x = self.encoder(x * x_mask, x_mask) - stats = self.proj(x) * x_mask - - m, logs = torch.split(stats, self.out_channels, dim=1) - return x, m, logs, x_mask - - -class ResidualCouplingBlock(nn.Module): - def __init__(self, - channels, - hidden_channels, - kernel_size, - dilation_rate, - n_layers, - n_flows=4, - gin_channels=0): - super().__init__() - self.channels = channels - self.hidden_channels = hidden_channels - self.kernel_size = kernel_size - self.dilation_rate = dilation_rate - self.n_layers = n_layers - self.n_flows = n_flows - self.gin_channels = gin_channels - - self.flows = nn.ModuleList() - for i in range(n_flows): - self.flows.append( - modules.ResidualCouplingLayer(channels, hidden_channels, kernel_size, dilation_rate, n_layers, - gin_channels=gin_channels, mean_only=True)) - self.flows.append(modules.Flip()) - - def forward(self, x, x_mask, g=None, reverse=False): - if not reverse: - for flow in self.flows: - x, _ = flow(x, x_mask, g=g, reverse=reverse) - else: - for flow in reversed(self.flows): - x = flow(x, x_mask, g=g, reverse=reverse) - return x - - -class PosteriorEncoder(nn.Module): - def __init__(self, - in_channels, - out_channels, - hidden_channels, - kernel_size, - dilation_rate, - n_layers, - gin_channels=0): - super().__init__() - self.in_channels = in_channels - self.out_channels = out_channels - self.hidden_channels = hidden_channels - self.kernel_size = kernel_size - self.dilation_rate = dilation_rate - self.n_layers = n_layers - self.gin_channels = gin_channels - - self.pre = nn.Conv1d(in_channels, hidden_channels, 1) - self.enc = modules.WN(hidden_channels, kernel_size, dilation_rate, n_layers, gin_channels=gin_channels) - self.proj = nn.Conv1d(hidden_channels, out_channels * 2, 1) - - def forward(self, x, x_lengths, g=None): - x_mask = torch.unsqueeze(commons.sequence_mask(x_lengths, x.size(2)), 1).to(x.dtype) - x = self.pre(x) * x_mask - x = self.enc(x, x_mask, g=g) - stats = self.proj(x) * x_mask - m, logs = torch.split(stats, self.out_channels, dim=1) - z = (m + torch.randn_like(m) * torch.exp(logs)) * x_mask - return z, m, logs, x_mask - - 
-class Generator(torch.nn.Module): - def __init__(self, initial_channel, resblock, resblock_kernel_sizes, resblock_dilation_sizes, upsample_rates, - upsample_initial_channel, upsample_kernel_sizes, gin_channels=0): - super(Generator, self).__init__() - self.num_kernels = len(resblock_kernel_sizes) - self.num_upsamples = len(upsample_rates) - self.conv_pre = Conv1d(initial_channel, upsample_initial_channel, 7, 1, padding=3) - resblock = modules.ResBlock1 if resblock == '1' else modules.ResBlock2 - - self.ups = nn.ModuleList() - for i, (u, k) in enumerate(zip(upsample_rates, upsample_kernel_sizes)): - self.ups.append(weight_norm( - ConvTranspose1d(upsample_initial_channel // (2 ** i), upsample_initial_channel // (2 ** (i + 1)), - k, u, padding=(k - u) // 2))) - - self.resblocks = nn.ModuleList() - for i in range(len(self.ups)): - ch = upsample_initial_channel // (2 ** (i + 1)) - for j, (k, d) in enumerate(zip(resblock_kernel_sizes, resblock_dilation_sizes)): - self.resblocks.append(resblock(ch, k, d)) - - self.conv_post = Conv1d(ch, 1, 7, 1, padding=3, bias=False) - self.ups.apply(init_weights) - - if gin_channels != 0: - self.cond = nn.Conv1d(gin_channels, upsample_initial_channel, 1) - - def forward(self, x, g=None): - x = self.conv_pre(x) - if g is not None: - x = x + self.cond(g) - - for i in range(self.num_upsamples): - x = F.leaky_relu(x, modules.LRELU_SLOPE) - x = self.ups[i](x) - xs = None - for j in range(self.num_kernels): - if xs is None: - xs = self.resblocks[i * self.num_kernels + j](x) - else: - xs += self.resblocks[i * self.num_kernels + j](x) - x = xs / self.num_kernels - x = F.leaky_relu(x) - x = self.conv_post(x) - x = torch.tanh(x) - - return x - - def remove_weight_norm(self): - print('Removing weight norm...') - for l in self.ups: - remove_weight_norm(l) - for l in self.resblocks: - l.remove_weight_norm() - - -class DiscriminatorP(torch.nn.Module): - def __init__(self, period, kernel_size=5, stride=3, use_spectral_norm=False): - super(DiscriminatorP, self).__init__() - self.period = period - self.use_spectral_norm = use_spectral_norm - norm_f = weight_norm if use_spectral_norm == False else spectral_norm - self.convs = nn.ModuleList([ - norm_f(Conv2d(1, 32, (kernel_size, 1), (stride, 1), padding=(get_padding(kernel_size, 1), 0))), - norm_f(Conv2d(32, 128, (kernel_size, 1), (stride, 1), padding=(get_padding(kernel_size, 1), 0))), - norm_f(Conv2d(128, 512, (kernel_size, 1), (stride, 1), padding=(get_padding(kernel_size, 1), 0))), - norm_f(Conv2d(512, 1024, (kernel_size, 1), (stride, 1), padding=(get_padding(kernel_size, 1), 0))), - norm_f(Conv2d(1024, 1024, (kernel_size, 1), 1, padding=(get_padding(kernel_size, 1), 0))), - ]) - self.conv_post = norm_f(Conv2d(1024, 1, (3, 1), 1, padding=(1, 0))) - - def forward(self, x): - fmap = [] - - # 1d to 2d - b, c, t = x.shape - if t % self.period != 0: # pad first - n_pad = self.period - (t % self.period) - x = F.pad(x, (0, n_pad), "reflect") - t = t + n_pad - x = x.view(b, c, t // self.period, self.period) - - for l in self.convs: - x = l(x) - x = F.leaky_relu(x, modules.LRELU_SLOPE) - fmap.append(x) - x = self.conv_post(x) - fmap.append(x) - x = torch.flatten(x, 1, -1) - - return x, fmap - - -class DiscriminatorS(torch.nn.Module): - def __init__(self, use_spectral_norm=False): - super(DiscriminatorS, self).__init__() - norm_f = weight_norm if use_spectral_norm == False else spectral_norm - self.convs = nn.ModuleList([ - norm_f(Conv1d(1, 16, 15, 1, padding=7)), - norm_f(Conv1d(16, 64, 41, 4, groups=4, padding=20)), - 
norm_f(Conv1d(64, 256, 41, 4, groups=16, padding=20)), - norm_f(Conv1d(256, 1024, 41, 4, groups=64, padding=20)), - norm_f(Conv1d(1024, 1024, 41, 4, groups=256, padding=20)), - norm_f(Conv1d(1024, 1024, 5, 1, padding=2)), - ]) - self.conv_post = norm_f(Conv1d(1024, 1, 3, 1, padding=1)) - - def forward(self, x): - fmap = [] - - for l in self.convs: - x = l(x) - x = F.leaky_relu(x, modules.LRELU_SLOPE) - fmap.append(x) - x = self.conv_post(x) - fmap.append(x) - x = torch.flatten(x, 1, -1) - - return x, fmap - - -class MultiPeriodDiscriminator(torch.nn.Module): - def __init__(self, use_spectral_norm=False): - super(MultiPeriodDiscriminator, self).__init__() - periods = [2, 3, 5, 7, 11] - - discs = [DiscriminatorS(use_spectral_norm=use_spectral_norm)] - discs = discs + [DiscriminatorP(i, use_spectral_norm=use_spectral_norm) for i in periods] - self.discriminators = nn.ModuleList(discs) - - def forward(self, y, y_hat): - y_d_rs = [] - y_d_gs = [] - fmap_rs = [] - fmap_gs = [] - for i, d in enumerate(self.discriminators): - y_d_r, fmap_r = d(y) - y_d_g, fmap_g = d(y_hat) - y_d_rs.append(y_d_r) - y_d_gs.append(y_d_g) - fmap_rs.append(fmap_r) - fmap_gs.append(fmap_g) - - return y_d_rs, y_d_gs, fmap_rs, fmap_gs - - -class SynthesizerTrn(nn.Module): - """ - Synthesizer for Training - """ - - def __init__(self, - n_vocab, - spec_channels, - segment_size, - inter_channels, - hidden_channels, - filter_channels, - n_heads, - n_layers, - kernel_size, - p_dropout, - resblock, - resblock_kernel_sizes, - resblock_dilation_sizes, - upsample_rates, - upsample_initial_channel, - upsample_kernel_sizes, - n_speakers=0, - gin_channels=0, - use_sdp=True, - **kwargs): - - super().__init__() - self.n_vocab = n_vocab - self.spec_channels = spec_channels - self.inter_channels = inter_channels - self.hidden_channels = hidden_channels - self.filter_channels = filter_channels - self.n_heads = n_heads - self.n_layers = n_layers - self.kernel_size = kernel_size - self.p_dropout = p_dropout - self.resblock = resblock - self.resblock_kernel_sizes = resblock_kernel_sizes - self.resblock_dilation_sizes = resblock_dilation_sizes - self.upsample_rates = upsample_rates - self.upsample_initial_channel = upsample_initial_channel - self.upsample_kernel_sizes = upsample_kernel_sizes - self.segment_size = segment_size - self.n_speakers = n_speakers - self.gin_channels = gin_channels - - self.use_sdp = use_sdp - - self.enc_p = TextEncoder(n_vocab, - inter_channels, - hidden_channels, - filter_channels, - n_heads, - n_layers, - kernel_size, - p_dropout) - self.dec = Generator(inter_channels, resblock, resblock_kernel_sizes, resblock_dilation_sizes, upsample_rates, - upsample_initial_channel, upsample_kernel_sizes, gin_channels=gin_channels) - self.enc_q = PosteriorEncoder(spec_channels, inter_channels, hidden_channels, 5, 1, 16, - gin_channels=gin_channels) - self.flow = ResidualCouplingBlock(inter_channels, hidden_channels, 5, 1, 4, gin_channels=gin_channels) - - if use_sdp: - self.dp = StochasticDurationPredictor(hidden_channels, 192, 3, 0.5, 4, gin_channels=gin_channels) - else: - self.dp = DurationPredictor(hidden_channels, 256, 3, 0.5, gin_channels=gin_channels) - - if n_speakers > 1: - self.emb_g = nn.Embedding(n_speakers, gin_channels) - - def forward(self, x, x_lengths, y, y_lengths, sid=None): - - x, m_p, logs_p, x_mask = self.enc_p(x, x_lengths) - if self.n_speakers > 1: - g = self.emb_g(sid).unsqueeze(-1) # [b, h, 1] - else: - g = None - - z, m_q, logs_q, y_mask = self.enc_q(y, y_lengths, g=g) - z_p = self.flow(z, y_mask, 
g=g) - - with torch.no_grad(): - # negative cross-entropy - s_p_sq_r = torch.exp(-2 * logs_p) # [b, d, t] - neg_cent1 = torch.sum(-0.5 * math.log(2 * math.pi) - logs_p, [1], keepdim=True) # [b, 1, t_s] - neg_cent2 = torch.matmul(-0.5 * (z_p ** 2).transpose(1, 2), - s_p_sq_r) # [b, t_t, d] x [b, d, t_s] = [b, t_t, t_s] - neg_cent3 = torch.matmul(z_p.transpose(1, 2), (m_p * s_p_sq_r)) # [b, t_t, d] x [b, d, t_s] = [b, t_t, t_s] - neg_cent4 = torch.sum(-0.5 * (m_p ** 2) * s_p_sq_r, [1], keepdim=True) # [b, 1, t_s] - neg_cent = neg_cent1 + neg_cent2 + neg_cent3 + neg_cent4 - - attn_mask = torch.unsqueeze(x_mask, 2) * torch.unsqueeze(y_mask, -1) - attn = monotonic_align.maximum_path(neg_cent, attn_mask.squeeze(1)).unsqueeze(1).detach() - - w = attn.sum(2) - if self.use_sdp: - l_length = self.dp(x, x_mask, w, g=g) - l_length = l_length / torch.sum(x_mask) - else: - logw_ = torch.log(w + 1e-6) * x_mask - logw = self.dp(x, x_mask, g=g) - l_length = torch.sum((logw - logw_) ** 2, [1, 2]) / torch.sum(x_mask) # for averaging - - # expand prior - m_p = torch.matmul(attn.squeeze(1), m_p.transpose(1, 2)).transpose(1, 2) - logs_p = torch.matmul(attn.squeeze(1), logs_p.transpose(1, 2)).transpose(1, 2) - - z_slice, ids_slice = commons.rand_slice_segments(z, y_lengths, self.segment_size) - o = self.dec(z_slice, g=g) - return o, l_length, attn, ids_slice, x_mask, y_mask, (z, z_p, m_p, logs_p, m_q, logs_q) - - def infer(self, x, x_lengths, sid=None, noise_scale=1, length_scale=1, noise_scale_w=1., max_len=None): - x, m_p, logs_p, x_mask = self.enc_p(x, x_lengths) - if self.n_speakers > 1: - g = self.emb_g(sid).unsqueeze(-1) # [b, h, 1] - else: - g = None - - if self.use_sdp: - logw = self.dp(x, x_mask, g=g, reverse=True, noise_scale=noise_scale_w) - else: - logw = self.dp(x, x_mask, g=g) - w = torch.exp(logw) * x_mask * length_scale - w_ceil = torch.ceil(w) - y_lengths = torch.clamp_min(torch.sum(w_ceil, [1, 2]), 1).long() - y_mask = torch.unsqueeze(commons.sequence_mask(y_lengths, None), 1).to(x_mask.dtype) - attn_mask = torch.unsqueeze(x_mask, 2) * torch.unsqueeze(y_mask, -1) - attn = commons.generate_path(w_ceil, attn_mask) - - m_p = torch.matmul(attn.squeeze(1), m_p.transpose(1, 2)).transpose(1, 2) # [b, t', t], [b, t, d] -> [b, d, t'] - logs_p = torch.matmul(attn.squeeze(1), logs_p.transpose(1, 2)).transpose(1, - 2) # [b, t', t], [b, t, d] -> [b, d, t'] - - z_p = m_p + torch.randn_like(m_p) * torch.exp(logs_p) * noise_scale - z = self.flow(z_p, y_mask, g=g, reverse=True) - o = self.dec((z * y_mask)[:, :, :max_len], g=g) - return o, attn, y_mask, (z, z_p, m_p, logs_p) - - def voice_conversion(self, y, y_lengths, sid_src, sid_tgt): - assert self.n_speakers > 1, "n_speakers have to be larger than 1." - g_src = self.emb_g(sid_src).unsqueeze(-1) - g_tgt = self.emb_g(sid_tgt).unsqueeze(-1) - z, m_q, logs_q, y_mask = self.enc_q(y, y_lengths, g=g_src) - z_p = self.flow(z, y_mask, g=g_src) - z_hat = self.flow(z_p, y_mask, g=g_tgt, reverse=True) - o_hat = self.dec(z_hat * y_mask, g=g_tgt) - return o_hat, y_mask, (z, z_p, z_hat) diff --git a/spaces/stomexserde/gpt4-ui/Examples/Can I Download Netflix Movies To Watch Offline On A Macbook Pro.md b/spaces/stomexserde/gpt4-ui/Examples/Can I Download Netflix Movies To Watch Offline On A Macbook Pro.md deleted file mode 100644 index aea28e10b92f428f6fb18fbe1b70273ff116b0c5..0000000000000000000000000000000000000000 --- a/spaces/stomexserde/gpt4-ui/Examples/Can I Download Netflix Movies To Watch Offline On A Macbook Pro.md +++ /dev/null @@ -1,24 +0,0 @@ - -

      Can I Download Netflix Movies To Watch Offline On A Macbook Pro?

      -

      If you are a Netflix subscriber and you own a Macbook Pro, you might be wondering if you can download Netflix movies to watch offline on your device. After all, downloading Netflix content can be very convenient when you are traveling, have a slow internet connection, or want to save data. Unfortunately, the answer is not so simple.

      -

      Can I Download Netflix Movies To Watch Offline On A Macbook Pro


      DOWNLOAD 🌟 https://urlgoal.com/2uI7Et



      -

      Netflix does not have a native app for macOS, which means you cannot use the official Netflix app to download titles to watch offline on a Macbook Pro. You can only stream Netflix content on a Macbook Pro using a web browser such as Safari, Chrome, or Microsoft Edge. However, these browsers do not support offline playback of Netflix content either.

      -

      So, is there any way to watch Netflix offline on a Macbook Pro? Well, there are some workarounds that you can try, but they are not very straightforward or convenient. Here are some of the options you have:

      -
        -
• Use virtual machine (VM) software such as Parallels to run Windows on your Macbook Pro and access the Windows Netflix app. The Windows Netflix app allows you to download Netflix content to watch offline on your device. However, this method requires you to install and run another operating system on your Macbook Pro, which can be costly, complicated, and time-consuming.
      • -
      • Use an iPhone or iPad with the Netflix app to download Netflix content to watch offline on your device. Then, use AirPlay to mirror your iOS device screen to your Macbook Pro using an app such as AirServer or Reflector. However, this method requires you to have another device with enough storage space and battery life to download and stream Netflix content. Also, the video quality and audio sync might not be optimal when using AirPlay.
      • -
      -

      As you can see, neither of these methods is very easy or ideal for watching Netflix offline on a Macbook Pro. Hopefully, Netflix will release a native app for macOS in the future that will enable offline playback of Netflix content on Mac devices. Until then, you might have to settle for streaming Netflix online or using one of the workarounds mentioned above.

      -

      - -

      Why would you want to watch Netflix offline on a Macbook Pro? Well, there are many benefits of downloading Netflix content to watch offline, such as:

      -
        -
      • You can watch your favorite shows and movies anytime, anywhere, without relying on an internet connection.
      • -
      • You can save data and avoid buffering issues when you are on a slow or unstable network.
      • -
      • You can enjoy high-quality video and audio without compromising on the resolution or sound quality.
      • -
      • You can manage your downloads and storage space according to your preferences and needs.
      • -
      • You can avoid spoilers and stay updated with the latest releases on Netflix.
      • -
      -

      Watching Netflix offline on a Macbook Pro can enhance your viewing experience and give you more flexibility and convenience. However, as we have seen, it is not an easy task to accomplish. Hopefully, Netflix will make it easier for Mac users to download and watch Netflix content offline in the future.

      -
      -
      \ No newline at end of file diff --git a/spaces/stomexserde/gpt4-ui/Examples/Haider [2014-MP3-VBR-320Kbps] ? MN !!TOP!!.md b/spaces/stomexserde/gpt4-ui/Examples/Haider [2014-MP3-VBR-320Kbps] ? MN !!TOP!!.md deleted file mode 100644 index ce853559e2039244e2dc6295d269985ac9b627d9..0000000000000000000000000000000000000000 --- a/spaces/stomexserde/gpt4-ui/Examples/Haider [2014-MP3-VBR-320Kbps] ? MN !!TOP!!.md +++ /dev/null @@ -1,20 +0,0 @@ -
      -

      Haider: A Musical Masterpiece by Vishal Bhardwaj

      -

      Haider is a 2014 Indian Hindi-language crime drama film directed by Vishal Bhardwaj, who also composed the music for the film. The film is based on William Shakespeare's tragedy Hamlet, and is set in Kashmir during the 1990s. The film stars Shahid Kapoor as Haider, a young man who returns to his hometown to find out what happened to his father, who disappeared after being arrested by the Indian army. The film also features Tabu, Shraddha Kapoor, Kay Kay Menon and Irrfan Khan in supporting roles.

      -

      The soundtrack of Haider consists of nine songs, written by Gulzar and sung by various artists such as Arijit Singh, Rekha Bhardwaj, Sukhwinder Singh and Vishal Bhardwaj himself. The songs range from classical to rock, and reflect the mood and theme of the film. The album was released by Junglee Music on 18 September 2014, and received critical acclaim from music critics and listeners alike. The album was also nominated for several awards, including the Filmfare Award for Best Music Director.

      -

      Haider [2014-MP3-VBR-320Kbps] – MN


      Download ○○○ https://urlgoal.com/2uI8sg



      -

      Some of the popular songs from the album are:

      -
        -
      • Aao Na: A rock song that expresses Haider's anger and frustration at the situation in Kashmir. The song is sung by Vishal Dadlani and features electric guitar riffs and drums.
      • -
      • Bismil: A qawwali song that narrates the story of Haider's father's murder by his uncle. The song is sung by Sukhwinder Singh and features a chorus of singers and traditional instruments.
      • -
      • Gulon Mein Rang Bhare: A ghazal song that is a tribute to the late poet Faiz Ahmed Faiz, whose poem of the same name is used as the lyrics. The song is sung by Arijit Singh and features a soothing melody and orchestration.
      • -
      • Jhelum: An instrumental song that depicts the river Jhelum, which flows through Kashmir. The song is composed by Vishal Bhardwaj and features a haunting cello solo.
      • -
      -

      The album of Haider is available for download in MP3 format with VBR 320Kbps quality from various websites such as PagalWorld[^1^] [^2^], Google Drive[^3^], Telegram[^4^] and Fancli[^5^]. The album is also available for streaming on platforms such as YouTube, Spotify, Gaana and JioSaavn.

      - -

Haider is not only a musical masterpiece, but also a cinematic one. It received widespread praise from critics and audiences for its direction, screenplay, cinematography, editing, acting and themes. The film explores the complex issues of identity, politics, morality and justice against the backdrop of the Kashmir conflict, and draws parallels between the characters and situations of Hamlet and the contemporary reality of Kashmir. It is considered one of the best adaptations of Shakespeare's works, and one of the finest films of Indian cinema.

      -

The film was also a commercial success, grossing over ₹69 crore worldwide against a budget of ₹37 crore. It won five National Film Awards, including Best Music Direction, Best Dialogue and Best Male Playback Singer, and five Filmfare Awards, including Best Actor, Best Actress and Best Screenplay. It was also screened at various international film festivals, such as the Busan International Film Festival, the Rome Film Festival and the Toronto International Film Festival.

      -

Haider is a film that deserves to be watched and listened to by everyone who appreciates good cinema and music. It is a rare example of how art can transcend boundaries and touch the hearts and minds of people, and a testament to the talent and vision of Vishal Bhardwaj, who has created a musical masterpiece that will be remembered for years to come.

      -

      -
      -
\ No newline at end of file diff --git a/spaces/stomexserde/gpt4-ui/Examples/Kal Ho Na Ho 720p Mkv 68 !EXCLUSIVE!.md b/spaces/stomexserde/gpt4-ui/Examples/Kal Ho Na Ho 720p Mkv 68 !EXCLUSIVE!.md deleted file mode 100644 index df99f4b073ee1e5940cbbbd666e3c426915301c4..0000000000000000000000000000000000000000 --- a/spaces/stomexserde/gpt4-ui/Examples/Kal Ho Na Ho 720p Mkv 68 !EXCLUSIVE!.md +++ /dev/null @@ -1,25 +0,0 @@ - -

      Kal Ho Na Ho 720p Mkv 68: How to Download and Watch the Bollywood Classic

      -

      Kal Ho Na Ho is a 2003 romantic comedy-drama film starring Shah Rukh Khan, Preity Zinta and Saif Ali Khan. The film tells the story of Naina, a pessimistic MBA student who falls in love with her neighbor Aman, who has a secret that will change their lives forever. Kal Ho Na Ho was a critical and commercial success, winning several awards and becoming one of the highest-grossing Bollywood films of all time.

      -

      Kal Ho Na Ho 720p Mkv 68


      Download File ✓✓✓ https://urlgoal.com/2uI6rc



      -

If you are a fan of Kal Ho Na Ho or want to watch it for the first time, you might be wondering how to download and watch it in high quality. One option is to look for a file named Kal Ho Na Ho 720p Mkv 68. This combination offers a good balance between video resolution, file size and compatibility. In this article, we will explain what Kal Ho Na Ho 720p Mkv 68 is and how to find and download it safely and legally.

      -

      What is Kal Ho Na Ho 720p Mkv 68?

      -

      Kal Ho Na Ho 720p Mkv 68 is a file name that indicates the following characteristics of the video file:

      -
        -
      • Kal Ho Na Ho: The name of the movie.
      • -
      • 720p: The video resolution, which is 1280 x 720 pixels. This is considered high-definition (HD) quality, which offers clear and sharp images.
      • -
      • Mkv: The file extension, which stands for Matroska Video. This is a container format that can store multiple audio and video tracks, subtitles and metadata. Mkv files are compatible with many media players and devices.
      • -
• 68: The file size in megabytes (MB). This is a relatively small file size for an HD movie, which means it downloads faster and takes up less storage space.
      • -
      -

Therefore, Kal Ho Na Ho 720p Mkv 68 is an HD video file of the movie Kal Ho Na Ho that has a small file size and can be played on various devices.
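As a rough illustration of the naming convention described above, the sketch below shows how such a name could be split into its parts in Python. The parse_release_name helper and its pattern are assumptions made up for this article, not part of any real downloader or library:

```python
import re

# Hypothetical helper (assumption for illustration): split a release-style
# file name such as "Kal Ho Na Ho 720p Mkv 68" into movie title, video
# resolution, container format and file size in megabytes.
def parse_release_name(name: str):
    pattern = (r"^(?P<title>.+?)\s+"           # movie title (lazy match)
               r"(?P<resolution>\d{3,4}p)\s+"  # e.g. 720p or 1080p
               r"(?P<container>\w+)\s+"        # e.g. Mkv, Mp4
               r"(?P<size_mb>\d+)$")           # size in MB
    match = re.match(pattern, name, flags=re.IGNORECASE)
    if match is None:
        return None
    info = match.groupdict()
    info["size_mb"] = int(info["size_mb"])
    return info

print(parse_release_name("Kal Ho Na Ho 720p Mkv 68"))
# {'title': 'Kal Ho Na Ho', 'resolution': '720p', 'container': 'Mkv', 'size_mb': 68}
```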

      -

      How to find and download Kal Ho Na Ho 720p Mkv 68?

      -

There are many websites that offer Kal Ho Na Ho 720p Mkv 68 for download, but not all of them are safe and legal. Some of them may contain malware, viruses or spyware that can harm your device or compromise your privacy. Some of them may also violate copyright law and infringe on the rights of the creators and distributors of the movie.

      -

      To avoid these risks, you should only download Kal Ho Na Ho 720p Mkv 68 from trusted and authorized sources. One of them is Amazon Prime Video, which allows you to rent or buy the movie in HD quality and download it to your device for offline viewing. You can also stream it online if you have a stable internet connection. Amazon Prime Video offers a free trial for new users, so you can try it out before committing to a subscription.

      -

      -

      Another option is to use a VPN service that can mask your IP address and location and allow you to access geo-restricted content. For example, you can use a VPN to connect to an Indian server and access Netflix India, which has Kal Ho Na Ho available for streaming in HD quality. You can also download it to your device using the Netflix app if you have enough storage space. However, you should be aware that using a VPN may violate the terms of service of some streaming platforms and may result in account suspension or termination.

      -

      Conclusion

      -

      Kal Ho Na Ho is a Bollywood classic that you can enjoy in HD quality by downloading or streaming it online. However, you should be careful about where you get Kal Ho Na Ho 720p Mkv 68 from and avoid illegal or unsafe sources. Instead, you should opt for legitimate and secure platforms like Amazon Prime Video or Netflix India with a VPN service. This way, you can watch Kal Ho Na Ho without any hassle or worry.

      -
      -
      \ No newline at end of file diff --git a/spaces/suancaixianyu/Real-CUGAN/app.py b/spaces/suancaixianyu/Real-CUGAN/app.py deleted file mode 100644 index 2439c5cec6b61e8a517f957daf710cbb6b5c3cf6..0000000000000000000000000000000000000000 --- a/spaces/suancaixianyu/Real-CUGAN/app.py +++ /dev/null @@ -1,62 +0,0 @@ -from upcunet_v3 import RealWaifuUpScaler -import gradio as gr -import time -import logging -import os -from PIL import ImageOps -import numpy as np -import math - - -def greet(input_img, input_model_name, input_tile_mode): - # if input_img.size[0] * input_img.size[1] > 256 * 256: - # y = int(math.sqrt(256*256/input_img.size[0]*input_img.size[1])) - # x = int(input_img.size[0]/input_img.size[1]*y) - # input_img = ImageOps.fit(input_img, (x, y)) - input_img = np.array(input_img) - if input_model_name not in model_cache: - t1 = time.time() - upscaler = RealWaifuUpScaler(input_model_name[2], ModelPath + input_model_name, half=False, device="cpu") - t2 = time.time() - logger.info(f'load model time, {t2 - t1}') - model_cache[input_model_name] = upscaler - else: - upscaler = model_cache[input_model_name] - logger.info(f'load model from cache') - - start = time.time() - result = upscaler(input_img, tile_mode=input_tile_mode) - end = time.time() - logger.info(f'input_model_name, {input_model_name}') - logger.info(f'input_tile_mode, {input_tile_mode}') - logger.info(f'input shape, {input_img.shape}') - logger.info(f'output shape, {result.shape}') - logger.info(f'speed time, {end - start}') - return result - - -if __name__ == '__main__': - logging.basicConfig(level=logging.INFO, format="[%(asctime)s] [%(process)d] [%(levelname)s] %(message)s") - logger = logging.getLogger() - - ModelPath = "weights_v3/" - model_cache = {} - - input_model_name = gr.inputs.Dropdown(os.listdir(ModelPath), default="up2x-latest-denoise2x.pth", label='选择model') - input_tile_mode = gr.inputs.Dropdown([0, 1, 2, 3, 4], default=2, label='选择tile_mode') - input_img = gr.inputs.Image(label='image', type='pil') - - inputs = [input_img, input_model_name, input_tile_mode] - outputs = "image" - iface = gr.Interface(fn=greet, - inputs=inputs, - outputs=outputs, - allow_screenshot=False, - allow_flagging='never', - examples=[['test-img.jpg', "up2x-latest-denoise2x.pth", 2]], - article='[https://github.com/bilibili/ailab/tree/main/Real-CUGAN](https://github.com/bilibili/ailab/tree/main/Real-CUGAN)
' - '感谢b站开源的项目,图片过大会导致内存不足,所以我将图片裁剪小,想体验大图片的效果请自行前往上面的链接。
      ' - '修改bbb' - 'The large image will lead to memory limit exceeded. So I crop and resize image. ' - 'If you want to experience the large image, please go to the link above.') - iface.launch() diff --git a/spaces/subatomicseer/2022-AdaIN-pytorch-Demo/README.md b/spaces/subatomicseer/2022-AdaIN-pytorch-Demo/README.md deleted file mode 100644 index 9712875f6a2edee813cb409c360b40311ff163e2..0000000000000000000000000000000000000000 --- a/spaces/subatomicseer/2022-AdaIN-pytorch-Demo/README.md +++ /dev/null @@ -1,13 +0,0 @@ ---- -title: AdaIN -emoji: 📚 -colorFrom: red -colorTo: indigo -sdk: streamlit -sdk_version: 1.2.0 -app_file: app.py -pinned: false -license: other ---- - -Check out the configuration reference at https://huggingface.co/docs/hub/spaces#reference diff --git "a/spaces/suchun/chatGPT_acdemic/crazy_functions/\346\200\273\347\273\223word\346\226\207\346\241\243.py" "b/spaces/suchun/chatGPT_acdemic/crazy_functions/\346\200\273\347\273\223word\346\226\207\346\241\243.py" deleted file mode 100644 index f1fe20171cc54aec0c79f4961e71b57845f252d5..0000000000000000000000000000000000000000 --- "a/spaces/suchun/chatGPT_acdemic/crazy_functions/\346\200\273\347\273\223word\346\226\207\346\241\243.py" +++ /dev/null @@ -1,127 +0,0 @@ -from toolbox import update_ui -from toolbox import CatchException, report_execption, write_results_to_file -from .crazy_utils import request_gpt_model_in_new_thread_with_ui_alive -fast_debug = False - - -def 解析docx(file_manifest, project_folder, llm_kwargs, plugin_kwargs, chatbot, history, system_prompt): - import time, os - # pip install python-docx 用于docx格式,跨平台 - # pip install pywin32 用于doc格式,仅支持Win平台 - for index, fp in enumerate(file_manifest): - if fp.split(".")[-1] == "docx": - from docx import Document - doc = Document(fp) - file_content = "\n".join([para.text for para in doc.paragraphs]) - else: - import win32com.client - word = win32com.client.Dispatch("Word.Application") - word.visible = False - # 打开文件 - print('fp', os.getcwd()) - doc = word.Documents.Open(os.getcwd() + '/' + fp) - # file_content = doc.Content.Text - doc = word.ActiveDocument - file_content = doc.Range().Text - doc.Close() - word.Quit() - - print(file_content) - # private_upload里面的文件名在解压zip后容易出现乱码(rar和7z格式正常),故可以只分析文章内容,不输入文件名 - from .crazy_utils import breakdown_txt_to_satisfy_token_limit_for_pdf - from request_llm.bridge_all import model_info - max_token = model_info[llm_kwargs['llm_model']]['max_token'] - TOKEN_LIMIT_PER_FRAGMENT = max_token * 3 // 4 - paper_fragments = breakdown_txt_to_satisfy_token_limit_for_pdf( - txt=file_content, - get_token_fn=model_info[llm_kwargs['llm_model']]['token_cnt'], - limit=TOKEN_LIMIT_PER_FRAGMENT - ) - this_paper_history = [] - for i, paper_frag in enumerate(paper_fragments): - i_say = f'请对下面的文章片段用中文做概述,文件名是{os.path.relpath(fp, project_folder)},文章内容是 ```{paper_frag}```' - i_say_show_user = f'请对下面的文章片段做概述: {os.path.abspath(fp)}的第{i+1}/{len(paper_fragments)}个片段。' - gpt_say = yield from request_gpt_model_in_new_thread_with_ui_alive( - inputs=i_say, - inputs_show_user=i_say_show_user, - llm_kwargs=llm_kwargs, - chatbot=chatbot, - history=[], - sys_prompt="总结文章。" - ) - - chatbot[-1] = (i_say_show_user, gpt_say) - history.extend([i_say_show_user,gpt_say]) - this_paper_history.extend([i_say_show_user,gpt_say]) - - # 已经对该文章的所有片段总结完毕,如果文章被切分了, - if len(paper_fragments) > 1: - i_say = f"根据以上的对话,总结文章{os.path.abspath(fp)}的主要内容。" - gpt_say = yield from request_gpt_model_in_new_thread_with_ui_alive( - inputs=i_say, - inputs_show_user=i_say, - 
llm_kwargs=llm_kwargs, - chatbot=chatbot, - history=this_paper_history, - sys_prompt="总结文章。" - ) - - history.extend([i_say,gpt_say]) - this_paper_history.extend([i_say,gpt_say]) - - res = write_results_to_file(history) - chatbot.append(("完成了吗?", res)) - yield from update_ui(chatbot=chatbot, history=history) # 刷新界面 - - res = write_results_to_file(history) - chatbot.append(("所有文件都总结完成了吗?", res)) - yield from update_ui(chatbot=chatbot, history=history) # 刷新界面 - - -@CatchException -def 总结word文档(txt, llm_kwargs, plugin_kwargs, chatbot, history, system_prompt, web_port): - import glob, os - - # 基本信息:功能、贡献者 - chatbot.append([ - "函数插件功能?", - "批量总结Word文档。函数插件贡献者: JasonGuo1"]) - yield from update_ui(chatbot=chatbot, history=history) # 刷新界面 - - # 尝试导入依赖,如果缺少依赖,则给出安装建议 - try: - from docx import Document - except: - report_execption(chatbot, history, - a=f"解析项目: {txt}", - b=f"导入软件依赖失败。使用该模块需要额外依赖,安装方法```pip install --upgrade python-docx pywin32```。") - yield from update_ui(chatbot=chatbot, history=history) # 刷新界面 - return - - # 清空历史,以免输入溢出 - history = [] - - # 检测输入参数,如没有给定输入参数,直接退出 - if os.path.exists(txt): - project_folder = txt - else: - if txt == "": txt = '空空如也的输入栏' - report_execption(chatbot, history, a=f"解析项目: {txt}", b=f"找不到本地项目或无权访问: {txt}") - yield from update_ui(chatbot=chatbot, history=history) # 刷新界面 - return - - # 搜索需要处理的文件清单 - if txt.endswith('.docx') or txt.endswith('.doc'): - file_manifest = [txt] - else: - file_manifest = [f for f in glob.glob(f'{project_folder}/**/*.docx', recursive=True)] + \ - [f for f in glob.glob(f'{project_folder}/**/*.doc', recursive=True)] - - # 如果没找到任何文件 - if len(file_manifest) == 0: - report_execption(chatbot, history, a=f"解析项目: {txt}", b=f"找不到任何.docx或doc文件: {txt}") - yield from update_ui(chatbot=chatbot, history=history) # 刷新界面 - return - - # 开始正式执行任务 - yield from 解析docx(file_manifest, project_folder, llm_kwargs, plugin_kwargs, chatbot, history, system_prompt) diff --git a/spaces/suchun/chatGPT_acdemic/docs/README_FR.md b/spaces/suchun/chatGPT_acdemic/docs/README_FR.md deleted file mode 100644 index f21e90035ef2ddea91382155e0ad46b6740f5322..0000000000000000000000000000000000000000 --- a/spaces/suchun/chatGPT_acdemic/docs/README_FR.md +++ /dev/null @@ -1,296 +0,0 @@ -> **Note** -> -> Ce fichier README est généré automatiquement par le plugin de traduction markdown de ce projet et n'est peut - être pas correct à 100%. -> - -# ChatGPT Optimisation Académique - -**Si vous aimez ce projet, donnez-lui une étoile; si vous avez inventé des raccourcis académiques plus utiles ou des plugins fonctionnels, n'hésitez pas à ouvrir une demande ou une demande de traction. Nous avons également un fichier README en [anglais|](docs/README_EN.md)[japonais|](docs/README_JP.md)[russe|](docs/README_RS.md)[français](docs/README_FR.md) traduit par ce projet lui-même.** - -> **Note** -> -> 1. Veuillez noter que seuls les plugins de fonction signalés en **rouge** sont capables de lire les fichiers, certains plugins se trouvent dans le **menu déroulant** de la section plugin. Nous sommes également les bienvenus avec la plus haute priorité pour traiter et accepter tout nouveau PR de plugin! -> -> 2. Chaque fichier dans ce projet est expliqué en détail dans l'auto-analyse [self_analysis.md](https://github.com/binary-husky/chatgpt_academic/wiki/chatgpt-academic%E9%A1%B9%E7%9B%AE%E8%87%AA%E8%AF%91%E8%A7%A3%E6%8A%A5%E5%91%8A). 
Avec l'itération des versions, vous pouvez également cliquer sur les plugins fonctionnels pertinents pour appeler GPT et générer un rapport d'auto-analyse projet mis à jour. Les questions fréquemment posées sont résumées dans le [wiki](https://github.com/binary-husky/chatgpt_academic/wiki/%E5%B8%B8%E8%A7%81%E9%97%AE%E9%A2%98). -> - -
      - -Fonctionnalité | Description ---- | --- -Polissage en un clic | Prend en charge la correction en un clic et la recherche d'erreurs de syntaxe dans les documents de recherche. -Traduction Chinois-Anglais en un clic | Une touche pour traduire la partie chinoise en anglais ou celle anglaise en chinois. -Explication de code en un clic | Affiche et explique correctement le code. -[Raccourcis clavier personnalisables](https://www.bilibili.com/video/BV14s4y1E7jN) | Prend en charge les raccourcis clavier personnalisables. -[Configuration du serveur proxy](https://www.bilibili.com/video/BV1rc411W7Dr) | Prend en charge la configuration du serveur proxy. -Conception modulaire | Prend en charge la personnalisation des plugins de fonctions et des [plugins] de fonctions hiérarchiques personnalisés, et les plugins prennent en charge [la mise à jour à chaud](https://github.com/binary-husky/chatgpt_academic/wiki/%E5%87%BD%E6%95%B0%E6%8F%92%E4%BB%B6%E6%8C%87%E5%8D%97). -[Auto-analyse du programme](https://www.bilibili.com/video/BV1cj411A7VW) | [Plugins] [Lire en un clic](https://github.com/binary-husky/chatgpt_academic/wiki/chatgpt-academic%E9%A1%B9%E7%9B%AE%E8%87%AA%E8%AF%91%E8%A7%A3%E6%8A%A5%E5%91%8A) le code source de ce projet. -[Analyse de programme](https://www.bilibili.com/video/BV1cj411A7VW) | [Plugins] En un clic, les projets Python/C/C++/Java/Lua/... peuvent être analysés. -Lire le document de recherche | [Plugins] Lisez le résumé de l'article en latex et générer un résumé. -Traduction et polissage de l'article complet en LaTeX | [Plugins] Une touche pour traduire ou corriger en LaTeX -Génération Commentaire de fonction en vrac | [Plugins] Lisez en un clic les fonctions et générez des commentaires de fonction. -Rapport d'analyse automatique des chats générés | [Plugins] Génère un rapport de synthèse après l'exécution. -[Assistant arxiv](https://www.bilibili.com/video/BV1LM4y1279X) | [Plugins] Entrez l'url de l'article arxiv pour traduire le résumé + télécharger le PDF en un clic -[Traduction complète des articles PDF](https://www.bilibili.com/video/BV1KT411x7Wn) | [Plugins] Extraire le titre et le résumé de l'article PDF + Traduire le texte entier (multithread) -[Aide à la recherche Google Academ](https://www.bilibili.com/video/BV19L411U7ia) | [Plugins] Donnez à GPT l'URL de n'importe quelle page de recherche Google Academ pour vous aider à sélectionner des articles intéressants -Affichage de formules/images/tableaux | Afficher la forme traduite et rendue d'une formule en même temps, plusieurs formules et surlignage du code prend en charge -Prise en charge des plugins multithread | Prise en charge de l'appel multithread de chatgpt, traitement en masse de texte ou de programmes en un clic -Activer le thème Gradio sombre [theme](https://github.com/binary-husky/chatgpt_academic/issues/173) au démarrage | Ajoutez ```/?__dark-theme=true``` à l'URL du navigateur pour basculer vers le thème sombre -[Prise en charge de plusieurs modèles LLM](https://www.bilibili.com/video/BV1wT411p7yf), [prise en charge de l'interface API2D](https://api2d.com/) | Comment cela serait-il de se faire servir par GPT3.5, GPT4 et la [ChatGLM de Tsinghua](https://github.com/THUDM/ChatGLM-6B) en même temps? -Expérience en ligne d'huggingface sans science | Après vous être connecté à huggingface, copiez [cet espace](https://huggingface.co/spaces/qingxu98/gpt-academic) -... | ... - -
      - - -Vous êtes un traducteur professionnel d'articles universitaires en français. - -Ceci est un fichier Markdown, veuillez le traduire en français sans modifier les commandes Markdown existantes : - -- Nouvelle interface (modifiable en modifiant l'option de mise en page dans config.py pour basculer entre les mises en page gauche-droite et haut-bas) -
      - -
      - - -- Tous les boutons sont générés dynamiquement en lisant functional.py, les utilisateurs peuvent ajouter librement des fonctions personnalisées pour libérer le presse-papiers. -
      - -
      - -- Correction/amélioration -
      - -
      - -- Si la sortie contient des formules, elles seront affichées simultanément sous forme de de texte brut et de forme rendue pour faciliter la copie et la lecture. -
      - -
      - -- Pas envie de lire le code du projet ? Faites votre propre démo avec ChatGPT. -
      - -
      - -- Utilisation combinée de plusieurs modèles de langage sophistiqués (ChatGLM + OpenAI-GPT3.5 + [API2D](https://api2d.com/)-GPT4) -
      - -
      - -Utilisation combinée de plusieurs modèles de langage sophistiqués en version de test [huggingface](https://huggingface.co/spaces/qingxu98/academic-chatgpt-beta) (la version huggingface ne prend pas en charge Chatglm). - - ---- - -## Installation - Méthode 1 : Exécution directe (Windows, Linux or MacOS) - -1. Téléchargez le projet -```sh -git clone https://github.com/binary-husky/chatgpt_academic.git -cd chatgpt_academic -``` - -2. Configuration de l'API_KEY et des paramètres de proxy - -Dans `config.py`, configurez les paramètres de proxy et de clé d'API OpenAI, comme indiqué ci-dessous -``` -1. Si vous êtes en Chine, vous devez configurer un proxy étranger pour utiliser l'API OpenAI en toute transparence. Pour ce faire, veuillez lire attentivement le fichier config.py (1. Modifiez l'option USE_PROXY ; 2. Modifiez les paramètres de proxies comme indiqué dans les instructions). -2. Configurez votre clé API OpenAI. Vous devez vous inscrire sur le site web d'OpenAI pour obtenir une clé API. Une fois que vous avez votre clé API, vous pouvez la configurer dans le fichier config.py. -3. Tous les problèmes liés aux réseaux de proxy (temps d'attente, non-fonctionnement des proxies) sont résumés dans https://github.com/binary-husky/chatgpt_academic/issues/1. -``` -(Remarque : le programme vérifie d'abord s'il existe un fichier de configuration privé nommé `config_private.py`, et utilise les configurations de celui-ci à la place de celles du fichier `config.py`. Par conséquent, si vous comprenez notre logique de lecture de configuration, nous vous recommandons fortement de créer un nouveau fichier de configuration nommé `config_private.py` à côté de `config.py` et de transférer (copier) les configurations de celui-ci dans `config_private.py`. `config_private.py` n'est pas contrôlé par git et rend vos informations personnelles plus sûres.) - -3. Installation des dépendances -```sh -# (Option 1) Recommandé -python -m pip install -r requirements.txt - -# (Option 2) Si vous utilisez anaconda, les étapes sont similaires : -# (Option 2.1) conda create -n gptac_venv python=3.11 -# (Option 2.2) conda activate gptac_venv -# (Option 2.3) python -m pip install -r requirements.txt - -# note : Utilisez la source pip officielle ou la source pip Alibaba. D'autres sources (comme celles des universités) pourraient poser problème. Pour utiliser temporairement une autre source, utilisez : -# python -m pip install -r requirements.txt -i https://mirrors.aliyun.com/pypi/simple/ -``` - -Si vous avez besoin de soutenir ChatGLM de Tsinghua, vous devez installer plus de dépendances (si vous n'êtes pas familier avec Python ou que votre ordinateur n'est pas assez performant, nous vous recommandons de ne pas essayer) : -```sh -python -m pip install -r request_llm/requirements_chatglm.txt -``` - -4. Exécution -```sh -python main.py -``` - -5. Tester les plugins de fonctions -``` -- Test Python Project Analysis - Dans la zone de saisie, entrez `./crazy_functions/test_project/python/dqn`, puis cliquez sur "Parse Entire Python Project" -- Test d'auto-lecture du code - Cliquez sur "[Démo multi-thread] Parser ce projet lui-même (auto-traduction de la source)" -- Test du modèle de fonctionnalité expérimentale (exige une réponse de l'IA à ce qui est arrivé aujourd'hui dans l'histoire). Vous pouvez utiliser cette fonctionnalité comme modèle pour des fonctions plus complexes. 
- Cliquez sur "[Démo modèle de plugin de fonction] Histoire du Jour" -- Le menu déroulant de la zone de plugin de fonctionnalité contient plus de fonctionnalités à sélectionner. -``` - -## Installation - Méthode 2 : Utilisation de docker (Linux) - - -Vous êtes un traducteur professionnel d'articles académiques en français. - -1. ChatGPT seul (recommandé pour la plupart des gens) -``` sh -# Télécharger le projet -git clone https://github.com/binary-husky/chatgpt_academic.git -cd chatgpt_academic -# Configurer le proxy outre-mer et la clé API OpenAI -Modifier le fichier config.py avec n'importe quel éditeur de texte -# Installer -docker build -t gpt-academic . -# Exécuter -docker run --rm -it --net=host gpt-academic - -# Tester les modules de fonction -## Tester la fonction modèle des modules (requiert la réponse de GPT à "qu'est-ce qui s'est passé dans l'histoire aujourd'hui ?"), vous pouvez utiliser cette fonction en tant que modèle pour implémenter des fonctions plus complexes. -Cliquez sur "[Exemple de modèle de module] Histoire d'aujourd'hui" -## Tester le résumé écrit pour le projet LaTeX -Dans la zone de saisie, tapez ./crazy_functions/test_project/latex/attention, puis cliquez sur "Lire le résumé de l'article de recherche LaTeX" -## Tester l'analyse du projet Python -Dans la zone de saisie, tapez ./crazy_functions/test_project/python/dqn, puis cliquez sur "Analyser l'ensemble du projet Python" - -D'autres fonctions sont disponibles dans la liste déroulante des modules de fonction. -``` - -2. ChatGPT+ChatGLM (nécessite une grande connaissance de docker et une configuration informatique suffisamment puissante) -``` sh -# Modifier le dockerfile -cd docs && nano Dockerfile+ChatGLM -# Comment construire | 如何构建 (Dockerfile+ChatGLM在docs路径下,请先cd docs) -docker build -t gpt-academic --network=host -f Dockerfile+ChatGLM . -# Comment exécuter | 如何运行 (1) Directement exécuter : -docker run --rm -it --net=host --gpus=all gpt-academic -# Comment exécuter | 如何运行 (2) Je veux effectuer quelques ajustements dans le conteneur avant de lancer : -docker run --rm -it --net=host --gpus=all gpt-academic bash -``` - -## Installation - Méthode 3 : Autres méthodes de déploiement - -1. Déploiement sur un cloud serveur distant -Veuillez consulter le [wiki de déploiement-1](https://github.com/binary-husky/chatgpt_academic/wiki/%E4%BA%91%E6%9C%8D%E5%8A%A1%E5%99%A8%E8%BF%9C%E7%A8%8B%E9%83%A8%E7%BD%B2%E6%8C%87%E5%8D%97) - -2. Utilisation de WSL2 (Windows Subsystem for Linux) -Veuillez consulter le [wiki de déploiement-2](https://github.com/binary-husky/chatgpt_academic/wiki/%E4%BD%BF%E7%94%A8WSL2%EF%BC%88Windows-Subsystem-for-Linux-%E5%AD%90%E7%B3%BB%E7%BB%9F%EF%BC%89%E9%83%A8%E7%BD%B2) - - -## Configuration de la procuration de l'installation -### Méthode 1 : Méthode conventionnelle -[Configuration de la procuration](https://github.com/binary-husky/chatgpt_academic/issues/1) - -### Méthode 2 : Tutoriel pour débutant pur -[Tutoriel pour débutant pur](https://github.com/binary-husky/chatgpt_academic/wiki/%E4%BB%A3%E7%90%86%E8%BD%AF%E4%BB%B6%E9%97%AE%E9%A2%98%E7%9A%84%E6%96%B0%E6%89%8B%E8%A7%A3%E5%86%B3%E6%96%B9%E6%B3%95%EF%BC%88%E6%96%B9%E6%B3%95%E5%8F%AA%E9%80%82%E7%94%A8%E4%BA%8E%E6%96%B0%E6%89%8B%EF%BC%89) - - ---- - -## Personnalisation des nouveaux boutons pratiques (personnalisation des raccourcis académiques) -Ouvrez le fichier `core_functional.py` avec n'importe quel éditeur de texte, ajoutez les éléments suivants, puis redémarrez le programme. 
(Si le bouton a déjà été ajouté avec succès et est visible, le préfixe et le suffixe pris en charge peuvent être modifiés à chaud sans avoir besoin de redémarrer le programme.) -Par exemple: -``` -"Traduction Français-Chinois": { - # Préfixe, qui sera ajouté avant votre saisie. Par exemple, pour décrire votre demande, telle que la traduction, le débogage de code, l'amélioration, etc. - "Prefix": "Veuillez traduire le contenu ci-dessous en chinois, puis expliquer chaque terme propre mentionné dans un tableau Markdown :\n\n", - - # Suffixe, qui sera ajouté après votre saisie. Par exemple, en combinaison avec un préfixe, vous pouvez mettre le contenu de votre saisie entre guillemets. - "Suffix": "", -}, -``` - -
      - -
      - ---- - - -## Présentation de certaines fonctionnalités - -### Affichage des images: - -
      - -
      - - -### Si un programme peut comprendre et décomposer lui-même : - -
      - -
      - -
      - -
      - - -### Analyse de tout projet Python/Cpp quelconque : -
      - -
      - -
      - -
      - -### Lecture et résumé générés automatiquement pour les articles en Latex -
      - -
      - -### Génération de rapports automatique -
      - - - -
      - -### Conception de fonctionnalités modulaires -
      - - -
      - - -### Traduction de code source en anglais - -
      - -
      - -## À faire et planification de version : -- version 3.2+ (à faire) : Prise en charge de plus de paramètres d'interface de plugin de fonction -- version 3.1 : Prise en charge de l'interrogation simultanée de plusieurs modèles GPT ! Prise en charge de l'API2d, prise en charge de la répartition de charge de plusieurs clés API -- version 3.0 : Prise en charge de chatglm et d'autres petits llm -- version 2.6 : Réorganisation de la structure du plugin, amélioration de l'interactivité, ajout de plus de plugins -- version 2.5 : Mise à jour automatique, résolution du problème de dépassement de jeton et de texte trop long lors de la compilation du code source complet -- version 2.4 : (1) Ajout de la fonctionnalité de traduction intégrale de PDF ; (2) Ajout d'une fonctionnalité de changement de position de zone de saisie ; (3) Ajout d'une option de disposition verticale ; (4) Optimisation du plugin de fonction multi-thread. -- version 2.3 : Amélioration de l'interactivité multi-thread -- version 2.2 : Prise en charge du rechargement à chaud du plugin de fonction -- version 2.1 : Mise en page pliable -- version 2.0 : Introduction du plugin de fonction modulaire -- version 1.0 : Fonctionnalité de base - -## Références et apprentissage - -``` -De nombreux designs d'autres projets exceptionnels ont été utilisés pour référence dans le code, notamment : - -# Projet 1 : De nombreuses astuces ont été empruntées à ChuanhuChatGPT -https://github.com/GaiZhenbiao/ChuanhuChatGPT - -# Projet 2 : ChatGLM-6B de Tsinghua : -https://github.com/THUDM/ChatGLM-6B -``` - diff --git a/spaces/suppsumstagza/text-to-image-stable-diffusion-v1-5/scripts/Autodata 2013 Free !!TOP!! Download Full Version 125.md b/spaces/suppsumstagza/text-to-image-stable-diffusion-v1-5/scripts/Autodata 2013 Free !!TOP!! Download Full Version 125.md deleted file mode 100644 index 5dbafb93eea9a86fdb5b516dd7d9dfe1e6890c48..0000000000000000000000000000000000000000 --- a/spaces/suppsumstagza/text-to-image-stable-diffusion-v1-5/scripts/Autodata 2013 Free !!TOP!! Download Full Version 125.md +++ /dev/null @@ -1,11 +0,0 @@ -

      autodata 2013 free download full version 125


      Download Zip ★★★★★ https://cinurl.com/2uEZ9X



      -
-Autodata 2013 - software for car diagnostics. -Release year: 2013. Original title: Autodata 2013. Country: United Kingdom. Genre: Educational video. Duration: 00:32:15. Language: English. Translation: None. File size: 504.63 MB. -File information: Quality: DVDRip. Format: AVI. Video: XviD, 853x480 / 640x360, 29.970 fps, ~1300 kbps. Audio: MP3, 2 ch, 192 kbps, 48 kHz. -Download Autodata 2013 for free.
      -
      -
      -

      diff --git a/spaces/suppsumstagza/text-to-image-stable-diffusion-v1-5/scripts/DIABOLIC POKERSTARS HACK ACTIVATION CODErar.md b/spaces/suppsumstagza/text-to-image-stable-diffusion-v1-5/scripts/DIABOLIC POKERSTARS HACK ACTIVATION CODErar.md deleted file mode 100644 index 6ed0c960e19774a2b4587b321df576cac27340b8..0000000000000000000000000000000000000000 --- a/spaces/suppsumstagza/text-to-image-stable-diffusion-v1-5/scripts/DIABOLIC POKERSTARS HACK ACTIVATION CODErar.md +++ /dev/null @@ -1,6 +0,0 @@ - -


      -

      DIABOLIC POKERSTARS HACK ACTIVATION CODErar


      Download File > https://cinurl.com/2uEYDv



      -


      -
      -
      \ No newline at end of file diff --git a/spaces/suppsumstagza/text-to-image-stable-diffusion-v1-5/scripts/Rosetta Stone 3.4.7 Learn English ISO 1-5 Complete Serial Key Keygen EXCLUSIVE.md b/spaces/suppsumstagza/text-to-image-stable-diffusion-v1-5/scripts/Rosetta Stone 3.4.7 Learn English ISO 1-5 Complete Serial Key Keygen EXCLUSIVE.md deleted file mode 100644 index 5db8d8ca09908ad7f6edc65eb19fb28ce0d47a4d..0000000000000000000000000000000000000000 --- a/spaces/suppsumstagza/text-to-image-stable-diffusion-v1-5/scripts/Rosetta Stone 3.4.7 Learn English ISO 1-5 Complete Serial Key Keygen EXCLUSIVE.md +++ /dev/null @@ -1,9 +0,0 @@ -
      -

Rosetta Stone Learn English: Rosetta Stone and Duolingo teach English for free. That's right, there's no subscription fee for either version of Rosetta Stone. When you launch the app, you'll find two different versions of Rosetta Stone: the Standard (free) and the Premium, which offers additional features like voice recognition, a translation tool, text-to-speech voice messages and daily word challenges.

      -

Rosetta Stone Premium is aimed at serious language learners, and they're available to us thanks to the great people at Rosetta Stone! There's no difference when it comes to the language curriculum (they're both great and reliable).

      -

      Rosetta Stone 3.4.7 Learn English ISO 1-5 Complete Serial Key Keygen


      Download 🆗 https://cinurl.com/2uEXQA



      -

The first step is to create a profile. To do this, you select your country, language and level. Depending on which language you're learning, you can select your English level (Beginner, Intermediate, Advanced) and study units that are meant to work for you and your language level (Conversational, Vocabulary, Grammar). If you're unsure about your level, you can take the vocabulary or grammar quiz to figure it out.

      -

One of the highlights of The British Council Learn English Grammar App is the massive collection of lesson videos and audio. There are more than 4,000 videos, meaning you're sure to find a popular lesson to start with. You're also shown pronunciation guidelines in your native language, so you can pronounce new words correctly. Grammar lessons cover subjects like prepositions, tenses, pronouns, and much more. You might need to watch a couple of grammar lessons before you can take the quizzes.

      -

There are definitely some quirks about the app. If you're not playing one lesson a day, there isn't much point in working on it; you need to play it every day to get the most out of it. It also feels a bit unfinished. While there are a few games, there aren't enough. It would be great if Rosetta Stone added even more languages. The most exciting addition would be the ability to watch videos in your native language to improve pronunciation. There are a few more connections to make, so it could be the next Rosetta Stone! Check out our full review of Rosetta Stone to find out more.

      899543212b
      -
      -
      \ No newline at end of file diff --git a/spaces/surmensipa/VITS-Umamusume-voice-synthesizer/logs/4.1 WAD Files.zip.md b/spaces/surmensipa/VITS-Umamusume-voice-synthesizer/logs/4.1 WAD Files.zip.md deleted file mode 100644 index caa2dcc8068c090a2bc30b25e53b6095ce6951cc..0000000000000000000000000000000000000000 --- a/spaces/surmensipa/VITS-Umamusume-voice-synthesizer/logs/4.1 WAD Files.zip.md +++ /dev/null @@ -1,15 +0,0 @@ -

      4.1 WAD Files.zip


      Download ->>->>->> https://urluss.com/2uCFbE



      -
-4.1 WAD Files. On this page you can download or listen to the song "Windows 10 - WAD Files". -To download the song in MP3, click on the "Download" button. -To listen to the song, click on the "Play" button. -WAD Files. -Artist: Windows 10. -Title: WAD Files. -Album: Windows 10. -Duration: 8:27. -Genre: Pop, New Age, Electronic, Rock. 8a78ff9644
      -
      -
      -

diff --git a/spaces/svjack/prompt-extend-gpt-chinese/app.py b/spaces/svjack/prompt-extend-gpt-chinese/app.py deleted file mode 100644 index cd57853424d985ed73a6ac812c3ed8174fe68d35..0000000000000000000000000000000000000000 --- a/spaces/svjack/prompt-extend-gpt-chinese/app.py +++ /dev/null @@ -1,80 +0,0 @@ -#from summary_reverse_pred_native import * -#### daspartho/prompt-extend - -import gradio as gr -import os -from predict import * - -#device = "cuda:0" -device = "cpu" -assert device.startswith("cpu") or device.startswith("cuda") - -from transformers import ( - T5ForConditionalGeneration, - MT5ForConditionalGeneration, - ByT5Tokenizer, - PreTrainedTokenizer, - T5TokenizerFast as T5Tokenizer, - MT5TokenizerFast as MT5Tokenizer, - AutoModelForSeq2SeqLM, - AutoTokenizer, - BertTokenizer, - GPT2LMHeadModel, -) - -#### "svjack/prompt-extend-chinese-gpt" -#model_path = "/home/featurize/zh_p_extend_outputs/simplet5-epoch-3-train-loss-1.2628-val-loss-1.6293" -model_path = "svjack/prompt-extend-chinese-gpt" -tokenizer1 = BertTokenizer.from_pretrained(model_path) -model1 = GPT2LMHeadModel.from_pretrained(model_path) - -if device.startswith("cuda"): - zh_pe_model = Obj(model1, tokenizer1, device = "cuda:0") -else: - zh_pe_model = Obj(model1, tokenizer1, device = "cpu") - -def one_ele_trans(x): - x = x.strip() - x = x[1:] if x.startswith("'") else x - x = x[:-1] if x.endswith("'") else x - x = x[1:] if x.startswith('"') else x - x = x[:-1] if x.endswith('"') else x - return x - -def stdf_prompt_expander(x, do_sample): - assert type(x) == type("") - return zh_pe_model.predict( - one_ele_trans(x.strip()).strip(), - max_length = 128, - do_sample = do_sample - )[0].replace(" ", "").strip() - -#text0 = "飓风格特是1993年9月在墨西哥和整个中美洲引发严重洪灾的大规模热带气旋,源于9月14日西南加勒比海上空一股东风波。次日从尼加拉瓜登岸,经过洪都拉斯后于9月17日在洪都拉斯湾再次达到热带风暴标准,但次日进入伯利兹上空后就减弱成热带低气压。穿过尤卡坦半岛后,在9月20日强化成二级飓风,从韦拉克鲁斯州的图斯潘附近登陆墨西哥。9月21日从纳亚里特州进入太平洋时已降级成热带低气压,最终于5天后在开放水域上空消散。" -#text1 = "珊瑚坝是长江中的一处河漫滩,位于长江重庆市渝中区区段主航道左侧[1],靠近渝中半岛,原分属重庆市市中区菜园坝街道和石板坡街道[2],现属渝中区菜园坝街道石板坡社区[3],是长江上游缓冲地段自然冲积沙洲,略呈纺锤形[4]或椭圆形,长约1800米,宽约600米,坝上遍布鹅卵石和水草。每年夏季洪水时均被淹没,其余时间常露水面,枯水期则与长江左岸相连[5]。" -prompt = "一只凶猛的老虎,咬死了一只豺狼。" - -example_sample = [ - [prompt, False], - #[text1, False], -] - -def demo_func(prefix, do_sample): - #l = simple_pred(prefix, do_sample = do_sample) - x = stdf_prompt_expander(prefix, do_sample = do_sample) - return { - "Prompt extend": x - } - -demo = gr.Interface( - fn=demo_func, - inputs=[gr.Text(label = "Prompt"), - gr.Checkbox(label="do sample"), - ], - outputs="json", - title=f"Stable Diffusion Chinese Prompt Extend 🐰 demonstration", - description = 'This _example_ was **derived** from

      [https://github.com/svjack/Stable-Diffusion-Chinese-Extend](https://github.com/svjack/Stable-Diffusion-Chinese-Extend)

      \n', - examples=example_sample if example_sample else None, - cache_examples = False - ) - -demo.launch(server_name=None, server_port=None) diff --git a/spaces/tabeina/bingo1/postcss.config.js b/spaces/tabeina/bingo1/postcss.config.js deleted file mode 100644 index 33ad091d26d8a9dc95ebdf616e217d985ec215b8..0000000000000000000000000000000000000000 --- a/spaces/tabeina/bingo1/postcss.config.js +++ /dev/null @@ -1,6 +0,0 @@ -module.exports = { - plugins: { - tailwindcss: {}, - autoprefixer: {}, - }, -} diff --git a/spaces/tang155/bingo/src/lib/hooks/use-bing.ts b/spaces/tang155/bingo/src/lib/hooks/use-bing.ts deleted file mode 100644 index dcdb1667ced0cba299b0825c0e91c4732411308c..0000000000000000000000000000000000000000 --- a/spaces/tang155/bingo/src/lib/hooks/use-bing.ts +++ /dev/null @@ -1,173 +0,0 @@ -'use client' - -import { useState, useCallback, useEffect, useMemo } from 'react' -import { useAtom, useAtomValue } from 'jotai' -import { chatFamily, bingConversationStyleAtom, GreetMessages, hashAtom, voiceAtom } from '@/state' -import { setConversationMessages } from './chat-history' -import { ChatMessageModel, BotId, FileItem } from '@/lib/bots/bing/types' -import { nanoid } from '../utils' -import { TTS } from '../bots/bing/tts' - -export function useBing(botId: BotId = 'bing') { - const chatAtom = useMemo(() => chatFamily({ botId, page: 'singleton' }), [botId]) - const [enableTTS] = useAtom(voiceAtom) - const speaker = useMemo(() => new TTS(), []) - const [hash, setHash] = useAtom(hashAtom) - const bingConversationStyle = useAtomValue(bingConversationStyleAtom) - const [chatState, setChatState] = useAtom(chatAtom) - const [input, setInput] = useState('') - const [attachmentList, setAttachmentList] = useState([]) - - const updateMessage = useCallback( - (messageId: string, updater: (message: ChatMessageModel) => void) => { - setChatState((draft) => { - const message = draft.messages.find((m) => m.id === messageId) - if (message) { - updater(message) - } - }) - }, - [setChatState], - ) - - const sendMessage = useCallback( - async (input: string, options = {}) => { - const botMessageId = nanoid() - const imageUrl = attachmentList?.[0]?.status === 'loaded' ? attachmentList[0].url : undefined - setChatState((draft) => { - const text = imageUrl ? `${input}\n\n![image](${imageUrl})` : input - draft.messages.push({ id: nanoid(), text, author: 'user' }, { id: botMessageId, text: '', author: 'bot' }) - setAttachmentList([]) - }) - const abortController = new AbortController() - setChatState((draft) => { - draft.generatingMessageId = botMessageId - draft.abortController = abortController - }) - speaker.reset() - await chatState.bot.sendMessage({ - prompt: input, - imageUrl: /\?bcid=([^&]+)/.test(imageUrl ?? '') ? 
`https://www.bing.com/images/blob?bcid=${RegExp.$1}` : imageUrl, - options: { - ...options, - bingConversationStyle, - }, - signal: abortController.signal, - onEvent(event) { - if (event.type === 'UPDATE_ANSWER') { - updateMessage(botMessageId, (message) => { - if (event.data.text.length > message.text.length) { - message.text = event.data.text - } - - if (event.data.spokenText && enableTTS) { - speaker.speak(event.data.spokenText) - } - - message.throttling = event.data.throttling || message.throttling - message.sourceAttributions = event.data.sourceAttributions || message.sourceAttributions - message.suggestedResponses = event.data.suggestedResponses || message.suggestedResponses - }) - } else if (event.type === 'ERROR') { - updateMessage(botMessageId, (message) => { - message.error = event.error - }) - setChatState((draft) => { - draft.abortController = undefined - draft.generatingMessageId = '' - }) - } else if (event.type === 'DONE') { - setChatState((draft) => { - draft.abortController = undefined - draft.generatingMessageId = '' - }) - } - }, - }) - }, - [botId, attachmentList, chatState.bot, setChatState, updateMessage], - ) - - const uploadImage = useCallback(async (imgUrl: string) => { - setAttachmentList([{ url: imgUrl, status: 'loading' }]) - const response = await chatState.bot.uploadImage(imgUrl, bingConversationStyle) - if (response?.blobId) { - setAttachmentList([{ url: `/api/blob?bcid=${response.blobId}`, status: 'loaded' }]) - } else { - setAttachmentList([{ url: imgUrl, status: 'error' }]) - } - }, [chatState.bot]) - - const resetConversation = useCallback(() => { - chatState.bot.resetConversation() - speaker.abort() - setChatState((draft) => { - draft.abortController = undefined - draft.generatingMessageId = '' - draft.messages = [{ author: 'bot', text: GreetMessages[Math.floor(GreetMessages.length * Math.random())], id: nanoid() }] - draft.conversationId = nanoid() - }) - }, [chatState.bot, setChatState]) - - const stopGenerating = useCallback(() => { - chatState.abortController?.abort() - if (chatState.generatingMessageId) { - updateMessage(chatState.generatingMessageId, (message) => { - if (!message.text && !message.error) { - message.text = 'Cancelled' - } - }) - } - setChatState((draft) => { - draft.generatingMessageId = '' - }) - }, [chatState.abortController, chatState.generatingMessageId, setChatState, updateMessage]) - - useEffect(() => { - if (chatState.messages.length) { - setConversationMessages(botId, chatState.conversationId, chatState.messages) - } - }, [botId, chatState.conversationId, chatState.messages]) - - useEffect(() => { - if (hash === 'reset') { - resetConversation() - setHash('') - } - }, [hash, setHash]) - - const chat = useMemo( - () => ({ - botId, - bot: chatState.bot, - isSpeaking: speaker.isSpeaking, - messages: chatState.messages, - sendMessage, - setInput, - input, - resetConversation, - generating: !!chatState.generatingMessageId, - stopGenerating, - uploadImage, - setAttachmentList, - attachmentList, - }), - [ - botId, - bingConversationStyle, - chatState.bot, - chatState.generatingMessageId, - chatState.messages, - speaker.isSpeaking, - setInput, - input, - setAttachmentList, - attachmentList, - resetConversation, - sendMessage, - stopGenerating, - ], - ) - - return chat -} diff --git a/spaces/terfces0erbo/CollegeProjectV2/Download High Quality Celestial Church Of Christ Hymns.md b/spaces/terfces0erbo/CollegeProjectV2/Download High Quality Celestial Church Of Christ Hymns.md deleted file mode 100644 index 
b4bc7c8e5e6c4773b8c1b2801c9c01a8bee39504..0000000000000000000000000000000000000000 --- a/spaces/terfces0erbo/CollegeProjectV2/Download High Quality Celestial Church Of Christ Hymns.md +++ /dev/null @@ -1,114 +0,0 @@ - -

      Download Celestial Church Of Christ Hymns: A Guide to the Divine Music of CCC

      - -

      If you are looking for a way to enrich your spiritual life and connect with God, you might want to download Celestial Church of Christ hymns. These are the songs and hymns that are sung and played in the Celestial Church of Christ (CCC), a Christian denomination that originated in Nigeria and has spread to many parts of the world.

      -

      Download Celestial Church Of Christ Hymns


      Download ->>->>->> https://bytlly.com/2uGkI5



      - -

      Celestial Church of Christ hymns are inspired by the Holy Spirit and based on the Bible. They are composed by various members of the church, who have been gifted with musical talents and divine revelations. They are sung in various languages, such as Yoruba, English, French, etc., depending on the location and preference of the congregation.

      - -

Celestial Church of Christ hymns are not only beautiful and melodious, but also powerful and meaningful. They express various themes and messages, such as praise, worship, thanksgiving, repentance, forgiveness, healing, deliverance, protection, guidance, etc. They also teach various doctrines and principles of the Christian faith, such as the Trinity, salvation by grace through faith, baptism by water and by fire, the second coming of Christ, etc.

      - -

      Celestial Church of Christ hymns are a source of joy and peace for many believers. They help them to communicate with God and experience His presence and power. They also help them to edify and encourage one another and strengthen their faith and hope. They also help them to overcome various challenges and temptations and live a holy and righteous life.

      - -

      If you want to download Celestial Church of Christ hymns, you have several options to choose from. Here are some of them:

      -

      - -
        -
      • CCC Hymn Book: This is the official hymn book of the Celestial Church of Christ, which contains all the hymns that are used in the church. You can download it as a PDF file with updated English and Yoruba translations. You can also download it as a mobile app from Google Play or the App Store.
      • -
      • CCC HymnBook: This is another mobile app that allows you to access the Celestial Church of Christ hymn book on your iPhone, iPad, or iPod touch. It has updated English and Yoruba translations, as well as Yoruba letters with appropriate accents. You can also search for hymns by number or title.
      • -
      • Celestial Songs: This is a project that aims at featuring the music of the Celestial Church of Christ. It includes hymns and original compositions by some musical artists of the church. You can listen to them online or download them as MP3 files from SoundCloud.
      • -
      - -

      These are some of the best sources for downloading Celestial Church of Christ hymns. However, you should always respect the rights and royalties of the composers and singers of these hymns. You should also use them for personal and non-commercial purposes only.

      - -

      Download Celestial Church of Christ hymns today and enjoy the divine music of CCC!

      -

      Why You Should Download Celestial Church Of Christ Hymns

      - -

      Downloading Celestial Church of Christ hymns is not only a way to enjoy the divine music of CCC, but also a way to benefit from the spiritual blessings and benefits that come with it. Here are some of the reasons why you should download Celestial Church of Christ hymns:

      - -
        -
      • They will enrich your spiritual life. Downloading Celestial Church of Christ hymns will help you to grow in your spiritual life and relationship with God. You will be able to worship God in spirit and in truth, and praise Him for His goodness and mercy. You will also be able to meditate on His word and His promises, and apply them to your life. You will also be able to pray more effectively and fervently, and intercede for others.
      • -
      • They will edify your soul. Downloading Celestial Church of Christ hymns will help you to nourish your soul and refresh your spirit. You will be able to experience the joy and peace of God, and overcome the stress and worries of life. You will also be able to express your emotions and feelings to God, and find comfort and healing in His presence. You will also be able to strengthen your faith and hope, and renew your mind and heart.
      • -
      • They will empower your ministry. Downloading Celestial Church of Christ hymns will help you to serve God and others better. You will be able to share the gospel and the love of God with others, and invite them to join the CCC family. You will also be able to minister to others through music, and use your musical gifts and talents for God's glory. You will also be able to support and encourage other members of the church, and build up the body of Christ.
      • -
      - -

      These are some of the reasons why you should download Celestial Church of Christ hymns. You will not only enjoy the divine music of CCC, but also reap the spiritual blessings and benefits that come with it.

      - -

      How to Use Celestial Church Of Christ Hymns Effectively

      - -

      Downloading Celestial Church of Christ hymns is not enough. You also need to use them effectively to get the most out of them. Here are some tips on how to use Celestial Church of Christ hymns effectively:

      - -
        -
      • Use them regularly. Don't just download Celestial Church of Christ hymns and forget about them. Use them regularly as part of your daily devotional time, or whenever you need some spiritual upliftment. Make them a habit and a lifestyle, not just a hobby or a pastime.
      • -
      • Use them wisely. Don't just download Celestial Church of Christ hymns randomly or indiscriminately. Use them wisely according to your needs and situations. Choose the hymns that are appropriate for your mood, theme, occasion, etc. For example, if you need some comfort or healing, choose hymns that speak about God's love and care. If you need some guidance or direction, choose hymns that speak about God's wisdom and will.
      • -
      • Use them creatively. Don't just download Celestial Church of Christ hymns and sing or play them as they are. Use them creatively according to your preferences and abilities. You can modify or adapt them to suit your style, language, instrument, etc. For example, if you prefer English over Yoruba, you can use the English translations or versions of the hymns. If you play an instrument other than the keyboard or guitar, you can use it to accompany or improvise on the hymns.
      • -
      - -

      These are some tips on how to use Celestial Church of Christ hymns effectively. You will not only enjoy the divine music of CCC, but also maximize its potential and impact.

      -

      What Others Are Saying About Downloading Celestial Church Of Christ Hymns

      - -

      Downloading Celestial Church of Christ hymns is not only a personal choice, but also a shared experience. Many people have downloaded Celestial Church of Christ hymns and have shared their opinions and feedback about them. Here are some of the reviews and testimonials that show how much people appreciate and enjoy downloading Celestial Church of Christ hymns:

      - -
      -

      "I love downloading Celestial Church of Christ hymns because they are so uplifting and inspiring. They help me to worship God and feel His presence in my life. They also help me to learn more about the doctrines and teachings of the church. They are a blessing to me and my family." - Oluwaseun from Nigeria

      -
      - -
      -

      "Downloading Celestial Church of Christ hymns is one of the best things I have done for my spiritual growth. They are so powerful and meaningful, and they touch my soul and spirit. They also help me to overcome various challenges and temptations, and to live a holy and righteous life. They are a source of joy and peace for me." - Jean from France

      -
      - -
      -

      "Downloading Celestial Church of Christ hymns is a great way to share the gospel and the love of God with others. I use them to invite my friends and neighbors to join the CCC family, and to minister to them through music. I also use them to support and encourage other members of the church, and to build up the body of Christ. They are a tool for evangelism and edification for me." - James from USA

      -
      - -

      These are some of the reviews and testimonials that show how much people appreciate and enjoy downloading Celestial Church of Christ hymns. You can find more reviews and ratings on various online platforms, such as Google Play, App Store, SoundCloud, etc.

      - -

      Conclusion

      - -

      Downloading Celestial Church of Christ hymns is a way to enrich your spiritual life and connect with God. You will be able to enjoy the divine music of CCC, and benefit from the spiritual blessings and benefits that come with it.

      - -

      If you want to download Celestial Church of Christ hymns, you have several options to choose from. You can download them as a PDF file, a mobile app, or an MP3 file from various online sources.

      - -

      You will not regret downloading Celestial Church of Christ hymns, as they will touch your soul and stay with you forever.

      - -

      Thank you for reading this article. I hope you found it helpful and informative. If you have any questions or comments, please feel free to leave them below. I would love to hear from you.

      - -

      Happy downloading!

      -

      What You Need to Know About Celestial Church of Christ

      - -

      Before you download Celestial Church of Christ hymns, you might want to know more about the church and its history, beliefs, and practices. Here are some of the facts and information that you need to know about Celestial Church of Christ:

      - -
        -
      • It is a Christian denomination that originated in Nigeria. Celestial Church of Christ was founded by Samuel Bilewu Joseph Oshoffa on September 29, 1947, in Porto-Novo, Benin (then Dahomey). Oshoffa was a carpenter who received a divine call and vision to preach the gospel and heal the sick. He started his ministry in Benin and later moved to Nigeria, where he established the first branch of the church in Makoko, Lagos.
      • -
      • It is a Pentecostal and African Initiated Church that believes in the Holy Spirit and divine revelations. Celestial Church of Christ is a Pentecostal church that believes in the baptism and gifts of the Holy Spirit, such as speaking in tongues, prophecy, healing, etc. It is also an African Initiated Church that believes in divine revelations and visions from God, such as dreams, trances, angelic visitations, etc. The church also incorporates some African cultural elements and practices into its worship and rituals.
      • -
      • It is a worldwide church that has spread to many parts of the world. Celestial Church of Christ has grown from a small group of followers in Nigeria to a worldwide church that has branches in many countries across Africa, Europe, America, Asia, etc. The church has millions of members and adherents who belong to various ethnicities, languages, cultures, etc. The church also has various organizations and departments that cater to different needs and interests of its members.
      • -
      - -

      These are some of the facts and information that you need to know about Celestial Church of Christ. You can find more information on the official website of the church or on various online sources.

      - -

      How to Join Celestial Church of Christ

      - -

      If you are interested in joining Celestial Church of Christ after downloading Celestial Church of Christ hymns, you have several options to choose from. Here are some of them:

      - -
        -
      • Visit a nearby branch or parish of the church. The easiest way to join Celestial Church of Christ is to visit a nearby branch or parish of the church where you live or work. You can find the nearest branch or parish by using the online locator on the official website of the church or by asking around your neighborhood or community. You can attend the regular services and programs of the church and meet with the pastor or leader of the branch or parish. You can also participate in various activities and events of the church and get to know other members and friends.
      • -
      • Contact an online representative or counselor of the church. Another way to join Celestial Church of Christ is to contact an online representative or counselor of the church who can guide you through the process of joining. You can find an online representative or counselor by using the online chat or email service on the official website of the church or by following their social media accounts. You can ask any questions or concerns that you have about the church and its teachings and practices. You can also request for prayer or counseling if you need any spiritual help or support.
      • -
      • Fill out an online membership form or application. Another way to join Celestial Church of Christ is to fill out an online membership form or application that will register you as a member or adherent of the church. You can find an online membership form or application on the official website of the church or on various online platforms that are affiliated with the church. You can provide your personal details and information, such as your name, address, phone number, email address, etc. You can also indicate your preferences and interests, such as your preferred language, service time, ministry area, etc.
      • -
      - -

      These are some of the options for joining Celestial Church of Christ after downloading Celestial Church of Christ hymns. You can choose any option that suits you best and that will help you grow in your faith and relationship with God.

      -

      Final Words

      - -

      Downloading Celestial Church of Christ hymns is a way to enrich your spiritual life and connect with God. You will be able to enjoy the divine music of CCC, and benefit from the spiritual blessings and benefits that come with it.

      - -

      If you want to download Celestial Church of Christ hymns, you have several options to choose from. You can download them as a PDF file, a mobile app, or an MP3 file from various online sources.

      - -

      If you want to join Celestial Church of Christ after downloading Celestial Church of Christ hymns, you have several options to choose from. You can visit a nearby branch or parish of the church, contact an online representative or counselor of the church, or fill out an online membership form or application.

      - -

      You will not regret downloading or joining Celestial Church of Christ, as it will touch your soul and stay with you forever.

      - -

      Thank you for reading this article. I hope you found it helpful and informative. If you have any questions or comments, please feel free to leave them below. I would love to hear from you.

      - -

      Happy downloading and joining!

      3cee63e6c2
      -
      -
      \ No newline at end of file diff --git a/spaces/text-generation-inference/oasst-sft-1-pythia-12b/Dockerfile b/spaces/text-generation-inference/oasst-sft-1-pythia-12b/Dockerfile deleted file mode 100644 index ad1d26c28e6afa14d34f5f4f5708a001e65dc213..0000000000000000000000000000000000000000 --- a/spaces/text-generation-inference/oasst-sft-1-pythia-12b/Dockerfile +++ /dev/null @@ -1,18 +0,0 @@ -# read the doc: https://huggingface.co/docs/hub/spaces-sdks-docker -# you will also find guides on how best to write your Dockerfile - -FROM node:19 - -WORKDIR /app - -COPY . . - -RUN npm i - -RUN chown -R 1000:1000 /app - -RUN npm run build - -ENV PORT 7860 - -CMD ["node", "build"] diff --git a/spaces/therealcyberlord/abstract-art-generation/README.md b/spaces/therealcyberlord/abstract-art-generation/README.md deleted file mode 100644 index 499cb8bc80071d53a4d8e6112e658d14c974f2be..0000000000000000000000000000000000000000 --- a/spaces/therealcyberlord/abstract-art-generation/README.md +++ /dev/null @@ -1,13 +0,0 @@ ---- -title: Abstract Art Generation -emoji: 🐨 -colorFrom: purple -colorTo: indigo -sdk: streamlit -sdk_version: 1.10.0 -app_file: app.py -pinned: false -license: mit ---- - -Check out the configuration reference at https://huggingface.co/docs/hub/spaces-config-reference diff --git a/spaces/thiagolira/ChatMaquiavel/query_data.py b/spaces/thiagolira/ChatMaquiavel/query_data.py deleted file mode 100644 index fb1b1db83694e696a0657b7443195a33ed752b43..0000000000000000000000000000000000000000 --- a/spaces/thiagolira/ChatMaquiavel/query_data.py +++ /dev/null @@ -1,34 +0,0 @@ -from langchain.prompts.prompt import PromptTemplate -from langchain.llms import OpenAI -from langchain.chains import ChatVectorDBChain - -_template = """Given the following conversation and a follow up question, rephrase the follow up question to be a standalone question. -You can assume the question about Maquiavel. - -Chat History: -{chat_history} -Follow Up Input: {question} -Standalone question:""" -CONDENSE_QUESTION_PROMPT = PromptTemplate.from_template(_template) - -template = """You are an AI assistant for answering questions about the book "The Prince" by Maquiavel. -You are given the following extracted parts of a long document and a question. Provide a conversational answer. Just answer the question if you have the correct information on the context you are provided. -If you don't know the answer, just say "Hmm, I'm not sure." Don't try to make up an answer. -If the question is not about the book "The Prince" or politics you can just say "I'm not allowed to answer questions that are not about the book." 
-Question: {question} -========= -{context} -========= -Answer in Markdown:""" -QA_PROMPT = PromptTemplate(template=template, input_variables=["question", "context"]) - - -def get_chain(vectorstore): - llm = OpenAI(model_name='gpt-3.5-turbo',temperature=0) - qa_chain = ChatVectorDBChain.from_llm( - llm, - vectorstore, - qa_prompt=QA_PROMPT, - condense_question_prompt=CONDENSE_QUESTION_PROMPT, - ) - return qa_chain diff --git a/spaces/tialenAdioni/chat-gpt-api/logs/Administracion De La Calidad Total Edmundo Guajardo Pdf El libro que te muestra los casos de xito de la calidad total en el mundo.md b/spaces/tialenAdioni/chat-gpt-api/logs/Administracion De La Calidad Total Edmundo Guajardo Pdf El libro que te muestra los casos de xito de la calidad total en el mundo.md deleted file mode 100644 index 8aecf64933bca7a691f1e0a1a8031ec88fb2492f..0000000000000000000000000000000000000000 --- a/spaces/tialenAdioni/chat-gpt-api/logs/Administracion De La Calidad Total Edmundo Guajardo Pdf El libro que te muestra los casos de xito de la calidad total en el mundo.md +++ /dev/null @@ -1,69 +0,0 @@ - -
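For context, a minimal usage sketch for the `get_chain` helper defined in the deleted query_data.py above. The document list and the FAISS vectorstore construction are assumptions added for illustration — the original file only defines the two prompts and the chain factory — and the sketch uses the same legacy LangChain APIs the file imports.

```python
# Hypothetical driver for get_chain from query_data.py above.
# Requires the faiss package and an OPENAI_API_KEY in the environment.
from langchain.embeddings import OpenAIEmbeddings
from langchain.vectorstores import FAISS

from query_data import get_chain

# Index a few excerpts from "The Prince" (placeholder text, not real data).
texts = [
    "Chapter XXV: Fortune is the arbiter of one-half of our actions...",
    "Chapter XVIII: A prince must know how to use both the beast and the man.",
]
vectorstore = FAISS.from_texts(texts, OpenAIEmbeddings())

qa_chain = get_chain(vectorstore)
result = qa_chain({
    "question": "What does Machiavelli say about fortune?",
    "chat_history": [],  # ChatVectorDBChain expects this key on every call
})
print(result["answer"])
```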

What Is Total Quality Management According to Edmundo Guajardo?

      - -

Total Quality Management (TQM) is a management approach that seeks to improve customer satisfaction and organizational competitiveness through the continuous improvement of processes, products, and services. TQM is based on the concepts and teachings of the great quality masters, such as Deming, Juran, Crosby, Ishikawa, and Feigenbaum.

      - -

One of the authors who has studied and spread TQM in the Spanish-speaking world is Edmundo Guajardo Garza, who in his book "Administración de la Calidad Total"[^1^] [^2^] explains the principles, tools, and techniques of TQM, as well as the benefits it can bring to the organizations that adopt it.

      -

      Administracion De La Calidad Total Edmundo Guajardo Pdfl


      Download Zip ✶✶✶ https://urlcod.com/2uK7P9



      - -

According to Guajardo[^1^] [^2^], TQM can be defined as "a way of managing an organization that is centered on quality, based on the participation of all its members, and aimed at long-term success through customer satisfaction and benefits for all members of the organization and for society".

      - -

To achieve this goal, Guajardo[^1^] [^2^] proposes a TQM model made up of four elements: the quality system, the quality process, quality control, and quality improvement. The quality system is the set of policies, standards, procedures, and resources the organization establishes to assure quality. The quality process is the set of activities that transform inputs into outputs that satisfy customers' needs and expectations. Quality control is the set of actions taken to detect and correct deviations of the process and the product from the established standards. Quality improvement is the set of actions taken to identify and eliminate the causes of problems and to prevent their recurrence.

      - -

Guajardo[^1^] [^2^] also presents a series of tools and techniques that make TQM easier to apply, such as the PDCA cycle (Plan-Do-Check-Act), the Pareto chart, the cause-and-effect diagram, the histogram, the scatter diagram, the control chart, failure mode and effects analysis (FMEA), benchmarking, brainstorming, and QFD (Quality Function Deployment), among others.
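As a concrete illustration of one of these tools, here is a minimal Pareto-analysis sketch in Python; the defect categories and counts are invented for the example, not taken from Guajardo's book.

```python
# Minimal Pareto analysis: rank problem causes by frequency and flag the
# "vital few" that account for roughly 80% of occurrences.
# The defect data below is purely illustrative.
defects = {
    "scratches": 120,
    "misalignment": 80,
    "wrong color": 30,
    "missing part": 15,
    "damaged packaging": 5,
}

total = sum(defects.values())
running = 0
for cause, count in sorted(defects.items(), key=lambda kv: kv[1], reverse=True):
    before = 100.0 * running / total   # cumulative share before this cause
    running += count
    share = 100.0 * running / total    # cumulative share including it
    tag = "  <- vital few" if before < 80.0 else ""
    print(f"{cause:18s} {count:4d}  cumulative {share:5.1f}%{tag}")
```

With this invented data, the first two causes already explain 80% of the defects — exactly the prioritization the Pareto chart is meant to make visible.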

      - -

Finally, Guajardo[^1^] [^2^] highlights the benefits an organization can obtain by implementing TQM, such as greater customer satisfaction and loyalty, higher productivity and profitability, more innovation and creativity, stronger staff motivation and commitment, a better corporate image and reputation, and greater social and environmental responsibility, among others.

      - -

In conclusion, Total Quality Management according to Edmundo Guajardo is a management approach that pursues long-term success through customer satisfaction and benefits for all members of the organization and for society. To that end, it builds on the concepts and teachings of the great quality masters and proposes a model, tools, and techniques that make it easier to put into practice.

      -

      e753bf7129
      -
      -
      \ No newline at end of file diff --git a/spaces/tialenAdioni/chat-gpt-api/logs/Amigaos 4 1 Final Edition Iso.md b/spaces/tialenAdioni/chat-gpt-api/logs/Amigaos 4 1 Final Edition Iso.md deleted file mode 100644 index 4ed84cb13884737b937f31ca59e4d551e41b2092..0000000000000000000000000000000000000000 --- a/spaces/tialenAdioni/chat-gpt-api/logs/Amigaos 4 1 Final Edition Iso.md +++ /dev/null @@ -1,19 +0,0 @@ - -

      How to Download and Install AmigaOS 4.1 Final Edition ISO

      -

      AmigaOS 4.1 Final Edition is the latest version of the classic Amiga operating system, which has been ported to run on modern PowerPC hardware. It offers a fast, stable and feature-rich experience for Amiga enthusiasts and retro-computing fans. In this article, we will show you how to download and install AmigaOS 4.1 Final Edition ISO on your compatible computer.

      -

      Amigaos 4 1 Final Edition Iso


      Download Filehttps://urlcod.com/2uK54N



      -

      What is AmigaOS 4.1 Final Edition?

      -

      AmigaOS 4.1 Final Edition is the culmination of more than a decade of development by Hyperion Entertainment CVBA, based on the original source code of AmigaOS 3.1. It has been updated with new features, such as a unified graphics library with RTG support, a new console, a much improved DOS, Intuition and Workbench, and support for various hardware platforms, such as Sam, AmigaONE X1000, X5000 and A1222[^2^] [^3^].

      -

      AmigaOS 4.1 Final Edition also includes powerful applications for web browsing, desktop publishing, 3D animation, video playback, music listening and more[^1^]. It is the authentic Amiga experience with the original Amiga look and feel, but with modern functionality and performance.

      -

      How to Download AmigaOS 4.1 Final Edition ISO?

      -

      To download AmigaOS 4.1 Final Edition ISO, you need to purchase a license from Hyperion Entertainment CVBA or one of its authorized resellers. The license includes a physical CD-ROM with the installation media and a serial number that you need to activate your copy of AmigaOS 4.1 Final Edition.

      -

      If you have already purchased a license, you can also download the ISO images from Hyperion's website[^3^]. You need to register your serial number on their website and then you can access the download section. There are separate ISO images for each supported hardware platform, so make sure you download the correct one for your computer.

      -

      -

      How to Install AmigaOS 4.1 Final Edition ISO?

      -

To install AmigaOS 4.1 Final Edition ISO, you need to burn the ISO image to a CD-ROM or mount it as a virtual drive on your computer. You also need to prepare a partition on your hard drive or SSD that is formatted with the SFS or FFS file system and has at least 500 MB of free space.

      -

      Then, you need to boot from the CD-ROM or the virtual drive and follow the instructions on the screen. The installation process will guide you through the steps of selecting your language, keyboard layout, screen mode, network settings and other options. You can also customize your installation by choosing which components and applications you want to install.

      -

      After the installation is complete, you need to reboot your computer and enjoy your new AmigaOS 4.1 Final Edition system.

      -

      Conclusion

      -

      AmigaOS 4.1 Final Edition is a great way to relive the glory days of the Amiga or discover its unique charm for the first time. It is a modern operating system that runs on PowerPC hardware and offers a fast, stable and feature-rich experience for Amiga fans. If you want to download and install AmigaOS 4.1 Final Edition ISO on your computer, you need to purchase a license from Hyperion Entertainment CVBA or one of its authorized resellers and follow the steps in this article.

      7196e7f11a
      -
      -
      \ No newline at end of file diff --git a/spaces/tialenAdioni/chat-gpt-api/logs/Blue Man Group Soundset For MONTAGE And Motif X7L.md b/spaces/tialenAdioni/chat-gpt-api/logs/Blue Man Group Soundset For MONTAGE And Motif X7L.md deleted file mode 100644 index a55e126038c5bf19b14b16718c1f44d3f6a4cafa..0000000000000000000000000000000000000000 --- a/spaces/tialenAdioni/chat-gpt-api/logs/Blue Man Group Soundset For MONTAGE And Motif X7L.md +++ /dev/null @@ -1,29 +0,0 @@ -
      -

      How to Create Amazing Sounds with the Blue Man Group Soundset for MONTAGE and Motif X7L

      -

      If you are a fan of the Blue Man Group, you know how they create unique and captivating sounds with their custom-made instruments. From the PVC pipes to the Piano Smasher, their percussive and melodic sounds are unlike anything else.

      -

      Blue Man Group Soundset for MONTAGE and Motif X7L


      DOWNLOADhttps://urlcod.com/2uK62E



      -

      But what if you could recreate those sounds on your own synthesizer? What if you could use them in your own music production or live performance?

      -

      Well, now you can, thanks to the Blue Man Group Soundset for MONTAGE and Motif X7L. This is an exclusive synth voice library that Yamaha has partnered with the Blue Man Group to create, and it contains all of their signature sounds and more.

      -

      The Blue Man Group Soundset for MONTAGE and Motif X7L is the first synth voice library of its kind for the MONTAGE synthesizer, and it takes full advantage of the MONTAGE Super Knob. This knob allows you to control multiple sound parameters at once in multiple directions, depths, and degrees, giving you endless possibilities for sound design and expression.

      -

      The soundset includes 16 performances that cover a wide range of styles and genres, from ambient to techno. You can use them as they are or tweak them to your liking. You can also mix and match different sounds from different performances to create your own combinations.

      -

      -

      Some of the sounds you will find in the soundset are:

      -
        -
      • PVC: The classic Blue Man Group sound, played by all three members simultaneously.
      • -
      • Drumulum & Tubulum: Two types of drums made from PVC pipes of different sizes.
      • -
      • Big Drums & Smasher: The huge drums that produce thunderous sounds, and the Piano Smasher that crushes a piano with a hammer.
      • -
      • Bellular Tubes: A set of tubes that produce bell-like tones when struck.
      • -
      • Motor Mouth: A vocal effect that mimics the sound of a motorbike.
      • -
      • Spin Painting: A sound that simulates the spinning of a canvas with paint on it.
      • -
      -

      And many more!

      -

      The Blue Man Group Soundset for MONTAGE and Motif X7L is a must-have for any synth enthusiast who wants to explore new sonic territories and have fun with their instrument. It is available for download from Yamaha MusicSoft for $49.99 USD.

      -

      Don't miss this opportunity to get your hands on this unique and exclusive soundset. Download it today and unleash your creativity with the Blue Man Group Soundset for MONTAGE and Motif X7L!

      - -

      If you want to learn more about the Blue Man Group and their instruments, you can visit their official website at www.blueman.com. There you can find information about their shows, their history, their music, and their educational programs. You can also watch videos of their performances and behind-the-scenes footage.

      -

      You can also follow them on social media platforms such as Facebook, Twitter, Instagram, and YouTube. There you can interact with them and other fans, get updates on their latest news and events, and participate in contests and giveaways.

      -

      The Blue Man Group is one of the most innovative and entertaining groups in the world. Their shows are a blend of music, comedy, art, and technology that appeal to audiences of all ages and backgrounds. They have performed in over 20 countries and have won numerous awards and accolades.

      -

      But you don't have to travel far to experience their amazing sounds. With the Blue Man Group Soundset for MONTAGE and Motif X7L, you can bring them to your own studio or stage. Whether you want to create original music or cover their songs, you will have everything you need to sound like the Blue Man Group.

      -

      So what are you waiting for? Download the Blue Man Group Soundset for MONTAGE and Motif X7L today and join the blue revolution!

      7b8c122e87
      -
      -
      \ No newline at end of file diff --git a/spaces/tialenAdioni/chat-gpt-api/logs/Combat Wings Battle Of Britain Download For Pc [Ativador] - Stunning Graphics and Special Effects in a Classic Aerial Combat Game.md b/spaces/tialenAdioni/chat-gpt-api/logs/Combat Wings Battle Of Britain Download For Pc [Ativador] - Stunning Graphics and Special Effects in a Classic Aerial Combat Game.md deleted file mode 100644 index 92a4a7341b8e03c98dc78644d50b2a8b26eacff6..0000000000000000000000000000000000000000 --- a/spaces/tialenAdioni/chat-gpt-api/logs/Combat Wings Battle Of Britain Download For Pc [Ativador] - Stunning Graphics and Special Effects in a Classic Aerial Combat Game.md +++ /dev/null @@ -1,75 +0,0 @@ -
      -

      How to Download and Activate Combat Wings: Battle of Britain for PC

      -

      Combat Wings: Battle of Britain is a thrilling flight simulator game that lets you experience the aerial warfare of World War II. You can fly over 25 authentic planes, engage in dogfights, bomb targets, and defend the skies from the Luftwaffe. But how can you download and activate this game for your PC? Here are the steps you need to follow:

      -

      Combat Wings: Battle Of Britain Download For Pc [Ativador]


      Download File ✑ ✑ ✑ https://urlcod.com/2uK5xF



      -
        -
1. Go to the official website of Combat Wings: Battle of Britain and click on the "Buy Now" button. You will be redirected to a secure payment page where you can choose your preferred payment method and complete your purchase.
2. After you have completed your payment, you will receive an email with your activation code and a download link for the game. Click on the download link and save the game installer on your computer.
3. Run the game installer and follow the instructions on the screen. You will need to choose a destination folder for the game and agree to the terms and conditions.
4. When the installation is finished, launch the game from your desktop or start menu. You will be prompted to enter your activation code. Copy and paste the code from your email and click on "Activate".
5. Congratulations! You have successfully downloaded and activated Combat Wings: Battle of Britain for your PC. You can now enjoy the game and relive the history of the Battle of Britain.
      -

      If you have any questions or issues with the game, you can contact the customer support team via email or phone. They will be happy to assist you with any problem you may encounter.

      -

      Combat Wings: Battle of Britain is a game that will challenge your skills and immerse you in the atmosphere of WWII. Don't miss this opportunity to download and activate it for your PC today!

      - -

      What are the features of Combat Wings: Battle of Britain?

      -

      Combat Wings: Battle of Britain is a game that offers you a realistic and exciting flight simulation experience. Here are some of the features that make this game stand out:

      -

      -
        -
      • You can fly over 25 different planes, each with their own characteristics and performance. You can choose from fighters, bombers, and recon planes, such as the Spitfire, the Hurricane, the Lancaster, and the Stuka.
      • -
      • You can take part in various missions that recreate the historical events of the Battle of Britain. You can escort bombers, intercept enemy planes, attack ground targets, and more.
      • -
      • You can customize your plane with different weapons, skins, and decals. You can also adjust the difficulty level and the realism settings to suit your preferences.
      • -
      • You can enjoy stunning graphics and sound effects that bring the game to life. You can see the detailed landscapes of England and France, the realistic weather effects, and the spectacular explosions and fire effects.
      • -
      • You can play solo or with your friends in multiplayer mode. You can join online matches or create your own server and invite your friends. You can also chat with other players using the voice chat feature.
      • -
      -

      Combat Wings: Battle of Britain is a game that will keep you entertained for hours. If you love flight simulators and WWII history, you should definitely download and activate this game for your PC.

      e753bf7129
      -
      -
      \ No newline at end of file diff --git a/spaces/tialenAdioni/chat-gpt-api/logs/How to Recover Your Files Safely and Legally with Wondershare Recoverit Data Recovery.md b/spaces/tialenAdioni/chat-gpt-api/logs/How to Recover Your Files Safely and Legally with Wondershare Recoverit Data Recovery.md deleted file mode 100644 index 3b80a43d2748cdf068d23c654952b4d37385ba32..0000000000000000000000000000000000000000 --- a/spaces/tialenAdioni/chat-gpt-api/logs/How to Recover Your Files Safely and Legally with Wondershare Recoverit Data Recovery.md +++ /dev/null @@ -1,27 +0,0 @@ -
      -

      How to Recover My Files 5.1 0 with Crack Free Download

      -

Recover My Files is a popular data recovery program that can help you recover deleted or lost files from various storage devices, such as hard drives, memory cards, USB flash drives, etc. However, the full version of Recover My Files is not free, and you need to purchase a license key to use all its features and functions. Some people may try to find a cracked version of Recover My Files 5.1 0 online, hoping to get the software for free. However, this is not a wise choice, as using cracked software may bring you more trouble than benefit. In this article, we will explain why you should avoid using Recover My Files 5.1 0 with crack free download, and how to recover your files safely and legally with a reliable alternative.

      -

      Why You Should Avoid Using Recover My Files 5.1 0 with Crack Free Download

      -

Using cracked software may seem tempting, as you can save some money and get the software for free. However, there are many risks and disadvantages to using Recover My Files 5.1 0 with crack free download, such as:

      -

      recover my files 5.1 0 with crack free download


      DOWNLOAD · https://urlcod.com/2uK8zj



      -
        -
• It may be illegal: Downloading and using cracked software may violate copyright law and the terms of service of the software developer. You may face legal consequences if you are caught using pirated software.
• -
• It may be unsafe: Downloading and installing cracked software may expose your computer to viruses, malware, spyware, or ransomware. These malicious programs may damage your system, steal your personal information, encrypt your files, or demand a ransom for decryption.
• -
• It may be unreliable: Using cracked software may cause errors, crashes, or failures during the data recovery process. You may lose your files permanently or get corrupted or incomplete results. You may also miss important updates or patches that could improve the performance and stability of the software.
• -
• It may be unsupported: With cracked software you may not get any technical support or customer service from the software developer, so you may not be able to solve any problems or issues that you encounter while using it.
      • -
      -

      How to Recover Your Files Safely and Legally with a Reliable Alternative

      -

      If you want to recover your files safely and legally, you should avoid using Recover My Files 5.1 0 with crack free download, and choose a reliable alternative instead. One of the best alternatives is Wondershare Recoverit Data Recovery, which is a powerful and professional data recovery software that can help you recover deleted or lost files from various storage devices in different scenarios. Here are some of the features and advantages of Wondershare Recoverit Data Recovery:

      -
        -
      • It is legal: Wondershare Recoverit Data Recovery is a legitimate software that respects the intellectual property rights of the software developer. You can use it without worrying about any legal issues.
      • -
      • It is safe: Wondershare Recoverit Data Recovery is a virus-free and malware-free software that does not harm your computer or data. You can download it from the official website or other trusted sources.
      • -
      • It is reliable: Wondershare Recoverit Data Recovery has a high success rate and can recover over 1000 types of files from various storage devices in different scenarios. It can also preview the files before recovery and recover them selectively.
      • -
      • It is supported: Wondershare Recoverit Data Recovery has a friendly and professional customer service team that can provide you with 24/7 technical support and assistance. You can also access online tutorials, FAQs, and guides on how to use the software.
      • -
      -

      How to Use Wondershare Recoverit Data Recovery to Recover Your Files

      -

      To use Wondershare Recoverit Data Recovery to recover your files, you can follow these simple steps:

      -
        -
1. Download and install Wondershare Recoverit Data Recovery on your computer from https://recoverit.wondershare.com/ (a quick way to sanity-check the download is sketched after these steps).
2. Launch the software and select the location where you lost your files. Click "Start" to scan that location for lost files.
3. Preview the files the scan finds, select the ones you want back, and click "Recover" to save them to a different, safe drive.
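Whichever legitimate tool you settle on, it is a good habit to verify that the installer you downloaded was not tampered with. If the vendor publishes a checksum for the installer, you can compare it before running the file. A minimal Python sketch; the file name and expected hash here are placeholders, not values published by Wondershare:

```python
import hashlib

def sha256_of(path, chunk_size=1 << 20):
    """Hash the file in chunks so large installers don't exhaust memory."""
    digest = hashlib.sha256()
    with open(path, "rb") as f:
        for chunk in iter(lambda: f.read(chunk_size), b""):
            digest.update(chunk)
    return digest.hexdigest()

# Placeholders: use the real installer name and the checksum from the
# vendor's download page, if one is provided.
installer = "recoverit_setup.exe"
expected = "0" * 64

actual = sha256_of(installer)
print("OK" if actual == expected else f"MISMATCH: {actual}")
```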

        -
        -
        \ No newline at end of file diff --git a/spaces/ticomspire/turkey-syria-earthquake-tweets/logs/9Game The Best Source of Downloadable Games for PC in 2018.md b/spaces/ticomspire/turkey-syria-earthquake-tweets/logs/9Game The Best Source of Downloadable Games for PC in 2018.md deleted file mode 100644 index 0329a3f45ddd4f962232ce3619dfa669a4830472..0000000000000000000000000000000000000000 --- a/spaces/ticomspire/turkey-syria-earthquake-tweets/logs/9Game The Best Source of Downloadable Games for PC in 2018.md +++ /dev/null @@ -1,77 +0,0 @@ - -

        9game Download 2018: How to Get the Best Games for Your Android Device

        -

        If you are a fan of mobile gaming, you might have heard of 9game. It is a popular app store that offers a wide range of games for Android users. Whether you are looking for action, adventure, puzzle, racing, sports, or casual games, you can find them all on 9game. But what makes 9game different from other app stores? And how can you download it on your device? In this article, we will answer these questions and more.

        -

        9game download 2018


        Download 🔗 https://bltlly.com/2uOj1C



        -

        What is 9game?

        -

9game is a casual app store for Android devices. It was launched in 2013 by UCWeb, a subsidiary of Alibaba Group. According to its official website, 9game has over 100 million users and more than 10,000 games in its library. Some of the features that make it stand out are:

        -
          -
        • It provides free and unlimited downloads of games.
        • -
        • It updates its games regularly and adds new ones every day.
        • -
        • It offers exclusive and original games that are not available on other platforms.
        • -
        • It supports multiple languages, including English, Hindi, Indonesian, Arabic, Thai, Vietnamese, and more.
        • -
        • It has a user-friendly interface and a smart search function.
        • -
        • It gives recommendations based on your preferences and habits.
        • -
        • It allows you to rate, review, and share games with other users.
        • -
        • It has a dedicated customer service team that responds to your queries and feedback.
        • -
        -

        Why download 9game in 2018?

        -

        If you are wondering why you should download 9game in 2018, here are some of the benefits that you can enjoy as an Android user:

        -

        A large and diverse collection of games

        -

        One of the main reasons to download 9game is that it has a huge and varied selection of games for you to choose from. You can find games from different genres, categories, themes, and styles. You can also find games from different developers, publishers, and regions. Whether you want to play popular games like PUBG Mobile, Clash of Clans, Candy Crush Saga, or Subway Surfers, or discover new and unique games like Zombie Catchers, Cooking Fever, or Ludo King, you can find them all on 9game.

        -

        A fast and easy download process

        -

        Another reason to download 9game is that it has a fast and easy download process. You don't need to sign up or register to use 9game. You just need to visit its official website or scan the QR code on your device. Then you can browse through the games and click on the download button. The download speed is fast and stable. You can also pause and resume downloads at any time. You don't need to worry about wasting your data or storage space.

        -

        A safe and secure platform

        -

        A third reason to download 9game is that it is a safe and secure platform. You don't need to worry about viruses, malware, or spyware when you download games from 9game. All the games are tested and verified by 9game's team of experts. They also monitor and remove any harmful or inappropriate content from the platform. You can also report any issues or problems that you encounter while using 9game. Your privacy and security are protected by 9game's policies and encryption.

        -

        A personalized and interactive experience

        -

        A fourth reason to download 9game is that it offers a personalized and interactive experience. You can customize your profile and settings on 9game according to your preferences. You can also create your own game collections and playlists. You can follow your favorite games and developers and get notified of the latest updates and news. You can also join the 9game community and interact with other users. You can chat, comment, like, share, and play games with them. You can also participate in various events, contests, and rewards on 9game.

        -

        How to download 9game in 2018?

        -

        If you are convinced by the benefits of downloading 9game, you might be wondering how to do it. The process is simple and straightforward. Here are the steps to install 9game on your Android device:

        -

        Step 1: Visit the official website of 9game

        -

        The first step is to visit the official website of 9game. You can use any browser on your device to access it. You will see a homepage with various games and categories. You will also see a download button on the top right corner of the screen.

        -

        Step 2: Click on the download button

        -

        The second step is to click on the download button. This will start the download of the 9game APK file on your device. The file size is about 10 MB, so it won't take long to finish.

        -


        -

        Step 3: Allow unknown sources on your device settings

        -

        The third step is to allow unknown sources on your device settings. This is necessary because 9game is not available on Google Play Store, so you need to enable this option to install it from other sources. To do this, go to your device settings, then security, then unknown sources, and turn it on.

        -

        Step 4: Open the downloaded file and install it

        -

        The fourth and final step is to open the downloaded file and install it. You can find the file in your downloads folder or notification bar. Tap on it and follow the instructions on the screen. It will take a few seconds to complete the installation.
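If you prefer to drive the install from a computer instead of tapping through it on the phone, the standard Android adb tool can do the same job. A minimal Python sketch, assuming adb is installed on the computer, USB debugging is enabled on the device, and the APK file name is a placeholder for wherever your browser saved it:

```python
import subprocess

# Placeholder path: point this at the APK your browser downloaded.
apk_path = "9game.apk"

# "adb install" pushes the APK to the connected device and installs it;
# "-r" reinstalls over an existing copy while keeping its data.
result = subprocess.run(
    ["adb", "install", "-r", apk_path],
    capture_output=True, text=True,
)
print(result.stdout or result.stderr)
```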

        -

        Conclusion

        -

        9game is a great app store for Android users who love gaming. It has a large and diverse collection of games, a fast and easy download process, a safe and secure platform, and a personalized and interactive experience. You can download 9game in 2018 by following four simple steps: visit the official website, click on the download button, allow unknown sources, and open the file and install it. If you want to enjoy the best games for your Android device, don't hesitate to download 9game today!

        -

        FAQs

        -
          -
        • Is 9game free?
        • -
        • Yes, 9game is free to use and download. You don't need to pay anything to access its games.
        • -
        • Is 9game compatible with all Android devices?
        • -
        • Yes, 9game is compatible with most Android devices that run on Android 4.0 or higher.
        • -
        • How can I update my games on 9game?
        • -
        • You can update your games on 9game by going to the game page and tapping on the update button. You can also turn on automatic updates in your settings.
        • -
        • How can I uninstall 9game?
        • -
        • You can uninstall 9game by going to your device settings, then apps, then 9game, then uninstall.
        • -
        • How can I contact 9game?
        • -
        • You can contact 9game by sending an email to service@9apps.mobi or visiting their Facebook page.
        • -
- Sources: http://www.9apps.com/ and https://www.facebook.com/Official-UC-Web-India-100860424665368/

        -
        -
        \ No newline at end of file diff --git a/spaces/ticomspire/turkey-syria-earthquake-tweets/logs/Black Screen Lyrical Video Status Maker - Download and Share.md b/spaces/ticomspire/turkey-syria-earthquake-tweets/logs/Black Screen Lyrical Video Status Maker - Download and Share.md deleted file mode 100644 index dfe6eab7879117301b8b4d48359edb950f7e5757..0000000000000000000000000000000000000000 --- a/spaces/ticomspire/turkey-syria-earthquake-tweets/logs/Black Screen Lyrical Video Status Maker - Download and Share.md +++ /dev/null @@ -1,89 +0,0 @@ -
        -

        Lyrics Status Download Black Screen: How to Create and Share Lyrical Videos on a Dark Background

        -

        Do you love listening to music and singing along to your favorite songs? Do you want to express your feelings and mood through music and share them with your friends and followers on social media? If yes, then you might be interested in creating and sharing lyrics status videos.

        -

        What is a lyrics status video?

        -

        A lyrics status video is a short video clip that shows the lyrics of a song synchronized with the audio. It is a creative way to showcase your musical taste and personality, as well as to convey a message or emotion. Lyrics status videos are very popular on social media platforms, such as WhatsApp, Instagram, Facebook, and TikTok, where you can upload them as stories or posts.

        -

        lyrics status download black screen


        Download Zip ✪✪✪ https://bltlly.com/2uOqBI



        -

        Why use a black screen for lyrics status videos?

        -

        One of the most common and trendy styles of lyrics status videos is using a black screen as the background. There are several reasons why you might want to use a black screen for your lyrics status videos, such as:

        -
          -
        • Creating contrast: A black screen makes the lyrics stand out more clearly and attractively, especially if you use bright colors and fonts for the text.
        • -
• Saving battery: on phones with OLED screens, black pixels are effectively switched off, so a black background consumes less power than brighter colors and can help extend battery life.
        • -
        • Expressing emotions: A black screen can create a dramatic and emotional effect, which can match the mood and tone of the song. For example, you can use a black screen for sad, romantic, or motivational songs.
        • -
        -

        How to create a lyrics status video with a black screen?

        -

        Creating a lyrics status video with a black screen is not difficult if you have the right tools. One of the best and easiest tools to use is Beely, a free app that lets you create black background (black screen) lyrical video status & photo slideshow with song. Beely is India's first app that offers this feature, and it has many other advantages, such as:

        -
          -
        • A large collection of songs in different languages and genres
        • -
        • A variety of lyrics animation styles and particle effects
        • -
        • A simple and user-friendly interface
        • -
        • A fast and smooth performance
        • -
        • A low storage requirement
        • -
        -

        To create a lyrics status video with a black screen using Beely, follow these steps:

        -

        Step 1: Download and install Beely from the Google Play Store

        -

        You can find Beely by searching for "Beely Lyrics Video & Slideshow" on the Google Play Store or by clicking this link. After downloading and installing the app, open it and grant the necessary permissions.

        -

        Step 2: Select a song from the Beely library or upload your own audio file

        -

        On the home screen of the app, you will see a list of songs in different categories, such as Trending, Love, Sad, Motivational, and more. You can browse through the categories and select the song that you want to use for your lyrics status video. Alternatively, you can tap on the "My Music" icon at the bottom right corner of the screen and upload your own audio file from your device.

        -

        -

        Step 3: Choose the lyrics animation style and adjust the timing and position of the lyrics

        -

        After selecting or uploading a song, you will see a preview of the lyrics status video with a black screen. You can tap on the "Lyrics" icon at the bottom left corner of the screen to choose from different lyrics animation styles, such as Fade, Slide, Zoom, Bounce, and more. You can also adjust the timing and position of the lyrics by dragging them on the screen. You can preview the changes by tapping on the play button.

        -

        Step 4: Add particle effects to your video and customize their color, size, and speed

        -

        To make your lyrics status video more attractive and dynamic, you can add particle effects to your video. You can tap on the "Particle" icon at the bottom center of the screen to choose from different particle effects, such as Hearts, Stars, Snowflakes, Fireworks, and more. You can also customize their color, size, and speed by using the sliders on the screen. You can preview the effects by tapping on the play button.

        -

        Step 5: Save and preview your lyrics status video with a black screen

        -

        Once you are satisfied with your lyrics status video with a black screen, you can save it by tapping on the "Save" icon at the top right corner of the screen. You will see a progress bar showing how much time it will take to save your video. After saving your video, you can preview it by tapping on the "Play" icon at the bottom right corner of the screen. You can also edit your video by tapping on the "Edit" icon at the bottom left corner of the screen.

        -

        How to share your lyrics status video with a black screen?

        -

        After creating and saving your lyrics status video with a black screen, you can share it with your friends and followers on different social media platforms. You can tap on the "Share" icon at the top right corner of the screen to see a list of options, such as WhatsApp, Instagram, Facebook, TikTok, and more. You can select the platform that you want to share your video on and follow the instructions on the screen.

        -

        Conclusion

        -

        Creating and sharing lyrics status videos with a black screen is a fun and easy way to express yourself through music and impress your social media audience. With Beely, you can create stunning lyrics status videos with a black screen in minutes using your favorite songs and adding lyrics animation styles and particle effects. Beely is a free app that offers many features and benefits for creating black background (black screen) lyrical video status & photo slideshow with song. Download Beely today and start creating your own lyrics status videos with a black screen!

- FAQs
Q: What is Beely? A: Beely is a free app that lets you create black background (black screen) lyrical video status & photo slideshow with song.
Q: How can I download Beely? A: You can download Beely by searching for "Beely Lyrics Video & Slideshow" on the Google Play Store or by clicking this link.
Q: How can I create a lyrics status video with a black screen using Beely? A: Follow these steps: download and install Beely from the Google Play Store; select a song from the Beely library or upload your own audio file; choose the lyrics animation style and adjust the timing and position of the lyrics; add particle effects to your video and customize their color, size, and speed; then save and preview your lyrics status video with a black screen.
Q: How can I share my lyrics status video with a black screen using Beely? A: Tap the "Share" icon at the top right corner of the screen and select the platform you want to share your video on.
Q: What are some of the benefits of using a black screen for lyrics status videos? A: Creating contrast (a black screen makes the lyrics stand out more clearly, especially with bright colors and fonts), saving battery (a black screen consumes less power, particularly on OLED displays), and expressing emotions (a black screen creates a dramatic, emotional effect that can match the mood and tone of sad, romantic, or motivational songs).
I hope you enjoyed reading this article and learned how to create and share lyrics status videos with a black screen using Beely. If you have any questions or feedback, please feel free to leave a comment below. Thank you for your time and attention.

        -
        -
        \ No newline at end of file diff --git a/spaces/ticomspire/turkey-syria-earthquake-tweets/logs/Cookie Run Kingdom APK - Hng dn ti v chi tr chi min ph trn Android.md b/spaces/ticomspire/turkey-syria-earthquake-tweets/logs/Cookie Run Kingdom APK - Hng dn ti v chi tr chi min ph trn Android.md deleted file mode 100644 index 2c29c67a2ba1f5784a4acba6bd9ab3494be2cce0..0000000000000000000000000000000000000000 --- a/spaces/ticomspire/turkey-syria-earthquake-tweets/logs/Cookie Run Kingdom APK - Hng dn ti v chi tr chi min ph trn Android.md +++ /dev/null @@ -1,124 +0,0 @@ -
        -

Cookie Run: Kingdom APK Latest Version - A Sweet Kingdom-Building Role-Playing Game

        -

Do you love role-playing games combined with building? Are you looking for a game that is beautiful, engaging, and adorable all at once? Do you want to explore a magical world of living cookies? If the answer is yes, you should not miss Cookie Run: Kingdom, the latest game from Devsisters Corporation, the developer famous for the Cookie Run series.

        -

Introduction to Cookie Run: Kingdom

        -

What is Cookie Run: Kingdom?

        -

Cookie Run: Kingdom is a kingdom-building role-playing game for Android and iOS devices. It is the sequel to the Cookie Run series, famous for its endless-runner gameplay about escaping the oven. In Cookie Run: Kingdom, you will not only run but also fight, build, and explore a sweet world of cookies.

        -

cookie run kingdom apk latest version


        Download Zip ✒ ✒ ✒ https://bltlly.com/2uOlGq



        -

What are Cookie Run: Kingdom's standout features?

        -

Cookie Run: Kingdom has plenty of appealing features, including:

        -
          -
• Everyone's favorite Cookies: You will meet the familiar Cookie Run characters again, such as GingerBrave, Strawberry Cookie, Milk Cookie, Espresso Cookie, and many more. Each Cookie has its own distinctive look, personality, and skills. You can collect, upgrade, and customize the Cookies however you like.

        • -
• A sweet kingdom to build: Your task is to build and grow a kingdom for your Cookies. You can construct buildings such as houses, restaurants, shops, schools, hospitals, and more. You can also decorate your kingdom with quirky, adorable items.
        • -
• Exciting adventures: You will explore the vast world of Cookie Run: Kingdom, with many different regions, from dense forests and deserts to snowfields and outer space. You will meet and fight many kinds of enemies, from bugs and spiders to fire dragons. You will also take part in the Cookies' funny and romantic stories.
        • -
• Distinctive battles: You pick a team of 5 Cookies to fight in turn-based battles. Each Cookie has a different role, such as attack, defense, support, or crowd control. You need to combine the Cookies' skills to create great combos. You can also challenge other players around the world to test your skill and climb the rankings.
        • -
• Beautiful graphics and sound: Cookie Run: Kingdom has extremely cute and lively chibi 3D graphics. The game's colors are bright and vivid, creating a fun, sweet atmosphere. The sound is also lively and well matched to each region and activity. You will feel as if you are living in a Cookie fairy tale.
        • -
        -

How to download and install the latest Cookie Run: Kingdom APK

        -

You can download and install the latest Cookie Run: Kingdom APK from two main sources: APKCombo and Google Play. Here is how to download and install the Cookie Run: Kingdom APK from each of them.

        -

System requirements and permissions

        -

Before downloading and installing the latest Cookie Run: Kingdom APK, check whether your device meets the game's system requirements and grants the permissions it asks for. According to APKCombo, the latest Cookie Run: Kingdom APK requires and requests the following:

System requirements: Android 4.4 or later; 2 GB of RAM or more; at least 1.5 GB of free storage.
Permissions: Internet access; access to device storage; access to Google account information; device vibration; push notifications.
-

You should grant these permissions so that Cookie Run: Kingdom can run at its best.
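If you want to check these requirements from a computer before installing, adb can query the device directly. A minimal Python sketch, assuming adb is installed and USB debugging is enabled on the phone (the property and mount point below are standard Android ones):

```python
import subprocess

def adb_shell(*args):
    """Run a shell command on the connected device and return its output."""
    out = subprocess.run(["adb", "shell", *args],
                         capture_output=True, text=True)
    return out.stdout.strip()

# The game needs Android 4.4 or later.
print("Android version:", adb_shell("getprop", "ro.build.version.release"))

# Rough check of free space on the data partition (about 1.5 GB needed).
print(adb_shell("df", "/data"))
```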

        -

How to download and install the Cookie Run: Kingdom APK from APKCombo

        -

APKCombo is a website that provides APK files for many Android apps and games. You can download the latest Cookie Run: Kingdom APK from APKCombo with the following steps:

        -
          -
1. Go to the APKCombo website at https://apkcombo.com/
2. Type Cookie Run: Kingdom into the search box and press Enter.
3. Select the latest version of the Cookie Run: Kingdom APK and tap the Download APK button.
4. Wait for the APK file to download to your device.
5. Once downloaded, open the APK file and choose Install.
6. Wait for the installation to finish, then open Cookie Run: Kingdom and play.
        -

How to download and install Cookie Run: Kingdom from Google Play

        -

Google Play is Google's official app and game store for Android. You can install Cookie Run: Kingdom from Google Play with the following steps:

        -
          -
1. Open the Google Play app on your device.
2. Type Cookie Run: Kingdom into the search box and press Enter.
3. Select Cookie Run: Kingdom in the results list and tap the Install button.
4. Wait for the installation to finish, then open Cookie Run: Kingdom and play.

Some tips and tricks for playing Cookie Run: Kingdom effectively

        -

Cookie Run: Kingdom is fairly simple and easy to play, but if you want to progress efficiently and quickly, the following tips and tricks may help:

        -


        -

Choose a server that fits your play style

        -

When you start playing Cookie Run: Kingdom, you will be asked to choose a server to join. Each server has a different name and icon: for example, the GingerBrave server's icon is a gingerbread cookie, the Strawberry Cookie server's icon is a strawberry cookie, and there are many others. Pick a server that fits your play style: if you like battling a lot, you might choose the Espresso Cookie server; if you prefer building, you might choose the Milk Cookie server; and so on. You can also look at the player count on each server to gauge how active it is.

        -

Don't upgrade buildings too early

        -

One of the most common mistakes new players make is upgrading buildings too early. This burns through materials and money while the upgrades give little benefit. Focus on constructing the basic buildings first: houses, restaurants, shops, schools, hospitals, and the other buildings tied to production and income. Only once the basics are in place should you upgrade buildings level by level.

        -

Always keep production running

        -

One important part of growing your kingdom is keeping production running at all times. Check your production buildings regularly - restaurants, shops, schools, hospitals, and the rest - to make sure they are always working and never idle. Also collect the goods from your production buildings often so nothing goes to waste. These goods increase your income and experience and make your Cookies stronger.

        -

Equip Toppings on your Cookies to boost their power

        -

Toppings are items you can equip on your Cookies to make them stronger. Each Cookie can equip up to 4 different Toppings, and each Topping has a different effect, such as increasing damage, defense, or speed, among others. You can get Toppings from many sources, such as completing quests, battling, opening chests, joining events, and the shop. Equip Toppings that match each Cookie's role and skills to get the most out of them in battle.

        -

Join limited-time events to earn attractive rewards

        -

Cookie Run: Kingdom regularly runs limited-time events to keep things interesting. They can be special quests, special battles, special shops, or special activities. Join them to earn attractive rewards such as Toppings, Cookies, money, materials, items, and more. You can also meet and make friends with other players while taking part.

        -

Verdict on Cookie Run: Kingdom

        -

Strengths of Cookie Run: Kingdom

        -

Cookie Run: Kingdom is well worth playing thanks to many strengths, such as:

        -
          -
• Rich and varied gameplay: Cookie Run: Kingdom combines many play styles - role-playing, building, battling, exploring, and more. You will never get bored playing it.
        • -
• Cute, distinctive characters: the game has many adorable and distinctive characters, and each Cookie has its own personality and its own story. You will grow attached to your Cookies and want to learn more about them.
        • -
• Beautiful graphics and sound: as described above, the cute chibi 3D art, bright colors, and lively region-appropriate sound make the game feel like a Cookie fairy tale.
        • -
• A large, active player community: Cookie Run: Kingdom has a strong community of players worldwide. You can make friends, socialize, cooperate, and compete with other players in the game, and join groups, forums, fan pages, and other social platforms devoted to it. You will never feel alone while playing.
        • -
        -

Weaknesses of Cookie Run: Kingdom

        -

Cookie Run: Kingdom also has a few weaknesses, such as:

        -
          -
• High system requirements: the game's 3D graphics are beautiful, but they demand a reasonably powerful device to run smoothly. If your device falls short of the requirements, you may see lag, stutter, overheating, or fast battery drain.
        • -
• Requires a constant Internet connection: you cannot play without one. If your connection is unstable, you may run into disconnects, data loss, or being unable to get into the game at all.
        • -
• It can be addictive: the game is very engaging and always gives you something to do. Playing too much for too long can harm your health and daily life, so play in moderation and balance it with your other activities.
        • -
        -

Conclusion

        -

Cookie Run: Kingdom is a sweet and engaging kingdom-building role-playing game for Android and iOS. It offers many standout features: everyone's favorite Cookies, a sweet kingdom to build, exciting adventures, distinctive battles, beautiful graphics and sound, and a large player community. It also has a few drawbacks: high system requirements, the need for a constant Internet connection, and the risk of overplaying. You can download and install the latest Cookie Run: Kingdom APK from APKCombo or Google Play by following the instructions shared above, and the tips here should help you play effectively. We hope this article gives you useful information about Cookie Run: Kingdom and that you have a great time playing it.

        -

Frequently asked questions

        -

Here are some frequently asked questions about Cookie Run: Kingdom:

        -
          -
1. Is Cookie Run: Kingdom free? Cookie Run: Kingdom is a free game for Android and iOS devices. You can download and install the latest APK from APKCombo or Google Play without paying anything. The game does, however, sell some optional in-game items and services, which you can buy with real money if you wish.
2. Is Cookie Run: Kingdom safe? The game is safe and does no harm to your device. It has been checked and verified by reputable APK providers such as APKCombo and Google Play, and it does not request sensitive permissions such as access to your contacts, messages, or camera. You can play with peace of mind.
3. Does Cookie Run: Kingdom support Vietnamese? The game supports many languages, Vietnamese among them. You can change the language in the game's settings, which makes it much easier to follow and enjoy.
4. Can Cookie Run: Kingdom be played offline? No. The game needs a constant Internet connection, and you cannot play without one. This is a drawback, but it is also required for data synchronization and for interacting with other players.
5. Does Cookie Run: Kingdom support multiplayer? Yes. You can make friends, socialize, cooperate, and compete with other players in the game, and you can join groups, forums, fan pages, and other social platforms devoted to Cookie Run: Kingdom. Playing with others brings fresh and exciting experiences.

        -
        -
        \ No newline at end of file diff --git a/spaces/tiedong/Goat/utils/README.md b/spaces/tiedong/Goat/utils/README.md deleted file mode 100644 index cad7751627219c8544c49aa4f04f96529938c59c..0000000000000000000000000000000000000000 --- a/spaces/tiedong/Goat/utils/README.md +++ /dev/null @@ -1,13 +0,0 @@ -# Directory for helpers modules - -## prompter.py - -Prompter class, a template manager. - -`from utils.prompter import Prompter` - -## callbacks.py - -Helpers to support streaming generate output. - -`from utils.callbacks import Iteratorize, Stream` diff --git a/spaces/timpal0l/chat-ui/src/lib/types/UrlDependency.ts b/spaces/timpal0l/chat-ui/src/lib/types/UrlDependency.ts deleted file mode 100644 index 2b085888c79606d2e553df49dd0b18a648728a7d..0000000000000000000000000000000000000000 --- a/spaces/timpal0l/chat-ui/src/lib/types/UrlDependency.ts +++ /dev/null @@ -1,4 +0,0 @@ -/* eslint-disable no-shadow */ -export enum UrlDependency { - ConversationList = "conversation:list", -} diff --git a/spaces/tioseFevbu/cartoon-converter/scripts/Driver Galletto 1260 Windows 7 6 ((LINK)).md b/spaces/tioseFevbu/cartoon-converter/scripts/Driver Galletto 1260 Windows 7 6 ((LINK)).md deleted file mode 100644 index 78b4541b7617879e932f696ac65b5fdae640bf09..0000000000000000000000000000000000000000 --- a/spaces/tioseFevbu/cartoon-converter/scripts/Driver Galletto 1260 Windows 7 6 ((LINK)).md +++ /dev/null @@ -1,33 +0,0 @@ -
        -

        How to Install Driver Galletto 1260 on Windows 7 6

        -

        If you are looking for a way to tune your car's ECU, you might have heard of the Galletto 1260 device. This is a simple and affordable tool that can read and write data from your car's ECU using the OBD2 port. However, before you can use it, you need to install the driver Galletto 1260 on your Windows 7 6 computer. In this article, we will show you how to do that in a few easy steps.

        -

        What is Driver Galletto 1260?

        -

        Driver Galletto 1260 is a software program that allows your computer to communicate with the Galletto 1260 device. It is compatible with Windows XP, Vista, 7, 8, and 10 operating systems. However, some users have reported issues with installing it on Windows 7 6, which is a modified version of Windows 7 that has some features removed or disabled. Therefore, you need to follow some special instructions to make it work on Windows 7 6.

        -

        Driver Galletto 1260 Windows 7 6


        Download File 🌟 https://urlcod.com/2uHvPt



        -

        How to Install Driver Galletto 1260 on Windows 7 6

        -

        To install driver Galletto 1260 on Windows 7 6, you need to follow these steps:

        -
          -
1. Download the driver Galletto 1260 from the official website or from a trusted source. You can find the link at the end of this article.
2. Extract the zip file to a folder on your computer.
3. Connect the Galletto 1260 device to your computer using the USB cable.
4. Open the Device Manager on your computer. You can do this by clicking on the Start button and typing "device manager" in the search box.
5. Find the Galletto 1260 device under the "Other devices" category. It might be labeled as "FTDI" or "USB Serial Port".
6. Right-click on the device and select "Update driver software".
7. Choose "Browse my computer for driver software".
8. Navigate to the folder where you extracted the driver Galletto 1260 and select it.
9. Click on "Next" and follow the instructions on the screen.
10. Once the installation is complete, restart your computer.
        -

        Congratulations! You have successfully installed driver Galletto 1260 on Windows 7 6. Now you can use the Galletto 1260 device to tune your car's ECU.
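If you prefer a command-line route, Windows also ships with pnputil, which can stage and install a driver package straight from its INF file. A minimal Python sketch, run from an elevated (administrator) session; the INF path is a placeholder for whichever .inf file sits inside the extracted Galletto driver folder (typically an FTDI USB-serial INF), and on Windows 7 pnputil takes the older -i -a switches:

```python
import subprocess

# Placeholder path: point this at the .inf inside the extracted
# Galletto 1260 driver folder.
inf_path = r"C:\drivers\galletto1260\ftdibus.inf"

# -a adds the driver package to the driver store; -i also installs it
# on matching devices. Requires an elevated prompt.
subprocess.run(["pnputil", "-i", "-a", inf_path], check=True)
```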

        -

        Where to Download Driver Galletto 1260

        -

        You can download driver Galletto 1260 from the official website of the manufacturer or from a trusted source. Here are some links that you can use:

        - -

        Please note that we are not affiliated with any of these websites and we do not guarantee their quality or safety. Use them at your own risk.

        -

        Conclusion

        -

In this article, we have shown you how to install driver Galletto 1260 on Windows 7 6. This is a simple and affordable way to tune your car's ECU using the OBD2 port. However, you need to follow some special instructions to make it work on Windows 7 6, which is a modified version of Windows 7. We hope this article was helpful and informative.

        -
        -
        \ No newline at end of file diff --git a/spaces/tomaseo2022/mp3-a-texto/app.py b/spaces/tomaseo2022/mp3-a-texto/app.py deleted file mode 100644 index 9d9fcc800c3ebf0fe5544a142770309195bd730e..0000000000000000000000000000000000000000 --- a/spaces/tomaseo2022/mp3-a-texto/app.py +++ /dev/null @@ -1,28 +0,0 @@ -import gradio as gr -import whisper -from langcodes import * - -def speech_to_text(tmp_filename, uploaded, model_size): - model = whisper.load_model(model_size) - source = uploaded if uploaded is not None else tmp_filename - result = model.transcribe(source) - return f'Idioma detectado: {Language.make(language=result["language"]).display_name()}\n\n Transcripción: {result["text"]}' - - -gr.Interface( - - title="", - thumbnail="", - css=""" - footer {visibility: hidden} - .gr-prose p{text-align: center;} - .gr-button {background: black;color: white} - """, - description="", - fn=speech_to_text, - inputs=[ - gr.Audio(label="",source="", type=""), - gr.Audio(source="upload", type="filepath", label="Upload Audio"), - gr.Dropdown(label="Select model size",value="base",choices=["tiny", "base", "small", "medium", "large"])], - outputs="text").launch() - diff --git a/spaces/tomofi/ABINet-OCR/docker/Dockerfile b/spaces/tomofi/ABINet-OCR/docker/Dockerfile deleted file mode 100644 index 76c6f00a137bfdf95da633683a4ad7ecbcf7f551..0000000000000000000000000000000000000000 --- a/spaces/tomofi/ABINet-OCR/docker/Dockerfile +++ /dev/null @@ -1,25 +0,0 @@ -FROM anibali/pytorch:cuda-9.0 -MAINTAINER fangshancheng -RUN sudo rm -rf /etc/apt/sources.list.d && \ - sudo apt update && \ - sudo apt install -y build-essential vim && \ - conda config --add channels https://mirrors.ustc.edu.cn/anaconda/pkgs/free/ && \ - conda config --add channels https://mirrors.ustc.edu.cn/anaconda/pkgs/main/ && \ - conda config --set show_channel_urls yes && \ - pip config set global.index-url https://mirrors.aliyun.com/pypi/simple/ && \ - pip install torch==1.1.0 torchvision==0.3.0 && \ - pip install fastai==1.0.60 && \ - pip install ipdb jupyter ipython lmdb editdistance tensorboardX natsort nltk && \ - conda uninstall -y --force pillow pil jpeg libtiff libjpeg-turbo && \ - pip uninstall -y pillow pil jpeg libtiff libjpeg-turbo && \ - conda install -yc conda-forge libjpeg-turbo && \ - CFLAGS="${CFLAGS} -mavx2" pip install --no-cache-dir --force-reinstall --no-binary :all: --compile pillow-simd==6.2.2.post1 && \ - conda install -y jpeg libtiff opencv && \ - sudo rm -rf /var/lib/apt/lists/* && \ - sudo rm -rf /tmp/* && \ - sudo rm -rf ~/.cache && \ - sudo apt clean all && \ - conda clean -y -a -EXPOSE 8888 -ENV LANG C.UTF-8 -ENV LC_ALL C.UTF-8 diff --git a/spaces/tomofi/MMOCR/configs/_base_/det_models/fcenet_r50_fpn.py b/spaces/tomofi/MMOCR/configs/_base_/det_models/fcenet_r50_fpn.py deleted file mode 100644 index 3c2bd12b6295858895c53e5e1700df3962a8a7d5..0000000000000000000000000000000000000000 --- a/spaces/tomofi/MMOCR/configs/_base_/det_models/fcenet_r50_fpn.py +++ /dev/null @@ -1,33 +0,0 @@ -model = dict( - type='FCENet', - backbone=dict( - type='mmdet.ResNet', - depth=50, - num_stages=4, - out_indices=(1, 2, 3), - frozen_stages=-1, - norm_cfg=dict(type='BN', requires_grad=True), - init_cfg=dict(type='Pretrained', checkpoint='torchvision://resnet50'), - norm_eval=False, - style='pytorch'), - neck=dict( - type='mmdet.FPN', - in_channels=[512, 1024, 2048], - out_channels=256, - add_extra_convs='on_output', - num_outs=3, - relu_before_extra_convs=True, - act_cfg=None), - bbox_head=dict( - type='FCEHead', - in_channels=256, 
- scales=(8, 16, 32), - fourier_degree=5, - loss=dict(type='FCELoss', num_sample=50), - postprocessor=dict( - type='FCEPostprocessor', - text_repr_type='quad', - num_reconstr_points=50, - alpha=1.2, - beta=1.0, - score_thr=0.3))) diff --git a/spaces/tomofi/MMOCR/mmocr/utils/check_argument.py b/spaces/tomofi/MMOCR/mmocr/utils/check_argument.py deleted file mode 100644 index 34cbe8dc2658d725c328eb5cd98652633a22aa24..0000000000000000000000000000000000000000 --- a/spaces/tomofi/MMOCR/mmocr/utils/check_argument.py +++ /dev/null @@ -1,72 +0,0 @@ -# Copyright (c) OpenMMLab. All rights reserved. - - -def is_3dlist(x): - """check x is 3d-list([[[1], []]]) or 2d empty list([[], []]) or 1d empty - list([]). - - Notice: - The reason that it contains 1d or 2d empty list is because - some arguments from gt annotation file or model prediction - may be empty, but usually, it should be 3d-list. - """ - if not isinstance(x, list): - return False - if len(x) == 0: - return True - for sub_x in x: - if not is_2dlist(sub_x): - return False - - return True - - -def is_2dlist(x): - """check x is 2d-list([[1], []]) or 1d empty list([]). - - Notice: - The reason that it contains 1d empty list is because - some arguments from gt annotation file or model prediction - may be empty, but usually, it should be 2d-list. - """ - if not isinstance(x, list): - return False - if len(x) == 0: - return True - - return all(isinstance(item, list) for item in x) - - -def is_type_list(x, type): - - if not isinstance(x, list): - return False - - return all(isinstance(item, type) for item in x) - - -def is_none_or_type(x, type): - - return isinstance(x, type) or x is None - - -def equal_len(*argv): - assert len(argv) > 0 - - num_arg = len(argv[0]) - for arg in argv: - if len(arg) != num_arg: - return False - return True - - -def valid_boundary(x, with_score=True): - num = len(x) - if num < 8: - return False - if num % 2 == 0 and (not with_score): - return True - if num % 2 == 1 and with_score: - return True - - return False diff --git a/spaces/tomofi/NDLOCR/src/ndl_layout/mmdetection/configs/_base_/datasets/lvis_v1_instance.py b/spaces/tomofi/NDLOCR/src/ndl_layout/mmdetection/configs/_base_/datasets/lvis_v1_instance.py deleted file mode 100644 index be791edd79495dce88d010eea63e33d398f242b0..0000000000000000000000000000000000000000 --- a/spaces/tomofi/NDLOCR/src/ndl_layout/mmdetection/configs/_base_/datasets/lvis_v1_instance.py +++ /dev/null @@ -1,24 +0,0 @@ -# dataset settings -_base_ = 'coco_instance.py' -dataset_type = 'LVISV1Dataset' -data_root = 'data/lvis_v1/' -data = dict( - samples_per_gpu=2, - workers_per_gpu=2, - train=dict( - _delete_=True, - type='ClassBalancedDataset', - oversample_thr=1e-3, - dataset=dict( - type=dataset_type, - ann_file=data_root + 'annotations/lvis_v1_train.json', - img_prefix=data_root)), - val=dict( - type=dataset_type, - ann_file=data_root + 'annotations/lvis_v1_val.json', - img_prefix=data_root), - test=dict( - type=dataset_type, - ann_file=data_root + 'annotations/lvis_v1_val.json', - img_prefix=data_root)) -evaluation = dict(metric=['bbox', 'segm']) diff --git a/spaces/tomofi/NDLOCR/src/ndl_layout/mmdetection/configs/libra_rcnn/libra_faster_rcnn_r50_fpn_1x_coco.py b/spaces/tomofi/NDLOCR/src/ndl_layout/mmdetection/configs/libra_rcnn/libra_faster_rcnn_r50_fpn_1x_coco.py deleted file mode 100644 index 89a0d7b2bd83216dfc4db120fe9f610b23376681..0000000000000000000000000000000000000000 --- a/spaces/tomofi/NDLOCR/src/ndl_layout/mmdetection/configs/libra_rcnn/libra_faster_rcnn_r50_fpn_1x_coco.py +++ 
/dev/null @@ -1,41 +0,0 @@ -_base_ = '../faster_rcnn/faster_rcnn_r50_fpn_1x_coco.py' -# model settings -model = dict( - neck=[ - dict( - type='FPN', - in_channels=[256, 512, 1024, 2048], - out_channels=256, - num_outs=5), - dict( - type='BFP', - in_channels=256, - num_levels=5, - refine_level=2, - refine_type='non_local') - ], - roi_head=dict( - bbox_head=dict( - loss_bbox=dict( - _delete_=True, - type='BalancedL1Loss', - alpha=0.5, - gamma=1.5, - beta=1.0, - loss_weight=1.0))), - # model training and testing settings - train_cfg=dict( - rpn=dict(sampler=dict(neg_pos_ub=5), allowed_border=-1), - rcnn=dict( - sampler=dict( - _delete_=True, - type='CombinedSampler', - num=512, - pos_fraction=0.25, - add_gt_as_proposals=True, - pos_sampler=dict(type='InstanceBalancedPosSampler'), - neg_sampler=dict( - type='IoUBalancedNegSampler', - floor_thr=-1, - floor_fraction=0, - num_bins=3))))) diff --git a/spaces/tomofi/NDLOCR/src/ndl_layout/mmdetection/mmdet/core/visualization/__init__.py b/spaces/tomofi/NDLOCR/src/ndl_layout/mmdetection/mmdet/core/visualization/__init__.py deleted file mode 100644 index 4ff995c0861490941f8cfc19ebbd41a2ee7e2d65..0000000000000000000000000000000000000000 --- a/spaces/tomofi/NDLOCR/src/ndl_layout/mmdetection/mmdet/core/visualization/__init__.py +++ /dev/null @@ -1,4 +0,0 @@ -from .image import (color_val_matplotlib, imshow_det_bboxes, - imshow_gt_det_bboxes) - -__all__ = ['imshow_det_bboxes', 'imshow_gt_det_bboxes', 'color_val_matplotlib'] diff --git a/spaces/tomofi/NDLOCR/src/ndl_layout/mmdetection/mmdet/datasets/class_balance_dataset_wrapper.py b/spaces/tomofi/NDLOCR/src/ndl_layout/mmdetection/mmdet/datasets/class_balance_dataset_wrapper.py deleted file mode 100644 index 2ed47b217442d316d626139217ef9fdf4687ec55..0000000000000000000000000000000000000000 --- a/spaces/tomofi/NDLOCR/src/ndl_layout/mmdetection/mmdet/datasets/class_balance_dataset_wrapper.py +++ /dev/null @@ -1,73 +0,0 @@ -""" -This code is based on the following file: -https://github.com/tztztztztz/eqlv2/blob/master/mmdet/datasets/class_balance_dataset_wrapper.py -""" - -import torch -import numpy as np -from mmdet.utils import get_root_logger -from .builder import DATASETS - - -class RandomDataStream: - def __init__(self, data, generator, shuffle=True, dtype=torch.int32): - self._size = len(data) - self.data = torch.Tensor(data).to(dtype=dtype) - self._shuffle = shuffle - self.g = generator - - def __iter__(self): - yield from self._infinite_indices() - - def _infinite_indices(self): - while True: - if self._shuffle: - randperm = torch.randperm(self._size, generator=self.g) - yield from self.data[randperm] - else: - yield self.data - - -@DATASETS.register_module() -class CASDataset(object): - def __init__(self, dataset, max_iter): - self.dataset = dataset - self.max_iter = max_iter - self.num_classes = len(dataset.cat_ids) - self.CLASSES = dataset.CLASSES - - logger = get_root_logger() - logger.info(f'init CAS dataset, num_classes {self.num_classes}') - - indices = [] - flag = [] - - cls_data_inds = [[] for _ in range(self.num_classes)] - for idx in range(len(dataset)): - cat_ids = set(self.dataset.get_cat_ids(idx)) - for cat_id in cat_ids: - label = self.dataset.cat2label[cat_id] - cls_data_inds[label].append(idx) - - g = torch.Generator() - g.manual_seed(0) - cls_ind_stream = iter(RandomDataStream(list(range(self.num_classes)), g)) - cls_data_streams = [None] * self.num_classes - for i, data_inds in enumerate(cls_data_inds): - cls_data_streams[i] = iter(RandomDataStream(data_inds, g)) - - for _ in 
range(max_iter): - cls_idx = next(cls_ind_stream) - img_idx = next(cls_data_streams[cls_idx]) - indices.append(int(img_idx)) - flag.append(self.dataset.flag[img_idx]) - - self.indices = indices - self.flag = np.asarray(flag, dtype=np.uint8) - - def __len__(self): - return len(self.indices) - - def __getitem__(self, idx): - ori_index = self.indices[idx] - return self.dataset[ori_index] diff --git a/spaces/trysem/image-matting-app/ppmatting/models/backbone/vgg.py b/spaces/trysem/image-matting-app/ppmatting/models/backbone/vgg.py deleted file mode 100644 index 64b529bf0c3e25cb82ea4b4c31bec9ef30d2da59..0000000000000000000000000000000000000000 --- a/spaces/trysem/image-matting-app/ppmatting/models/backbone/vgg.py +++ /dev/null @@ -1,166 +0,0 @@ -# Copyright (c) 2021 PaddlePaddle Authors. All Rights Reserved. -# -# Licensed under the Apache License, Version 2.0 (the "License"); -# you may not use this file except in compliance with the License. -# You may obtain a copy of the License at -# -# http://www.apache.org/licenses/LICENSE-2.0 -# -# Unless required by applicable law or agreed to in writing, software -# distributed under the License is distributed on an "AS IS" BASIS, -# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. -# See the License for the specific language governing permissions and -# limitations under the License. - -import paddle -from paddle import ParamAttr -import paddle.nn as nn -import paddle.nn.functional as F -from paddle.nn import Conv2D, BatchNorm, Linear, Dropout -from paddle.nn import AdaptiveAvgPool2D, MaxPool2D, AvgPool2D - -from paddleseg.cvlibs import manager -from paddleseg.utils import utils - - -class ConvBlock(nn.Layer): - def __init__(self, input_channels, output_channels, groups, name=None): - super(ConvBlock, self).__init__() - - self.groups = groups - self._conv_1 = Conv2D( - in_channels=input_channels, - out_channels=output_channels, - kernel_size=3, - stride=1, - padding=1, - weight_attr=ParamAttr(name=name + "1_weights"), - bias_attr=False) - if groups == 2 or groups == 3 or groups == 4: - self._conv_2 = Conv2D( - in_channels=output_channels, - out_channels=output_channels, - kernel_size=3, - stride=1, - padding=1, - weight_attr=ParamAttr(name=name + "2_weights"), - bias_attr=False) - if groups == 3 or groups == 4: - self._conv_3 = Conv2D( - in_channels=output_channels, - out_channels=output_channels, - kernel_size=3, - stride=1, - padding=1, - weight_attr=ParamAttr(name=name + "3_weights"), - bias_attr=False) - if groups == 4: - self._conv_4 = Conv2D( - in_channels=output_channels, - out_channels=output_channels, - kernel_size=3, - stride=1, - padding=1, - weight_attr=ParamAttr(name=name + "4_weights"), - bias_attr=False) - - self._pool = MaxPool2D( - kernel_size=2, stride=2, padding=0, return_mask=True) - - def forward(self, inputs): - x = self._conv_1(inputs) - x = F.relu(x) - if self.groups == 2 or self.groups == 3 or self.groups == 4: - x = self._conv_2(x) - x = F.relu(x) - if self.groups == 3 or self.groups == 4: - x = self._conv_3(x) - x = F.relu(x) - if self.groups == 4: - x = self._conv_4(x) - x = F.relu(x) - skip = x - x, max_indices = self._pool(x) - return x, max_indices, skip - - -class VGGNet(nn.Layer): - def __init__(self, input_channels=3, layers=11, pretrained=None): - super(VGGNet, self).__init__() - self.pretrained = pretrained - - self.layers = layers - self.vgg_configure = { - 11: [1, 1, 2, 2, 2], - 13: [2, 2, 2, 2, 2], - 16: [2, 2, 3, 3, 3], - 19: [2, 2, 4, 4, 4] - } - assert self.layers in 
self.vgg_configure.keys(), \ - "supported layers are {} but input layer is {}".format( - self.vgg_configure.keys(), layers) - self.groups = self.vgg_configure[self.layers] - - # For matting, the first conv layer takes a 4-channel input and is simply zero-initialized - self._conv_block_1 = ConvBlock( - input_channels, 64, self.groups[0], name="conv1_") - self._conv_block_2 = ConvBlock(64, 128, self.groups[1], name="conv2_") - self._conv_block_3 = ConvBlock(128, 256, self.groups[2], name="conv3_") - self._conv_block_4 = ConvBlock(256, 512, self.groups[3], name="conv4_") - self._conv_block_5 = ConvBlock(512, 512, self.groups[4], name="conv5_") - - # This layer should be initialized from the converted parameters of VGG fc6; the initialization can be skipped for now - self._conv_6 = Conv2D( - 512, 512, kernel_size=3, padding=1, bias_attr=False) - - self.init_weight() - - def forward(self, inputs): - fea_list = [] - ids_list = [] - x, ids, skip = self._conv_block_1(inputs) - fea_list.append(skip) - ids_list.append(ids) - x, ids, skip = self._conv_block_2(x) - fea_list.append(skip) - ids_list.append(ids) - x, ids, skip = self._conv_block_3(x) - fea_list.append(skip) - ids_list.append(ids) - x, ids, skip = self._conv_block_4(x) - fea_list.append(skip) - ids_list.append(ids) - x, ids, skip = self._conv_block_5(x) - fea_list.append(skip) - ids_list.append(ids) - x = F.relu(self._conv_6(x)) - fea_list.append(x) - return fea_list - - def init_weight(self): - if self.pretrained is not None: - utils.load_pretrained_model(self, self.pretrained) - - -@manager.BACKBONES.add_component -def VGG11(**args): - model = VGGNet(layers=11, **args) - return model - - -@manager.BACKBONES.add_component -def VGG13(**args): - model = VGGNet(layers=13, **args) - return model - - -@manager.BACKBONES.add_component -def VGG16(**args): - model = VGGNet(layers=16, **args) - return model - - -@manager.BACKBONES.add_component -def VGG19(**args): - model = VGGNet(layers=19, **args) - return model diff --git a/spaces/ttt246/brain/Extension/utils/webserver.js b/spaces/ttt246/brain/Extension/utils/webserver.js deleted file mode 100644 index 90faaa83e5f68158b0809f0195f98e26891a01ee..0000000000000000000000000000000000000000 --- a/spaces/ttt246/brain/Extension/utils/webserver.js +++ /dev/null @@ -1,56 +0,0 @@ -// Do this as the first thing so that any code reading it knows the right env. 
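-// The loop below injects the webpack-dev-server hot-reload client into every entry point except those listed in chromeExtensionBoilerplate.notHotReload.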
-process.env.BABEL_ENV = 'development'; -process.env.NODE_ENV = 'development'; -process.env.ASSET_PATH = '/'; - -var WebpackDevServer = require('webpack-dev-server'), - webpack = require('webpack'), - config = require('../webpack.config'), - env = require('./env'), - path = require('path'); - -var options = config.chromeExtensionBoilerplate || {}; -var excludeEntriesToHotReload = options.notHotReload || []; - -for (var entryName in config.entry) { - if (excludeEntriesToHotReload.indexOf(entryName) === -1) { - config.entry[entryName] = [ - 'webpack/hot/dev-server', - `webpack-dev-server/client?hot=true&hostname=localhost&port=${env.PORT}`, - ].concat(config.entry[entryName]); - } -} - -delete config.chromeExtensionBoilerplate; - -var compiler = webpack(config); - -var server = new WebpackDevServer( - { - https: false, - hot: true, - liveReload: false, - client: { - webSocketTransport: 'sockjs', - }, - webSocketServer: 'sockjs', - host: 'localhost', - port: env.PORT, - static: { - directory: path.join(__dirname, '../build'), - }, - devMiddleware: { - publicPath: `http://localhost:${env.PORT}/`, - writeToDisk: true, - }, - headers: { - 'Access-Control-Allow-Origin': '*', - }, - allowedHosts: 'all', - }, - compiler -); - -(async () => { - await server.start(); -})(); diff --git a/spaces/unb-lamfo-nlp-mcti/nlp-mcti-preprocessing-single/README.md b/spaces/unb-lamfo-nlp-mcti/nlp-mcti-preprocessing-single/README.md deleted file mode 100644 index fd05b4647fd241c0c47ebbf1bc0bb930e41b12e3..0000000000000000000000000000000000000000 --- a/spaces/unb-lamfo-nlp-mcti/nlp-mcti-preprocessing-single/README.md +++ /dev/null @@ -1,13 +0,0 @@ ---- -title: NLP Preprocessing Single -emoji: 🐠 -colorFrom: yellow -colorTo: yellow -sdk: gradio -sdk_version: 3.9.1 -app_file: app.py -pinned: false -license: mit ---- - -Check out the configuration reference at https://huggingface.co/docs/hub/spaces-config-reference diff --git a/spaces/usbethFlerru/sovits-modelsV2/example/Alibaba Aur 40 Chor Hindi 720p Download VERIFIED.md b/spaces/usbethFlerru/sovits-modelsV2/example/Alibaba Aur 40 Chor Hindi 720p Download VERIFIED.md deleted file mode 100644 index 70b81b619b6b0af2791fba62d32e3f7959365bf5..0000000000000000000000000000000000000000 --- a/spaces/usbethFlerru/sovits-modelsV2/example/Alibaba Aur 40 Chor Hindi 720p Download VERIFIED.md +++ /dev/null @@ -1,12 +0,0 @@ -

        Alibaba Aur 40 Chor Hindi 720p Download


Download https://urlcod.com/2uyUCQ



        -
-The Adventures of Ali Baba and the Forty Thieves (1980), Alibaba Aur 40 Chor. Old Bollywood Movie Posters: Fading Art Gallery. -Watch Ali Baba and the Forty Thieves online for free. -All about the film Ali Baba and the Forty Thieves: photos, wallpapers, user comments, sessions. Watch online in good quality. 8a78ff9644
        -
        -
        -

        diff --git a/spaces/user238921933/stable-diffusion-webui/test/__init__.py b/spaces/user238921933/stable-diffusion-webui/test/__init__.py deleted file mode 100644 index e69de29bb2d1d6434b8b29ae775ad8c2e48c5391..0000000000000000000000000000000000000000 diff --git a/spaces/vaishanthr/Simultaneous-Segmented-Depth-Prediction/yolov8/docs/reference/yolo/engine/exporter.md b/spaces/vaishanthr/Simultaneous-Segmented-Depth-Prediction/yolov8/docs/reference/yolo/engine/exporter.md deleted file mode 100644 index 37189d46fadbd0cc6fc85f3db042fdf2531c13de..0000000000000000000000000000000000000000 --- a/spaces/vaishanthr/Simultaneous-Segmented-Depth-Prediction/yolov8/docs/reference/yolo/engine/exporter.md +++ /dev/null @@ -1,34 +0,0 @@ ---- -description: Learn how to export your YOLO model in various formats using Ultralytics' exporter package - iOS, GDC, and more. -keywords: Ultralytics, YOLO, exporter, iOS detect model, gd_outputs, export ---- - -## Exporter ---- -### ::: ultralytics.yolo.engine.exporter.Exporter -

        - -## iOSDetectModel ---- -### ::: ultralytics.yolo.engine.exporter.iOSDetectModel -

        - -## export_formats ---- -### ::: ultralytics.yolo.engine.exporter.export_formats -

        - -## gd_outputs ---- -### ::: ultralytics.yolo.engine.exporter.gd_outputs -

        - -## try_export ---- -### ::: ultralytics.yolo.engine.exporter.try_export -

        - -## export ---- -### ::: ultralytics.yolo.engine.exporter.export -

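The reference stubs above list the exporter's public entry points. As a quick orientation, here is a minimal, hedged sketch of driving them through the `ultralytics` YOLO API (the checkpoint name `yolov8n.pt` is an illustrative assumption):

```python
from ultralytics import YOLO

# Load a pretrained model; Exporter runs behind model.export().
model = YOLO("yolov8n.pt")

# `format` must match one of the rows reported by export_formats(),
# e.g. "onnx", "torchscript", or "coreml" (the iOS detect path).
model.export(format="onnx")
```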
        diff --git a/spaces/vialibre/edia_full_es/app.py b/spaces/vialibre/edia_full_es/app.py deleted file mode 100644 index 542b3fd139547a40a85700928fcc9762aac5e01c..0000000000000000000000000000000000000000 --- a/spaces/vialibre/edia_full_es/app.py +++ /dev/null @@ -1,99 +0,0 @@ -# --- Imports libs --- -import gradio as gr -import pandas as pd -import configparser - - -# --- Imports modules --- -from modules.model_embbeding import Embedding -from modules.module_vocabulary import Vocabulary -from modules.module_languageModel import LanguageModel - - -# --- Imports interfaces --- -from interfaces.interface_WordExplorer import interface as interface_wordExplorer -from interfaces.interface_BiasWordExplorer import interface as interface_biasWordExplorer -from interfaces.interface_data import interface as interface_data -from interfaces.interface_biasPhrase import interface as interface_biasPhrase -from interfaces.interface_crowsPairs import interface as interface_crowsPairs - - -# --- Tool config --- -cfg = configparser.ConfigParser() -cfg.read('tool.cfg') - -LANGUAGE = cfg['INTERFACE']['language'] -EMBEDDINGS_PATH = cfg['WORD_EXPLORER']['embeddings_path'] -NN_METHOD = cfg['WORD_EXPLORER']['nn_method'] -MAX_NEIGHBORS = int(cfg['WORD_EXPLORER']['max_neighbors']) -CONTEXTS_DATASET = cfg['DATA']['contexts_dataset'] -VOCABULARY_SUBSET = cfg['DATA']['vocabulary_subset'] -AVAILABLE_WORDCLOUD = cfg['DATA'].getboolean('available_wordcloud') -LANGUAGE_MODEL = cfg['LMODEL']['language_model'] -AVAILABLE_LOGS = cfg['LOGS'].getboolean('available_logs') - - -# --- Init classes --- -embedding = Embedding( - path=EMBEDDINGS_PATH, - limit=100000, - randomizedPCA=False, - max_neighbors=MAX_NEIGHBORS, - nn_method=NN_METHOD -) -vocabulary = Vocabulary( - subset_name=VOCABULARY_SUBSET -) -beto_lm = LanguageModel( - model_name=LANGUAGE_MODEL -) -labels = pd.read_json(f"language/{LANGUAGE}.json")["app"] - - -# --- Main App --- -INTERFACE_LIST = [ - interface_biasWordExplorer( - embedding=embedding, - available_logs=AVAILABLE_LOGS, - lang=LANGUAGE), - interface_wordExplorer( - embedding=embedding, - available_logs=AVAILABLE_LOGS, - max_neighbors=MAX_NEIGHBORS, - lang=LANGUAGE), - interface_data( - vocabulary=vocabulary, - contexts=CONTEXTS_DATASET, - available_logs=AVAILABLE_LOGS, - available_wordcloud=AVAILABLE_WORDCLOUD, - lang=LANGUAGE), - interface_biasPhrase( - language_model=beto_lm, - available_logs=AVAILABLE_LOGS, - lang=LANGUAGE), - interface_crowsPairs( - language_model=beto_lm, - available_logs=AVAILABLE_LOGS, - lang=LANGUAGE), -] - -TAB_NAMES = [ - labels["biasWordExplorer"], - labels["wordExplorer"], - labels["dataExplorer"], - labels["phraseExplorer"], - labels["crowsPairsExplorer"] -] - -if LANGUAGE != 'es': - # Skip data tab when using other than spanish language - INTERFACE_LIST = INTERFACE_LIST[:2] + INTERFACE_LIST[3:] - TAB_NAMES = TAB_NAMES[:2] + TAB_NAMES[3:] - -iface = gr.TabbedInterface( - interface_list= INTERFACE_LIST, - tab_names=TAB_NAMES -) - -iface.queue(concurrency_count=8) -iface.launch(debug=False) diff --git a/spaces/video-p2p-library/Video-P2P-Demo/trainer.py b/spaces/video-p2p-library/Video-P2P-Demo/trainer.py deleted file mode 100644 index fafedbd859aa17198175b4198928d392d690b948..0000000000000000000000000000000000000000 --- a/spaces/video-p2p-library/Video-P2P-Demo/trainer.py +++ /dev/null @@ -1,322 +0,0 @@ -from __future__ import annotations - -import datetime -import os -import pathlib -import shlex -import shutil -import subprocess -import sys - -import gradio as gr -import 
slugify -import torch -import huggingface_hub -from huggingface_hub import HfApi -from omegaconf import OmegaConf - -from app_upload import ModelUploader -from utils import save_model_card - -sys.path.append('Tune-A-Video') -sys.path.append('Video-P2P') - -URL_TO_JOIN_MODEL_LIBRARY_ORG = 'https://huggingface.co/organizations/video-p2p-library/share/pZwQaStCpdmMCGLURsMhMkEpvIlsdMdnkk' -ORIGINAL_SPACE_ID = 'video-p2p-library/Video-P2P-Demo' -SPACE_ID = os.getenv('SPACE_ID', ORIGINAL_SPACE_ID) - - -class Trainer: - def __init__(self, hf_token: str | None = None): - self.hf_token = hf_token - self.model_uploader = ModelUploader(hf_token) - - self.checkpoint_dir = pathlib.Path('checkpoints') - self.checkpoint_dir.mkdir(exist_ok=True) - - def download_base_model(self, base_model_id: str, token=None) -> str: - model_dir = self.checkpoint_dir / base_model_id - if not model_dir.exists(): - org_name = base_model_id.split('/')[0] - org_dir = self.checkpoint_dir / org_name - org_dir.mkdir(exist_ok=True) - print(f'https://huggingface.co/{base_model_id}') - if token == None: - subprocess.run(shlex.split( - f'git clone https://huggingface.co/{base_model_id}'), - cwd=org_dir) - return model_dir.as_posix() - else: - temp_path = huggingface_hub.snapshot_download(base_model_id, use_auth_token=token) - print(temp_path, org_dir) - # subprocess.run(shlex.split(f'mv {temp_path} {model_dir.as_posix()}')) - # return model_dir.as_posix() - return temp_path - - def join_model_library_org(self, token: str) -> None: - subprocess.run( - shlex.split( - f'curl -X POST -H "Authorization: Bearer {token}" -H "Content-Type: application/json" {URL_TO_JOIN_MODEL_LIBRARY_ORG}' - )) - - def run( - self, - training_video: str, - training_prompt: str, - output_model_name: str, - overwrite_existing_model: bool, - validation_prompt: str, - base_model: str, - resolution_s: str, - n_steps: int, - learning_rate: float, - gradient_accumulation: int, - seed: int, - fp16: bool, - use_8bit_adam: bool, - checkpointing_steps: int, - validation_epochs: int, - upload_to_hub: bool, - use_private_repo: bool, - delete_existing_repo: bool, - upload_to: str, - remove_gpu_after_training: bool, - input_token: str, - blend_word_1: str, - blend_word_2: str, - eq_params_1: str, - eq_params_2: str, - ) -> str: - # if SPACE_ID == ORIGINAL_SPACE_ID: - # raise gr.Error( - # 'This Space does not work on this Shared UI. 
Duplicate the Space and attribute a GPU' - # ) - if not torch.cuda.is_available(): - raise gr.Error('CUDA is not available.') - if training_video is None: - raise gr.Error('You need to upload a video.') - if not training_prompt: - raise gr.Error('The training prompt is missing.') - if not validation_prompt: - raise gr.Error('The validation prompt is missing.') - - resolution = int(resolution_s) - - if not output_model_name: - timestamp = datetime.datetime.now().strftime('%Y-%m-%d-%H-%M-%S') - output_model_name = f'video-p2p-{timestamp}' - output_model_name = slugify.slugify(output_model_name) - - repo_dir = pathlib.Path(__file__).parent - output_dir = repo_dir / 'experiments' / output_model_name - if overwrite_existing_model or upload_to_hub: - shutil.rmtree(output_dir, ignore_errors=True) - output_dir.mkdir(parents=True) - - if upload_to_hub: - self.join_model_library_org( - self.hf_token if self.hf_token else input_token) - - config = OmegaConf.load('Video-P2P/configs/man-skiing.yaml') - config.pretrained_model_path = self.download_base_model(base_model) - config.output_dir = output_dir.as_posix() - config.train_data.video_path = training_video.name # type: ignore - config.train_data.prompt = training_prompt - config.train_data.n_sample_frames = 8 - config.train_data.width = resolution - config.train_data.height = resolution - config.train_data.sample_start_idx = 0 - config.train_data.sample_frame_rate = 1 - config.validation_data.prompts = [validation_prompt] - config.validation_data.video_length = 8 - config.validation_data.width = resolution - config.validation_data.height = resolution - config.validation_data.num_inference_steps = 50 - config.validation_data.guidance_scale = 7.5 - config.learning_rate = learning_rate - config.gradient_accumulation_steps = gradient_accumulation - config.train_batch_size = 1 - config.max_train_steps = n_steps - config.checkpointing_steps = checkpointing_steps - config.validation_steps = validation_epochs - config.seed = seed - config.mixed_precision = 'fp16' if fp16 else '' - config.use_8bit_adam = use_8bit_adam - config.prompts = [training_prompt, validation_prompt] - config.blend_word = [blend_word_1, blend_word_2] - config.eq_params = {"words":[eq_params_1], "values":[int(eq_params_2)]} - if len(validation_prompt) == len(training_prompt): - config.is_word_swap = True - else: - config.is_word_swap = False - - config_path = output_dir / 'config.yaml' - with open(config_path, 'w') as f: - OmegaConf.save(config, f) - - command = f'accelerate launch Video-P2P/run_tuning.py --config {config_path}' - subprocess.run(shlex.split(command)) - # command = f'python Video-P2P/run_videop2p.py --config {config_path}' - # subprocess.run(shlex.split(command)) - save_model_card(save_dir=output_dir, - base_model=base_model, - training_prompt=training_prompt, - test_prompt=validation_prompt, - test_image_dir='results') - - message = 'Training completed!' 
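- # After training, the experiment folder can optionally be pushed to the Hub, and the Space can be switched back to CPU hardware (both handled below).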
- print(message) - - if upload_to_hub: - upload_message = self.model_uploader.upload_model( - folder_path=output_dir.as_posix(), - repo_name=output_model_name, - upload_to=upload_to, - private=use_private_repo, - delete_existing_repo=delete_existing_repo, - input_token=input_token) - print(upload_message) - message = message + '\n' + upload_message - - if remove_gpu_after_training: - space_id = os.getenv('SPACE_ID') - if space_id: - api = HfApi( - token=self.hf_token if self.hf_token else input_token) - api.request_space_hardware(repo_id=space_id, - hardware='cpu-basic') - - return message - - - def run_p2p( - self, - training_video: str, - training_prompt: str, - output_model_name: str, - overwrite_existing_model: bool, - validation_prompt: str, - base_model: str, - resolution_s: str, - n_steps: int, - learning_rate: float, - gradient_accumulation: int, - seed: int, - fp16: bool, - use_8bit_adam: bool, - checkpointing_steps: int, - validation_epochs: int, - upload_to_hub: bool, - use_private_repo: bool, - delete_existing_repo: bool, - upload_to: str, - remove_gpu_after_training: bool, - input_token: str, - blend_word_1: str, - blend_word_2: str, - eq_params_1: str, - eq_params_2: str, - tuned_model: str, - cross_replace: float, - ) -> str: - # if SPACE_ID == ORIGINAL_SPACE_ID: - # raise gr.Error( - # 'This Space does not work on this Shared UI. Duplicate the Space and attribute a GPU' - # ) - if not torch.cuda.is_available(): - raise gr.Error('CUDA is not available.') - if training_video is None: - raise gr.Error('You need to upload a video.') - if not training_prompt: - raise gr.Error('The training prompt is missing.') - if not validation_prompt: - raise gr.Error('The validation prompt is missing.') - - resolution = int(resolution_s) - - if not output_model_name: - timestamp = datetime.datetime.now().strftime('%Y-%m-%d-%H-%M-%S') - output_model_name = f'video-p2p-{timestamp}' - output_model_name = slugify.slugify(output_model_name) - - repo_dir = pathlib.Path(__file__).parent - output_dir = repo_dir / 'experiments' / output_model_name - if overwrite_existing_model or upload_to_hub: - shutil.rmtree(output_dir, ignore_errors=True) - output_dir.mkdir(parents=True) - - if upload_to_hub: - self.join_model_library_org( - self.hf_token if self.hf_token else input_token) - - config = OmegaConf.load('Video-P2P/configs/man-skiing.yaml') - config.pretrained_model_path = self.download_base_model(tuned_model, token=input_token) - config.output_dir = output_dir.as_posix() - config.train_data.video_path = training_video.name # type: ignore - config.train_data.prompt = training_prompt - config.train_data.n_sample_frames = 8 - config.train_data.width = resolution - config.train_data.height = resolution - config.train_data.sample_start_idx = 0 - config.train_data.sample_frame_rate = 1 - config.validation_data.prompts = [validation_prompt] - config.validation_data.video_length = 8 - config.validation_data.width = resolution - config.validation_data.height = resolution - config.validation_data.num_inference_steps = 50 - config.validation_data.guidance_scale = 7.5 - config.learning_rate = learning_rate - config.gradient_accumulation_steps = gradient_accumulation - config.train_batch_size = 1 - config.max_train_steps = n_steps - config.checkpointing_steps = checkpointing_steps - config.validation_steps = validation_epochs - config.seed = seed - config.mixed_precision = 'fp16' if fp16 else '' - config.use_8bit_adam = use_8bit_adam - config.prompts = [training_prompt, validation_prompt] - config.blend_word = 
[blend_word_1, blend_word_2] - config.eq_params = {"words":[eq_params_1], "values":[int(eq_params_2)]} - if len(validation_prompt) == len(training_prompt): - config.is_word_swap = True - else: - config.is_word_swap = False - config.cross_replace_steps = cross_replace - - config_path = output_dir / 'config.yaml' - with open(config_path, 'w') as f: - OmegaConf.save(config, f) - - # command = f'accelerate launch Video-P2P/run_tuning.py --config {config_path}' - # subprocess.run(shlex.split(command)) - command = f'python Video-P2P/run_videop2p.py --config {config_path}' - subprocess.run(shlex.split(command)) - save_model_card(save_dir=output_dir, - base_model=base_model, - training_prompt=training_prompt, - test_prompt=validation_prompt, - test_image_dir='results') - - message = 'Training completed!' - print(message) - - if upload_to_hub: - upload_message = self.model_uploader.upload_model( - folder_path=output_dir.as_posix(), - repo_name=output_model_name, - upload_to=upload_to, - private=use_private_repo, - delete_existing_repo=delete_existing_repo, - input_token=input_token) - print(upload_message) - message = message + '\n' + upload_message - - if remove_gpu_after_training: - space_id = os.getenv('SPACE_ID') - if space_id: - api = HfApi( - token=self.hf_token if self.hf_token else input_token) - api.request_space_hardware(repo_id=space_id, - hardware='cpu-basic') - - return message diff --git a/spaces/vijv/VV-04-GR-Seq-2-Seq-QA-Auto-Gen/qasrl_model_pipeline.py b/spaces/vijv/VV-04-GR-Seq-2-Seq-QA-Auto-Gen/qasrl_model_pipeline.py deleted file mode 100644 index 50135f76849bc8537fcae83b72532da661487da6..0000000000000000000000000000000000000000 --- a/spaces/vijv/VV-04-GR-Seq-2-Seq-QA-Auto-Gen/qasrl_model_pipeline.py +++ /dev/null @@ -1,183 +0,0 @@ -from typing import Optional -import json -from argparse import Namespace -from pathlib import Path -from transformers import Text2TextGenerationPipeline, AutoModelForSeq2SeqLM, AutoTokenizer - -def get_markers_for_model(is_t5_model: bool) -> Namespace: - special_tokens_constants = Namespace() - if is_t5_model: - # T5 model have 100 special tokens by default - special_tokens_constants.separator_input_question_predicate = "" - special_tokens_constants.separator_output_answers = "" - special_tokens_constants.separator_output_questions = "" # if using only questions - special_tokens_constants.separator_output_question_answer = "" - special_tokens_constants.separator_output_pairs = "" - special_tokens_constants.predicate_generic_marker = "" - special_tokens_constants.predicate_verb_marker = "" - special_tokens_constants.predicate_nominalization_marker = "" - - else: - special_tokens_constants.separator_input_question_predicate = "" - special_tokens_constants.separator_output_answers = "" - special_tokens_constants.separator_output_questions = "" # if using only questions - special_tokens_constants.separator_output_question_answer = "" - special_tokens_constants.separator_output_pairs = "" - special_tokens_constants.predicate_generic_marker = "" - special_tokens_constants.predicate_verb_marker = "" - special_tokens_constants.predicate_nominalization_marker = "" - return special_tokens_constants - -def load_trained_model(name_or_path): - import huggingface_hub as HFhub - tokenizer = AutoTokenizer.from_pretrained(name_or_path) - model = AutoModelForSeq2SeqLM.from_pretrained(name_or_path) - # load preprocessing_kwargs from the model repo on HF hub, or from the local model directory - kwargs_filename = None - if name_or_path.startswith("kleinay/"): # and 
'preprocessing_kwargs.json' in HFhub.list_repo_files(name_or_path): # the supported version of HFhub doesn't support list_repo_files - kwargs_filename = HFhub.hf_hub_download(repo_id=name_or_path, filename="preprocessing_kwargs.json") - elif Path(name_or_path).is_dir() and (Path(name_or_path) / "experiment_kwargs.json").exists(): - kwargs_filename = Path(name_or_path) / "experiment_kwargs.json" - - if kwargs_filename: - preprocessing_kwargs = json.load(open(kwargs_filename)) - # integrate into model.config (for decoding args, e.g. "num_beams"), and save also as standalone object for preprocessing - model.config.preprocessing_kwargs = Namespace(**preprocessing_kwargs) - model.config.update(preprocessing_kwargs) - return model, tokenizer - - -class QASRL_Pipeline(Text2TextGenerationPipeline): - def __init__(self, model_repo: str, **kwargs): - model, tokenizer = load_trained_model(model_repo) - super().__init__(model, tokenizer, framework="pt") - self.is_t5_model = "t5" in model.config.model_type - self.special_tokens = get_markers_for_model(self.is_t5_model) - self.data_args = model.config.preprocessing_kwargs - # backward compatibility - default keyword values implemeted in `run_summarization`, thus not saved in `preprocessing_kwargs` - if "predicate_marker_type" not in vars(self.data_args): - self.data_args.predicate_marker_type = "generic" - if "use_bilateral_predicate_marker" not in vars(self.data_args): - self.data_args.use_bilateral_predicate_marker = True - if "append_verb_form" not in vars(self.data_args): - self.data_args.append_verb_form = True - self._update_config(**kwargs) - - def _update_config(self, **kwargs): - " Update self.model.config with initialization parameters and necessary defaults. " - # set default values that will always override model.config, but can overriden by __init__ kwargs - kwargs["max_length"] = kwargs.get("max_length", 80) - # override model.config with kwargs - for k,v in kwargs.items(): - self.model.config.__dict__[k] = v - - def _sanitize_parameters(self, **kwargs): - preprocess_kwargs, forward_kwargs, postprocess_kwargs = {}, {}, {} - if "predicate_marker" in kwargs: - preprocess_kwargs["predicate_marker"] = kwargs["predicate_marker"] - if "predicate_type" in kwargs: - preprocess_kwargs["predicate_type"] = kwargs["predicate_type"] - if "verb_form" in kwargs: - preprocess_kwargs["verb_form"] = kwargs["verb_form"] - return preprocess_kwargs, forward_kwargs, postprocess_kwargs - - def preprocess(self, inputs, predicate_marker="", predicate_type=None, verb_form=None): - # Here, inputs is string or list of strings; apply string postprocessing - if isinstance(inputs, str): - processed_inputs = self._preprocess_string(inputs, predicate_marker, predicate_type, verb_form) - elif hasattr(inputs, "__iter__"): - processed_inputs = [self._preprocess_string(s, predicate_marker, predicate_type, verb_form) for s in inputs] - else: - raise ValueError("inputs must be str or Iterable[str]") - # Now pass to super.preprocess for tokenization - return super().preprocess(processed_inputs) - - def _preprocess_string(self, seq: str, predicate_marker: str, predicate_type: Optional[str], verb_form: Optional[str]) -> str: - sent_tokens = seq.split(" ") - assert predicate_marker in sent_tokens, f"Input sentence must include a predicate-marker token ('{predicate_marker}') before the target predicate word" - predicate_idx = sent_tokens.index(predicate_marker) - sent_tokens.remove(predicate_marker) - sentence_before_predicate = " ".join([sent_tokens[i] for i in 
range(predicate_idx)]) - predicate = sent_tokens[predicate_idx] - sentence_after_predicate = " ".join([sent_tokens[i] for i in range(predicate_idx+1, len(sent_tokens))]) - - if self.data_args.predicate_marker_type == "generic": - predicate_marker = self.special_tokens.predicate_generic_marker - # In case we want special marker for each predicate type: """ - elif self.data_args.predicate_marker_type == "pred_type": - assert predicate_type is not None, "For this model, you must provide the `predicate_type` either when initializing QASRL_Pipeline(...) or when applying __call__(...) on it" - assert predicate_type in ("verbal", "nominal"), f"`predicate_type` must be either 'verbal' or 'nominal'; got '{predicate_type}'" - predicate_marker = {"verbal": self.special_tokens.predicate_verb_marker , - "nominal": self.special_tokens.predicate_nominalization_marker - }[predicate_type] - - if self.data_args.use_bilateral_predicate_marker: - seq = f"{sentence_before_predicate} {predicate_marker} {predicate} {predicate_marker} {sentence_after_predicate}" - else: - seq = f"{sentence_before_predicate} {predicate_marker} {predicate} {sentence_after_predicate}" - - # embed also verb_form - if self.data_args.append_verb_form and verb_form is None: - raise ValueError(f"For this model, you must provide the `verb_form` of the predicate when applying __call__(...)") - elif self.data_args.append_verb_form: - seq = f"{seq} {self.special_tokens.separator_input_question_predicate} {verb_form} " - else: - seq = f"{seq} " - - # append source prefix (for t5 models) - prefix = self._get_source_prefix(predicate_type) - - return prefix + seq - - def _get_source_prefix(self, predicate_type: Optional[str]): - if not self.is_t5_model or self.data_args.source_prefix is None: - return '' - if not self.data_args.source_prefix.startswith("<"): # Regular prefix - not dependent on input row x - return self.data_args.source_prefix - if self.data_args.source_prefix == "": - if predicate_type is None: - raise ValueError("source_prefix is '' but input no `predicate_type`.") - else: - return f"Generate QAs for {predicate_type} QASRL: " - - def _forward(self, *args, **kwargs): - outputs = super()._forward(*args, **kwargs) - return outputs - - - def postprocess(self, model_outputs): - output_seq = self.tokenizer.decode( - model_outputs["output_ids"].squeeze(), - skip_special_tokens=False, - clean_up_tokenization_spaces=False, - ) - output_seq = output_seq.strip(self.tokenizer.pad_token).strip(self.tokenizer.eos_token).strip() - qa_subseqs = output_seq.split(self.special_tokens.separator_output_pairs) - qas = [self._postrocess_qa(qa_subseq) for qa_subseq in qa_subseqs] - return {"generated_text": output_seq, - "QAs": qas} - - def _postrocess_qa(self, seq: str) -> str: - # split question and answers - if self.special_tokens.separator_output_question_answer in seq: - question, answer = seq.split(self.special_tokens.separator_output_question_answer)[:2] - else: - print("invalid format: no separator between question and answer found...") - return None - # question, answer = seq, '' # Or: backoff to only question - # skip "_" slots in questions - question = ' '.join(t for t in question.split(' ') if t != '_') - answers = [a.strip() for a in answer.split(self.special_tokens.separator_output_answers)] - return {"question": question, "answers": answers} - - -if __name__ == "__main__": - pipe = QASRL_Pipeline("kleinay/qanom-seq2seq-model-baseline") - res1 = pipe("The student was interested in Luke 's research about sea animals .", 
verb_form="research", predicate_type="nominal") - res2 = pipe(["The doctor was interested in Luke 's treatment .", - "The Veterinary student was interested in Luke 's treatment of sea animals ."], verb_form="treat", predicate_type="nominal", num_beams=10) - res3 = pipe("A number of professions have developed that specialize in the treatment of mental disorders .", verb_form="develop", predicate_type="verbal") - print(res1) - print(res2) - print(res3) - \ No newline at end of file diff --git a/spaces/vinthony/SadTalker/src/face3d/options/base_options.py b/spaces/vinthony/SadTalker/src/face3d/options/base_options.py deleted file mode 100644 index d8f921d5a43434ae802a55a0fa3889c4b7ab9f6d..0000000000000000000000000000000000000000 --- a/spaces/vinthony/SadTalker/src/face3d/options/base_options.py +++ /dev/null @@ -1,169 +0,0 @@ -"""This script contains base options for Deep3DFaceRecon_pytorch -""" - -import argparse -import os -from util import util -import numpy as np -import torch -import face3d.models as models -import face3d.data as data - - -class BaseOptions(): - """This class defines options used during both training and test time. - - It also implements several helper functions such as parsing, printing, and saving the options. - It also gathers additional options defined in functions in both dataset class and model class. - """ - - def __init__(self, cmd_line=None): - """Reset the class; indicates the class hasn't been initailized""" - self.initialized = False - self.cmd_line = None - if cmd_line is not None: - self.cmd_line = cmd_line.split() - - def initialize(self, parser): - """Define the common options that are used in both training and test.""" - # basic parameters - parser.add_argument('--name', type=str, default='face_recon', help='name of the experiment. It decides where to store samples and models') - parser.add_argument('--gpu_ids', type=str, default='0', help='gpu ids: e.g. 0 0,1,2, 0,2. use -1 for CPU') - parser.add_argument('--checkpoints_dir', type=str, default='./checkpoints', help='models are saved here') - parser.add_argument('--vis_batch_nums', type=float, default=1, help='batch nums of images for visulization') - parser.add_argument('--eval_batch_nums', type=float, default=float('inf'), help='batch nums of images for evaluation') - parser.add_argument('--use_ddp', type=util.str2bool, nargs='?', const=True, default=True, help='whether use distributed data parallel') - parser.add_argument('--ddp_port', type=str, default='12355', help='ddp port') - parser.add_argument('--display_per_batch', type=util.str2bool, nargs='?', const=True, default=True, help='whether use batch to show losses') - parser.add_argument('--add_image', type=util.str2bool, nargs='?', const=True, default=True, help='whether add image to tensorboard') - parser.add_argument('--world_size', type=int, default=1, help='batch nums of images for evaluation') - - # model parameters - parser.add_argument('--model', type=str, default='facerecon', help='chooses which model to use.') - - # additional parameters - parser.add_argument('--epoch', type=str, default='latest', help='which epoch to load? 
set to latest to use latest cached model') - parser.add_argument('--verbose', action='store_true', help='if specified, print more debugging information') - parser.add_argument('--suffix', default='', type=str, help='customized suffix: opt.name = opt.name + suffix: e.g., {model}_{netG}_size{load_size}') - - self.initialized = True - return parser - - def gather_options(self): - """Initialize our parser with basic options(only once). - Add additional model-specific and dataset-specific options. - These options are defined in the function - in model and dataset classes. - """ - if not self.initialized: # check if it has been initialized - parser = argparse.ArgumentParser(formatter_class=argparse.ArgumentDefaultsHelpFormatter) - parser = self.initialize(parser) - - # get the basic options - if self.cmd_line is None: - opt, _ = parser.parse_known_args() - else: - opt, _ = parser.parse_known_args(self.cmd_line) - - # set cuda visible devices - os.environ['CUDA_VISIBLE_DEVICES'] = opt.gpu_ids - - # modify model-related parser options - model_name = opt.model - model_option_setter = models.get_option_setter(model_name) - parser = model_option_setter(parser, self.isTrain) - if self.cmd_line is None: - opt, _ = parser.parse_known_args() # parse again with new defaults - else: - opt, _ = parser.parse_known_args(self.cmd_line) # parse again with new defaults - - # modify dataset-related parser options - if opt.dataset_mode: - dataset_name = opt.dataset_mode - dataset_option_setter = data.get_option_setter(dataset_name) - parser = dataset_option_setter(parser, self.isTrain) - - # save and return the parser - self.parser = parser - if self.cmd_line is None: - return parser.parse_args() - else: - return parser.parse_args(self.cmd_line) - - def print_options(self, opt): - """Print and save options - - It will print both current options and default values(if different). 
- It will save options into a text file / [checkpoints_dir] / opt.txt - """ - message = '' - message += '----------------- Options ---------------\n' - for k, v in sorted(vars(opt).items()): - comment = '' - default = self.parser.get_default(k) - if v != default: - comment = '\t[default: %s]' % str(default) - message += '{:>25}: {:<30}{}\n'.format(str(k), str(v), comment) - message += '----------------- End -------------------' - print(message) - - # save to the disk - expr_dir = os.path.join(opt.checkpoints_dir, opt.name) - util.mkdirs(expr_dir) - file_name = os.path.join(expr_dir, '{}_opt.txt'.format(opt.phase)) - try: - with open(file_name, 'wt') as opt_file: - opt_file.write(message) - opt_file.write('\n') - except PermissionError as error: - print("permission error {}".format(error)) - pass - - def parse(self): - """Parse our options, create checkpoints directory suffix, and set up gpu device.""" - opt = self.gather_options() - opt.isTrain = self.isTrain # train or test - - # process opt.suffix - if opt.suffix: - suffix = ('_' + opt.suffix.format(**vars(opt))) if opt.suffix != '' else '' - opt.name = opt.name + suffix - - - # set gpu ids - str_ids = opt.gpu_ids.split(',') - gpu_ids = [] - for str_id in str_ids: - id = int(str_id) - if id >= 0: - gpu_ids.append(id) - opt.world_size = len(gpu_ids) - # if len(opt.gpu_ids) > 0: - # torch.cuda.set_device(gpu_ids[0]) - if opt.world_size == 1: - opt.use_ddp = False - - if opt.phase != 'test': - # set continue_train automatically - if opt.pretrained_name is None: - model_dir = os.path.join(opt.checkpoints_dir, opt.name) - else: - model_dir = os.path.join(opt.checkpoints_dir, opt.pretrained_name) - if os.path.isdir(model_dir): - model_pths = [i for i in os.listdir(model_dir) if i.endswith('pth')] - if os.path.isdir(model_dir) and len(model_pths) != 0: - opt.continue_train= True - - # update the latest epoch count - if opt.continue_train: - if opt.epoch == 'latest': - epoch_counts = [int(i.split('.')[0].split('_')[-1]) for i in model_pths if 'latest' not in i] - if len(epoch_counts) != 0: - opt.epoch_count = max(epoch_counts) + 1 - else: - opt.epoch_count = int(opt.epoch) + 1 - - - self.print_options(opt) - self.opt = opt - return self.opt diff --git a/spaces/vonbarnekowa/stable-diffusion/share_btn.py b/spaces/vonbarnekowa/stable-diffusion/share_btn.py deleted file mode 100644 index 4c9aa8a91b1d0f86746fb118c19b03df86d424a3..0000000000000000000000000000000000000000 --- a/spaces/vonbarnekowa/stable-diffusion/share_btn.py +++ /dev/null @@ -1,60 +0,0 @@ -community_icon_html = """""" - -loading_icon_html = """""" - -share_js = """async () => { - async function uploadFile(file){ - const UPLOAD_URL = 'https://huggingface.co/uploads'; - const response = await fetch(UPLOAD_URL, { - method: 'POST', - headers: { - 'Content-Type': file.type, - 'X-Requested-With': 'XMLHttpRequest', - }, - body: file, /// <- File inherits from Blob - }); - const url = await response.text(); - return url; - } - const gradioEl = document.querySelector('body > gradio-app'); - const imgEls = gradioEl.querySelectorAll('#gallery img'); - const promptTxt = gradioEl.querySelector('#prompt-text-input input').value; - const shareBtnEl = gradioEl.querySelector('#share-btn'); - const shareIconEl = gradioEl.querySelector('#share-btn-share-icon'); - const loadingIconEl = gradioEl.querySelector('#share-btn-loading-icon'); - if(!imgEls.length){ - return; - }; - shareBtnEl.style.pointerEvents = 'none'; - shareIconEl.style.display = 'none'; - loadingIconEl.style.removeProperty('display'); - 
const files = await Promise.all( - [...imgEls].map(async (imgEl) => { - const res = await fetch(imgEl.src); - const blob = await res.blob(); - const imgId = Date.now() % 200; - const fileName = `diffuse-the-rest-${imgId}.jpg`; - return new File([blob], fileName, { type: 'image/jpeg' }); - }) - ); - const urls = await Promise.all(files.map((f) => uploadFile(f))); - const htmlImgs = urls.map(url => `<img src='${url}' width='400' height='400'>`); - const descriptionMd = `<div style="display: flex; flex-wrap: wrap; column-gap: 0.75rem;">
-${htmlImgs.join(`\n`)} -</div>
        `; - const params = new URLSearchParams({ - title: promptTxt, - description: descriptionMd, - }); - const paramsStr = params.toString(); - window.open(`https://huggingface.co/spaces/stabilityai/stable-diffusion/discussions/new?${paramsStr}`, '_blank'); - shareBtnEl.style.removeProperty('pointer-events'); - shareIconEl.style.removeProperty('display'); - loadingIconEl.style.display = 'none'; -}""" \ No newline at end of file diff --git a/spaces/vroy02243/ML/aap.py b/spaces/vroy02243/ML/aap.py deleted file mode 100644 index a699bc5b3c2e987102ca93e0ee28d601e0a93d02..0000000000000000000000000000000000000000 --- a/spaces/vroy02243/ML/aap.py +++ /dev/null @@ -1,7 +0,0 @@ -import gradio as gr - -def greet(name): - return "Hello " + name + "!!" - -iface = gr.Interface(fn=greet, inputs="text", outputs="text") -iface.launch() \ No newline at end of file diff --git a/spaces/vumichien/canvas_controlnet/annotator/uniformer/mmcv/ops/furthest_point_sample.py b/spaces/vumichien/canvas_controlnet/annotator/uniformer/mmcv/ops/furthest_point_sample.py deleted file mode 100644 index 374b7a878f1972c183941af28ba1df216ac1a60f..0000000000000000000000000000000000000000 --- a/spaces/vumichien/canvas_controlnet/annotator/uniformer/mmcv/ops/furthest_point_sample.py +++ /dev/null @@ -1,83 +0,0 @@ -import torch -from torch.autograd import Function - -from ..utils import ext_loader - -ext_module = ext_loader.load_ext('_ext', [ - 'furthest_point_sampling_forward', - 'furthest_point_sampling_with_dist_forward' -]) - - -class FurthestPointSampling(Function): - """Uses iterative furthest point sampling to select a set of features whose - corresponding points have the furthest distance.""" - - @staticmethod - def forward(ctx, points_xyz: torch.Tensor, - num_points: int) -> torch.Tensor: - """ - Args: - points_xyz (Tensor): (B, N, 3) where N > num_points. - num_points (int): Number of points in the sampled set. - - Returns: - Tensor: (B, num_points) indices of the sampled points. - """ - assert points_xyz.is_contiguous() - - B, N = points_xyz.size()[:2] - output = torch.cuda.IntTensor(B, num_points) - temp = torch.cuda.FloatTensor(B, N).fill_(1e10) - - ext_module.furthest_point_sampling_forward( - points_xyz, - temp, - output, - b=B, - n=N, - m=num_points, - ) - if torch.__version__ != 'parrots': - ctx.mark_non_differentiable(output) - return output - - @staticmethod - def backward(xyz, a=None): - return None, None - - -class FurthestPointSamplingWithDist(Function): - """Uses iterative furthest point sampling to select a set of features whose - corresponding points have the furthest distance.""" - - @staticmethod - def forward(ctx, points_dist: torch.Tensor, - num_points: int) -> torch.Tensor: - """ - Args: - points_dist (Tensor): (B, N, N) Distance between each point pair. - num_points (int): Number of points in the sampled set. - - Returns: - Tensor: (B, num_points) indices of the sampled points. 
- """ - assert points_dist.is_contiguous() - - B, N, _ = points_dist.size() - output = points_dist.new_zeros([B, num_points], dtype=torch.int32) - temp = points_dist.new_zeros([B, N]).fill_(1e10) - - ext_module.furthest_point_sampling_with_dist_forward( - points_dist, temp, output, b=B, n=N, m=num_points) - if torch.__version__ != 'parrots': - ctx.mark_non_differentiable(output) - return output - - @staticmethod - def backward(xyz, a=None): - return None, None - - -furthest_point_sample = FurthestPointSampling.apply -furthest_point_sample_with_dist = FurthestPointSamplingWithDist.apply diff --git a/spaces/wanghaha13/ChuanhuChatGPT/llama_func.py b/spaces/wanghaha13/ChuanhuChatGPT/llama_func.py deleted file mode 100644 index c71027dd4e6f99c0c12626cbbf276f407877be04..0000000000000000000000000000000000000000 --- a/spaces/wanghaha13/ChuanhuChatGPT/llama_func.py +++ /dev/null @@ -1,192 +0,0 @@ -import os -import logging - -from llama_index import GPTSimpleVectorIndex -from llama_index import download_loader -from llama_index import ( - Document, - LLMPredictor, - PromptHelper, - QuestionAnswerPrompt, - RefinePrompt, -) -from langchain.llms import OpenAI -import colorama - - -from presets import * -from utils import * - - -def get_documents(file_src): - documents = [] - index_name = "" - logging.debug("Loading documents...") - logging.debug(f"file_src: {file_src}") - for file in file_src: - logging.debug(f"file: {file.name}") - index_name += file.name - if os.path.splitext(file.name)[1] == ".pdf": - logging.debug("Loading PDF...") - CJKPDFReader = download_loader("CJKPDFReader") - loader = CJKPDFReader() - documents += loader.load_data(file=file.name) - elif os.path.splitext(file.name)[1] == ".docx": - logging.debug("Loading DOCX...") - DocxReader = download_loader("DocxReader") - loader = DocxReader() - documents += loader.load_data(file=file.name) - elif os.path.splitext(file.name)[1] == ".epub": - logging.debug("Loading EPUB...") - EpubReader = download_loader("EpubReader") - loader = EpubReader() - documents += loader.load_data(file=file.name) - else: - logging.debug("Loading text file...") - with open(file.name, "r", encoding="utf-8") as f: - text = add_space(f.read()) - documents += [Document(text)] - index_name = sha1sum(index_name) - return documents, index_name - - -def construct_index( - api_key, - file_src, - max_input_size=4096, - num_outputs=1, - max_chunk_overlap=20, - chunk_size_limit=600, - embedding_limit=None, - separator=" ", - num_children=10, - max_keywords_per_chunk=10, -): - os.environ["OPENAI_API_KEY"] = api_key - chunk_size_limit = None if chunk_size_limit == 0 else chunk_size_limit - embedding_limit = None if embedding_limit == 0 else embedding_limit - separator = " " if separator == "" else separator - - llm_predictor = LLMPredictor( - llm=OpenAI(model_name="gpt-3.5-turbo-0301", openai_api_key=api_key) - ) - prompt_helper = PromptHelper( - max_input_size, - num_outputs, - max_chunk_overlap, - embedding_limit, - chunk_size_limit, - separator=separator, - ) - documents, index_name = get_documents(file_src) - if os.path.exists(f"./index/{index_name}.json"): - logging.info("找到了缓存的索引文件,加载中……") - return GPTSimpleVectorIndex.load_from_disk(f"./index/{index_name}.json") - else: - try: - logging.debug("构建索引中……") - index = GPTSimpleVectorIndex( - documents, llm_predictor=llm_predictor, prompt_helper=prompt_helper - ) - os.makedirs("./index", exist_ok=True) - index.save_to_disk(f"./index/{index_name}.json") - return index - except Exception as e: - print(e) - return None - - -def 
chat_ai( - api_key, - index, - question, - context, - chatbot, -): - os.environ["OPENAI_API_KEY"] = api_key - - logging.info(f"Question: {question}") - - response, chatbot_display, status_text = ask_ai( - api_key, - index, - question, - replace_today(PROMPT_TEMPLATE), - REFINE_TEMPLATE, - SIM_K, - INDEX_QUERY_TEMPRATURE, - context, - ) - if response is None: - status_text = "Query failed, please try rephrasing the question" - return context, chatbot, status_text  # return status_text as well so the caller always receives three values - - context.append({"role": "user", "content": question}) - context.append({"role": "assistant", "content": response}) - chatbot.append((question, chatbot_display)) - - os.environ["OPENAI_API_KEY"] = "" - return context, chatbot, status_text - - -def ask_ai( - api_key, - index, - question, - prompt_tmpl, - refine_tmpl, - sim_k=1, - temperature=0, - prefix_messages=[], -): - os.environ["OPENAI_API_KEY"] = api_key - - logging.debug("Index file found") - logging.debug("Querying index...") - llm_predictor = LLMPredictor( - llm=OpenAI( - temperature=temperature, - model_name="gpt-3.5-turbo-0301", - prefix_messages=prefix_messages, - ) - ) - - response = None  # Initialize response variable to avoid UnboundLocalError - qa_prompt = QuestionAnswerPrompt(prompt_tmpl) - rf_prompt = RefinePrompt(refine_tmpl) - response = index.query( - question, - llm_predictor=llm_predictor, - similarity_top_k=sim_k, - text_qa_template=qa_prompt, - refine_template=rf_prompt, - response_mode="compact", - ) - - if response is not None: - logging.info(f"Response: {response}") - ret_text = response.response - nodes = [] - for index, node in enumerate(response.source_nodes): - brief = node.source_text[:25].replace("\n", "") - nodes.append( - f"
<details><summary>[{index+1}]\t{brief}...</summary><p>{node.source_text}</p></details>
        " - ) - new_response = ret_text + "\n----------\n" + "\n\n".join(nodes) - logging.info( - f"Response: {colorama.Fore.BLUE}{ret_text}{colorama.Style.RESET_ALL}" - ) - os.environ["OPENAI_API_KEY"] = "" - return ret_text, new_response, f"查询消耗了{llm_predictor.last_token_usage} tokens" - else: - logging.warning("No response found, returning None") - os.environ["OPENAI_API_KEY"] = "" - return None - - -def add_space(text): - punctuations = {",": ", ", "。": "。 ", "?": "? ", "!": "! ", ":": ": ", ";": "; "} - for cn_punc, en_punc in punctuations.items(): - text = text.replace(cn_punc, en_punc) - return text diff --git a/spaces/weibinke/vits-simple-api/vits/commons.py b/spaces/weibinke/vits-simple-api/vits/commons.py deleted file mode 100644 index bda0a67534ac34bd02dc28b845619b2433a40df6..0000000000000000000000000000000000000000 --- a/spaces/weibinke/vits-simple-api/vits/commons.py +++ /dev/null @@ -1,96 +0,0 @@ -import torch -from torch.nn import functional as F -import torch.jit - - -def script_method(fn, _rcb=None): - return fn - - -def script(obj, optimize=True, _frames_up=0, _rcb=None): - return obj - - -torch.jit.script_method = script_method -torch.jit.script = script - - -def init_weights(m, mean=0.0, std=0.01): - classname = m.__class__.__name__ - if classname.find("Conv") != -1: - m.weight.data.normal_(mean, std) - - -def get_padding(kernel_size, dilation=1): - return int((kernel_size*dilation - dilation)/2) - - -def intersperse(lst, item): - result = [item] * (len(lst) * 2 + 1) - result[1::2] = lst - return result - - -def slice_segments(x, ids_str, segment_size=4): - ret = torch.zeros_like(x[:, :, :segment_size]) - for i in range(x.size(0)): - idx_str = ids_str[i] - idx_end = idx_str + segment_size - ret[i] = x[i, :, idx_str:idx_end] - return ret - - -def rand_slice_segments(x, x_lengths=None, segment_size=4): - b, d, t = x.size() - if x_lengths is None: - x_lengths = t - ids_str_max = x_lengths - segment_size + 1 - ids_str = (torch.rand([b]).to(device=x.device) * ids_str_max).to(dtype=torch.long) - ret = slice_segments(x, ids_str, segment_size) - return ret, ids_str - - -def subsequent_mask(length): - mask = torch.tril(torch.ones(length, length)).unsqueeze(0).unsqueeze(0) - return mask - - -@torch.jit.script -def fused_add_tanh_sigmoid_multiply(input_a, input_b, n_channels): - n_channels_int = n_channels[0] - in_act = input_a + input_b - t_act = torch.tanh(in_act[:, :n_channels_int, :]) - s_act = torch.sigmoid(in_act[:, n_channels_int:, :]) - acts = t_act * s_act - return acts - - -def convert_pad_shape(pad_shape): - l = pad_shape[::-1] - pad_shape = [item for sublist in l for item in sublist] - return pad_shape - - -def sequence_mask(length, max_length=None): - if max_length is None: - max_length = length.max() - x = torch.arange(max_length, dtype=length.dtype, device=length.device) - return x.unsqueeze(0) < length.unsqueeze(1) - - -def generate_path(duration, mask): - """ - duration: [b, 1, t_x] - mask: [b, 1, t_y, t_x] - """ - device = duration.device - - b, _, t_y, t_x = mask.shape - cum_duration = torch.cumsum(duration, -1) - - cum_duration_flat = cum_duration.view(b * t_x) - path = sequence_mask(cum_duration_flat, t_y).to(mask.dtype) - path = path.view(b, t_x, t_y) - path = path - F.pad(path, convert_pad_shape([[0, 0], [1, 0], [0, 0]]))[:, :-1] - path = path.unsqueeze(1).transpose(2,3) * mask - return path diff --git a/spaces/wffcyrus/MetaGPT-v1/tests/metagpt/test_llm.py b/spaces/wffcyrus/MetaGPT-v1/tests/metagpt/test_llm.py deleted file mode 100644 index 
f61793151e62f143356e75249474b9dd60b50de7..0000000000000000000000000000000000000000 --- a/spaces/wffcyrus/MetaGPT-v1/tests/metagpt/test_llm.py +++ /dev/null @@ -1,37 +0,0 @@ -#!/usr/bin/env python -# -*- coding: utf-8 -*- -""" -@Time : 2023/5/11 14:45 -@Author : alexanderwu -@File : test_llm.py -@Modified By: mashenquan, 2023/8/20. Remove global configuration `CONFIG`, enable configuration support for business isolation. -""" - -import pytest - -from metagpt.config import Config -from metagpt.provider.openai_api import OpenAIGPTAPI as LLM, CostManager - - -@pytest.fixture() -def llm(): - options = Config().runtime_options - return LLM(options=options, cost_manager=CostManager(**options)) - - -@pytest.mark.asyncio -async def test_llm_aask(llm): - assert len(await llm.aask('hello world')) > 0 - - -@pytest.mark.asyncio -async def test_llm_aask_batch(llm): - assert len(await llm.aask_batch(['hi', 'write python hello world.'])) > 0 - - -@pytest.mark.asyncio -async def test_llm_acompletion(llm): - hello_msg = [{'role': 'user', 'content': 'hello'}] - assert len(await llm.acompletion(hello_msg)) > 0 - assert len(await llm.acompletion_batch([hello_msg])) > 0 - assert len(await llm.acompletion_batch_text([hello_msg])) > 0 diff --git a/spaces/wisnuarys15/rvc-wisnu5/infer_pack/modules.py b/spaces/wisnuarys15/rvc-wisnu5/infer_pack/modules.py deleted file mode 100644 index 960481cedad9a6106f2bf0b9e86e82b120f7b33f..0000000000000000000000000000000000000000 --- a/spaces/wisnuarys15/rvc-wisnu5/infer_pack/modules.py +++ /dev/null @@ -1,522 +0,0 @@ -import copy -import math -import numpy as np -import scipy -import torch -from torch import nn -from torch.nn import functional as F - -from torch.nn import Conv1d, ConvTranspose1d, AvgPool1d, Conv2d -from torch.nn.utils import weight_norm, remove_weight_norm - -from infer_pack import commons -from infer_pack.commons import init_weights, get_padding -from infer_pack.transforms import piecewise_rational_quadratic_transform - - -LRELU_SLOPE = 0.1 - - -class LayerNorm(nn.Module): - def __init__(self, channels, eps=1e-5): - super().__init__() - self.channels = channels - self.eps = eps - - self.gamma = nn.Parameter(torch.ones(channels)) - self.beta = nn.Parameter(torch.zeros(channels)) - - def forward(self, x): - x = x.transpose(1, -1) - x = F.layer_norm(x, (self.channels,), self.gamma, self.beta, self.eps) - return x.transpose(1, -1) - - -class ConvReluNorm(nn.Module): - def __init__( - self, - in_channels, - hidden_channels, - out_channels, - kernel_size, - n_layers, - p_dropout, - ): - super().__init__() - self.in_channels = in_channels - self.hidden_channels = hidden_channels - self.out_channels = out_channels - self.kernel_size = kernel_size - self.n_layers = n_layers - self.p_dropout = p_dropout - assert n_layers > 1, "Number of layers should be larger than 0." 
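- # First conv maps in_channels to hidden_channels; the remaining n_layers - 1 convs keep hidden_channels, each followed by LayerNorm and ReLU + dropout.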
-
-        self.conv_layers = nn.ModuleList()
-        self.norm_layers = nn.ModuleList()
-        self.conv_layers.append(
-            nn.Conv1d(
-                in_channels, hidden_channels, kernel_size, padding=kernel_size // 2
-            )
-        )
-        self.norm_layers.append(LayerNorm(hidden_channels))
-        self.relu_drop = nn.Sequential(nn.ReLU(), nn.Dropout(p_dropout))
-        for _ in range(n_layers - 1):
-            self.conv_layers.append(
-                nn.Conv1d(
-                    hidden_channels,
-                    hidden_channels,
-                    kernel_size,
-                    padding=kernel_size // 2,
-                )
-            )
-            self.norm_layers.append(LayerNorm(hidden_channels))
-        self.proj = nn.Conv1d(hidden_channels, out_channels, 1)
-        self.proj.weight.data.zero_()
-        self.proj.bias.data.zero_()
-
-    def forward(self, x, x_mask):
-        x_org = x
-        for i in range(self.n_layers):
-            x = self.conv_layers[i](x * x_mask)
-            x = self.norm_layers[i](x)
-            x = self.relu_drop(x)
-        x = x_org + self.proj(x)
-        return x * x_mask
-
-
-class DDSConv(nn.Module):
-    """
-    Dilated and Depth-Separable Convolution
-    """
-
-    def __init__(self, channels, kernel_size, n_layers, p_dropout=0.0):
-        super().__init__()
-        self.channels = channels
-        self.kernel_size = kernel_size
-        self.n_layers = n_layers
-        self.p_dropout = p_dropout
-
-        self.drop = nn.Dropout(p_dropout)
-        self.convs_sep = nn.ModuleList()
-        self.convs_1x1 = nn.ModuleList()
-        self.norms_1 = nn.ModuleList()
-        self.norms_2 = nn.ModuleList()
-        for i in range(n_layers):
-            dilation = kernel_size**i
-            padding = (kernel_size * dilation - dilation) // 2
-            self.convs_sep.append(
-                nn.Conv1d(
-                    channels,
-                    channels,
-                    kernel_size,
-                    groups=channels,
-                    dilation=dilation,
-                    padding=padding,
-                )
-            )
-            self.convs_1x1.append(nn.Conv1d(channels, channels, 1))
-            self.norms_1.append(LayerNorm(channels))
-            self.norms_2.append(LayerNorm(channels))
-
-    def forward(self, x, x_mask, g=None):
-        if g is not None:
-            x = x + g
-        for i in range(self.n_layers):
-            y = self.convs_sep[i](x * x_mask)
-            y = self.norms_1[i](y)
-            y = F.gelu(y)
-            y = self.convs_1x1[i](y)
-            y = self.norms_2[i](y)
-            y = F.gelu(y)
-            y = self.drop(y)
-            x = x + y
-        return x * x_mask
-
-
-class WN(torch.nn.Module):
-    def __init__(
-        self,
-        hidden_channels,
-        kernel_size,
-        dilation_rate,
-        n_layers,
-        gin_channels=0,
-        p_dropout=0,
-    ):
-        super(WN, self).__init__()
-        assert kernel_size % 2 == 1
-        self.hidden_channels = hidden_channels
-        self.kernel_size = (kernel_size,)
-        self.dilation_rate = dilation_rate
-        self.n_layers = n_layers
-        self.gin_channels = gin_channels
-        self.p_dropout = p_dropout
-
-        self.in_layers = torch.nn.ModuleList()
-        self.res_skip_layers = torch.nn.ModuleList()
-        self.drop = nn.Dropout(p_dropout)
-
-        if gin_channels != 0:
-            cond_layer = torch.nn.Conv1d(
-                gin_channels, 2 * hidden_channels * n_layers, 1
-            )
-            self.cond_layer = torch.nn.utils.weight_norm(cond_layer, name="weight")
-
-        for i in range(n_layers):
-            dilation = dilation_rate**i
-            padding = int((kernel_size * dilation - dilation) / 2)
-            in_layer = torch.nn.Conv1d(
-                hidden_channels,
-                2 * hidden_channels,
-                kernel_size,
-                dilation=dilation,
-                padding=padding,
-            )
-            in_layer = torch.nn.utils.weight_norm(in_layer, name="weight")
-            self.in_layers.append(in_layer)
-
-            # last one is not necessary
-            if i < n_layers - 1:
-                res_skip_channels = 2 * hidden_channels
-            else:
-                res_skip_channels = hidden_channels
-
-            res_skip_layer = torch.nn.Conv1d(hidden_channels, res_skip_channels, 1)
-            res_skip_layer = torch.nn.utils.weight_norm(res_skip_layer, name="weight")
-            self.res_skip_layers.append(res_skip_layer)
-
-    def forward(self, x, x_mask, g=None, **kwargs):
-        output = torch.zeros_like(x)
-
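-        # WaveNet-style stack: each layer applies a gated tanh/sigmoid activation,
-        # optionally conditioned on g; the first half of each res_skip output updates
-        # the residual path, the second half accumulates into the skip output
-        # (the last layer emits skip features only).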
n_channels_tensor = torch.IntTensor([self.hidden_channels]) - - if g is not None: - g = self.cond_layer(g) - - for i in range(self.n_layers): - x_in = self.in_layers[i](x) - if g is not None: - cond_offset = i * 2 * self.hidden_channels - g_l = g[:, cond_offset : cond_offset + 2 * self.hidden_channels, :] - else: - g_l = torch.zeros_like(x_in) - - acts = commons.fused_add_tanh_sigmoid_multiply(x_in, g_l, n_channels_tensor) - acts = self.drop(acts) - - res_skip_acts = self.res_skip_layers[i](acts) - if i < self.n_layers - 1: - res_acts = res_skip_acts[:, : self.hidden_channels, :] - x = (x + res_acts) * x_mask - output = output + res_skip_acts[:, self.hidden_channels :, :] - else: - output = output + res_skip_acts - return output * x_mask - - def remove_weight_norm(self): - if self.gin_channels != 0: - torch.nn.utils.remove_weight_norm(self.cond_layer) - for l in self.in_layers: - torch.nn.utils.remove_weight_norm(l) - for l in self.res_skip_layers: - torch.nn.utils.remove_weight_norm(l) - - -class ResBlock1(torch.nn.Module): - def __init__(self, channels, kernel_size=3, dilation=(1, 3, 5)): - super(ResBlock1, self).__init__() - self.convs1 = nn.ModuleList( - [ - weight_norm( - Conv1d( - channels, - channels, - kernel_size, - 1, - dilation=dilation[0], - padding=get_padding(kernel_size, dilation[0]), - ) - ), - weight_norm( - Conv1d( - channels, - channels, - kernel_size, - 1, - dilation=dilation[1], - padding=get_padding(kernel_size, dilation[1]), - ) - ), - weight_norm( - Conv1d( - channels, - channels, - kernel_size, - 1, - dilation=dilation[2], - padding=get_padding(kernel_size, dilation[2]), - ) - ), - ] - ) - self.convs1.apply(init_weights) - - self.convs2 = nn.ModuleList( - [ - weight_norm( - Conv1d( - channels, - channels, - kernel_size, - 1, - dilation=1, - padding=get_padding(kernel_size, 1), - ) - ), - weight_norm( - Conv1d( - channels, - channels, - kernel_size, - 1, - dilation=1, - padding=get_padding(kernel_size, 1), - ) - ), - weight_norm( - Conv1d( - channels, - channels, - kernel_size, - 1, - dilation=1, - padding=get_padding(kernel_size, 1), - ) - ), - ] - ) - self.convs2.apply(init_weights) - - def forward(self, x, x_mask=None): - for c1, c2 in zip(self.convs1, self.convs2): - xt = F.leaky_relu(x, LRELU_SLOPE) - if x_mask is not None: - xt = xt * x_mask - xt = c1(xt) - xt = F.leaky_relu(xt, LRELU_SLOPE) - if x_mask is not None: - xt = xt * x_mask - xt = c2(xt) - x = xt + x - if x_mask is not None: - x = x * x_mask - return x - - def remove_weight_norm(self): - for l in self.convs1: - remove_weight_norm(l) - for l in self.convs2: - remove_weight_norm(l) - - -class ResBlock2(torch.nn.Module): - def __init__(self, channels, kernel_size=3, dilation=(1, 3)): - super(ResBlock2, self).__init__() - self.convs = nn.ModuleList( - [ - weight_norm( - Conv1d( - channels, - channels, - kernel_size, - 1, - dilation=dilation[0], - padding=get_padding(kernel_size, dilation[0]), - ) - ), - weight_norm( - Conv1d( - channels, - channels, - kernel_size, - 1, - dilation=dilation[1], - padding=get_padding(kernel_size, dilation[1]), - ) - ), - ] - ) - self.convs.apply(init_weights) - - def forward(self, x, x_mask=None): - for c in self.convs: - xt = F.leaky_relu(x, LRELU_SLOPE) - if x_mask is not None: - xt = xt * x_mask - xt = c(xt) - x = xt + x - if x_mask is not None: - x = x * x_mask - return x - - def remove_weight_norm(self): - for l in self.convs: - remove_weight_norm(l) - - -class Log(nn.Module): - def forward(self, x, x_mask, reverse=False, **kwargs): - if not reverse: - y = 
torch.log(torch.clamp_min(x, 1e-5)) * x_mask - logdet = torch.sum(-y, [1, 2]) - return y, logdet - else: - x = torch.exp(x) * x_mask - return x - - -class Flip(nn.Module): - def forward(self, x, *args, reverse=False, **kwargs): - x = torch.flip(x, [1]) - if not reverse: - logdet = torch.zeros(x.size(0)).to(dtype=x.dtype, device=x.device) - return x, logdet - else: - return x - - -class ElementwiseAffine(nn.Module): - def __init__(self, channels): - super().__init__() - self.channels = channels - self.m = nn.Parameter(torch.zeros(channels, 1)) - self.logs = nn.Parameter(torch.zeros(channels, 1)) - - def forward(self, x, x_mask, reverse=False, **kwargs): - if not reverse: - y = self.m + torch.exp(self.logs) * x - y = y * x_mask - logdet = torch.sum(self.logs * x_mask, [1, 2]) - return y, logdet - else: - x = (x - self.m) * torch.exp(-self.logs) * x_mask - return x - - -class ResidualCouplingLayer(nn.Module): - def __init__( - self, - channels, - hidden_channels, - kernel_size, - dilation_rate, - n_layers, - p_dropout=0, - gin_channels=0, - mean_only=False, - ): - assert channels % 2 == 0, "channels should be divisible by 2" - super().__init__() - self.channels = channels - self.hidden_channels = hidden_channels - self.kernel_size = kernel_size - self.dilation_rate = dilation_rate - self.n_layers = n_layers - self.half_channels = channels // 2 - self.mean_only = mean_only - - self.pre = nn.Conv1d(self.half_channels, hidden_channels, 1) - self.enc = WN( - hidden_channels, - kernel_size, - dilation_rate, - n_layers, - p_dropout=p_dropout, - gin_channels=gin_channels, - ) - self.post = nn.Conv1d(hidden_channels, self.half_channels * (2 - mean_only), 1) - self.post.weight.data.zero_() - self.post.bias.data.zero_() - - def forward(self, x, x_mask, g=None, reverse=False): - x0, x1 = torch.split(x, [self.half_channels] * 2, 1) - h = self.pre(x0) * x_mask - h = self.enc(h, x_mask, g=g) - stats = self.post(h) * x_mask - if not self.mean_only: - m, logs = torch.split(stats, [self.half_channels] * 2, 1) - else: - m = stats - logs = torch.zeros_like(m) - - if not reverse: - x1 = m + x1 * torch.exp(logs) * x_mask - x = torch.cat([x0, x1], 1) - logdet = torch.sum(logs, [1, 2]) - return x, logdet - else: - x1 = (x1 - m) * torch.exp(-logs) * x_mask - x = torch.cat([x0, x1], 1) - return x - - def remove_weight_norm(self): - self.enc.remove_weight_norm() - - -class ConvFlow(nn.Module): - def __init__( - self, - in_channels, - filter_channels, - kernel_size, - n_layers, - num_bins=10, - tail_bound=5.0, - ): - super().__init__() - self.in_channels = in_channels - self.filter_channels = filter_channels - self.kernel_size = kernel_size - self.n_layers = n_layers - self.num_bins = num_bins - self.tail_bound = tail_bound - self.half_channels = in_channels // 2 - - self.pre = nn.Conv1d(self.half_channels, filter_channels, 1) - self.convs = DDSConv(filter_channels, kernel_size, n_layers, p_dropout=0.0) - self.proj = nn.Conv1d( - filter_channels, self.half_channels * (num_bins * 3 - 1), 1 - ) - self.proj.weight.data.zero_() - self.proj.bias.data.zero_() - - def forward(self, x, x_mask, g=None, reverse=False): - x0, x1 = torch.split(x, [self.half_channels] * 2, 1) - h = self.pre(x0) - h = self.convs(h, x_mask, g=g) - h = self.proj(h) * x_mask - - b, c, t = x0.shape - h = h.reshape(b, c, -1, t).permute(0, 1, 3, 2) # [b, cx?, t] -> [b, c, t, ?] 
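-        # The last axis holds 3*num_bins-1 spline parameters per (channel, time)
-        # position: num_bins bin widths, num_bins bin heights, and num_bins-1
-        # interior knot derivatives for the rational-quadratic transform below.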
- - unnormalized_widths = h[..., : self.num_bins] / math.sqrt(self.filter_channels) - unnormalized_heights = h[..., self.num_bins : 2 * self.num_bins] / math.sqrt( - self.filter_channels - ) - unnormalized_derivatives = h[..., 2 * self.num_bins :] - - x1, logabsdet = piecewise_rational_quadratic_transform( - x1, - unnormalized_widths, - unnormalized_heights, - unnormalized_derivatives, - inverse=reverse, - tails="linear", - tail_bound=self.tail_bound, - ) - - x = torch.cat([x0, x1], 1) * x_mask - logdet = torch.sum(logabsdet * x_mask, [1, 2]) - if not reverse: - return x, logdet - else: - return x diff --git a/spaces/wouaf/WOUAF-Text-to-Image/dnnlib/util.py b/spaces/wouaf/WOUAF-Text-to-Image/dnnlib/util.py deleted file mode 100644 index 76725336d01e75e1c68daa88be47f4fde0bbc63b..0000000000000000000000000000000000000000 --- a/spaces/wouaf/WOUAF-Text-to-Image/dnnlib/util.py +++ /dev/null @@ -1,477 +0,0 @@ -# Copyright (c) 2021, NVIDIA CORPORATION. All rights reserved. -# -# NVIDIA CORPORATION and its licensors retain all intellectual property -# and proprietary rights in and to this software, related documentation -# and any modifications thereto. Any use, reproduction, disclosure or -# distribution of this software and related documentation without an express -# license agreement from NVIDIA CORPORATION is strictly prohibited. - -"""Miscellaneous utility classes and functions.""" - -import ctypes -import fnmatch -import importlib -import inspect -import numpy as np -import os -import shutil -import sys -import types -import io -import pickle -import re -import requests -import html -import hashlib -import glob -import tempfile -import urllib -import urllib.request -import uuid - -from distutils.util import strtobool -from typing import Any, List, Tuple, Union - - -# Util classes -# ------------------------------------------------------------------------------------------ - - -class EasyDict(dict): - """Convenience class that behaves like a dict but allows access with the attribute syntax.""" - - def __getattr__(self, name: str) -> Any: - try: - return self[name] - except KeyError: - raise AttributeError(name) - - def __setattr__(self, name: str, value: Any) -> None: - self[name] = value - - def __delattr__(self, name: str) -> None: - del self[name] - - -class Logger(object): - """Redirect stderr to stdout, optionally print stdout to a file, and optionally force flushing on both stdout and the file.""" - - def __init__(self, file_name: str = None, file_mode: str = "w", should_flush: bool = True): - self.file = None - - if file_name is not None: - self.file = open(file_name, file_mode) - - self.should_flush = should_flush - self.stdout = sys.stdout - self.stderr = sys.stderr - - sys.stdout = self - sys.stderr = self - - def __enter__(self) -> "Logger": - return self - - def __exit__(self, exc_type: Any, exc_value: Any, traceback: Any) -> None: - self.close() - - def write(self, text: Union[str, bytes]) -> None: - """Write text to stdout (and a file) and optionally flush.""" - if isinstance(text, bytes): - text = text.decode() - if len(text) == 0: # workaround for a bug in VSCode debugger: sys.stdout.write(''); sys.stdout.flush() => crash - return - - if self.file is not None: - self.file.write(text) - - self.stdout.write(text) - - if self.should_flush: - self.flush() - - def flush(self) -> None: - """Flush written text to both stdout and a file, if open.""" - if self.file is not None: - self.file.flush() - - self.stdout.flush() - - def close(self) -> None: - """Flush, close possible files, and 
remove stdout/stderr mirroring.""" - self.flush() - - # if using multiple loggers, prevent closing in wrong order - if sys.stdout is self: - sys.stdout = self.stdout - if sys.stderr is self: - sys.stderr = self.stderr - - if self.file is not None: - self.file.close() - self.file = None - - -# Cache directories -# ------------------------------------------------------------------------------------------ - -_dnnlib_cache_dir = None - -def set_cache_dir(path: str) -> None: - global _dnnlib_cache_dir - _dnnlib_cache_dir = path - -def make_cache_dir_path(*paths: str) -> str: - if _dnnlib_cache_dir is not None: - return os.path.join(_dnnlib_cache_dir, *paths) - if 'DNNLIB_CACHE_DIR' in os.environ: - return os.path.join(os.environ['DNNLIB_CACHE_DIR'], *paths) - if 'HOME' in os.environ: - return os.path.join(os.environ['HOME'], '.cache', 'dnnlib', *paths) - if 'USERPROFILE' in os.environ: - return os.path.join(os.environ['USERPROFILE'], '.cache', 'dnnlib', *paths) - return os.path.join(tempfile.gettempdir(), '.cache', 'dnnlib', *paths) - -# Small util functions -# ------------------------------------------------------------------------------------------ - - -def format_time(seconds: Union[int, float]) -> str: - """Convert the seconds to human readable string with days, hours, minutes and seconds.""" - s = int(np.rint(seconds)) - - if s < 60: - return "{0}s".format(s) - elif s < 60 * 60: - return "{0}m {1:02}s".format(s // 60, s % 60) - elif s < 24 * 60 * 60: - return "{0}h {1:02}m {2:02}s".format(s // (60 * 60), (s // 60) % 60, s % 60) - else: - return "{0}d {1:02}h {2:02}m".format(s // (24 * 60 * 60), (s // (60 * 60)) % 24, (s // 60) % 60) - - -def ask_yes_no(question: str) -> bool: - """Ask the user the question until the user inputs a valid answer.""" - while True: - try: - print("{0} [y/n]".format(question)) - return strtobool(input().lower()) - except ValueError: - pass - - -def tuple_product(t: Tuple) -> Any: - """Calculate the product of the tuple elements.""" - result = 1 - - for v in t: - result *= v - - return result - - -_str_to_ctype = { - "uint8": ctypes.c_ubyte, - "uint16": ctypes.c_uint16, - "uint32": ctypes.c_uint32, - "uint64": ctypes.c_uint64, - "int8": ctypes.c_byte, - "int16": ctypes.c_int16, - "int32": ctypes.c_int32, - "int64": ctypes.c_int64, - "float32": ctypes.c_float, - "float64": ctypes.c_double -} - - -def get_dtype_and_ctype(type_obj: Any) -> Tuple[np.dtype, Any]: - """Given a type name string (or an object having a __name__ attribute), return matching Numpy and ctypes types that have the same size in bytes.""" - type_str = None - - if isinstance(type_obj, str): - type_str = type_obj - elif hasattr(type_obj, "__name__"): - type_str = type_obj.__name__ - elif hasattr(type_obj, "name"): - type_str = type_obj.name - else: - raise RuntimeError("Cannot infer type name from input") - - assert type_str in _str_to_ctype.keys() - - my_dtype = np.dtype(type_str) - my_ctype = _str_to_ctype[type_str] - - assert my_dtype.itemsize == ctypes.sizeof(my_ctype) - - return my_dtype, my_ctype - - -def is_pickleable(obj: Any) -> bool: - try: - with io.BytesIO() as stream: - pickle.dump(obj, stream) - return True - except: - return False - - -# Functionality to import modules/objects by name, and call functions by name -# ------------------------------------------------------------------------------------------ - -def get_module_from_obj_name(obj_name: str) -> Tuple[types.ModuleType, str]: - """Searches for the underlying module behind the name to some python object. 
- Returns the module and the object name (original name with module part removed).""" - - # allow convenience shorthands, substitute them by full names - obj_name = re.sub("^np.", "numpy.", obj_name) - obj_name = re.sub("^tf.", "tensorflow.", obj_name) - - # list alternatives for (module_name, local_obj_name) - parts = obj_name.split(".") - name_pairs = [(".".join(parts[:i]), ".".join(parts[i:])) for i in range(len(parts), 0, -1)] - - # try each alternative in turn - for module_name, local_obj_name in name_pairs: - try: - module = importlib.import_module(module_name) # may raise ImportError - get_obj_from_module(module, local_obj_name) # may raise AttributeError - return module, local_obj_name - except: - pass - - # maybe some of the modules themselves contain errors? - for module_name, _local_obj_name in name_pairs: - try: - importlib.import_module(module_name) # may raise ImportError - except ImportError: - if not str(sys.exc_info()[1]).startswith("No module named '" + module_name + "'"): - raise - - # maybe the requested attribute is missing? - for module_name, local_obj_name in name_pairs: - try: - module = importlib.import_module(module_name) # may raise ImportError - get_obj_from_module(module, local_obj_name) # may raise AttributeError - except ImportError: - pass - - # we are out of luck, but we have no idea why - raise ImportError(obj_name) - - -def get_obj_from_module(module: types.ModuleType, obj_name: str) -> Any: - """Traverses the object name and returns the last (rightmost) python object.""" - if obj_name == '': - return module - obj = module - for part in obj_name.split("."): - obj = getattr(obj, part) - return obj - - -def get_obj_by_name(name: str) -> Any: - """Finds the python object with the given name.""" - module, obj_name = get_module_from_obj_name(name) - return get_obj_from_module(module, obj_name) - - -def call_func_by_name(*args, func_name: str = None, **kwargs) -> Any: - """Finds the python object with the given name and calls it as a function.""" - assert func_name is not None - func_obj = get_obj_by_name(func_name) - assert callable(func_obj) - return func_obj(*args, **kwargs) - - -def construct_class_by_name(*args, class_name: str = None, **kwargs) -> Any: - """Finds the python class with the given name and constructs it with the given arguments.""" - return call_func_by_name(*args, func_name=class_name, **kwargs) - - -def get_module_dir_by_obj_name(obj_name: str) -> str: - """Get the directory path of the module containing the given object name.""" - module, _ = get_module_from_obj_name(obj_name) - return os.path.dirname(inspect.getfile(module)) - - -def is_top_level_function(obj: Any) -> bool: - """Determine whether the given object is a top-level function, i.e., defined at module scope using 'def'.""" - return callable(obj) and obj.__name__ in sys.modules[obj.__module__].__dict__ - - -def get_top_level_function_name(obj: Any) -> str: - """Return the fully-qualified name of a top-level function.""" - assert is_top_level_function(obj) - module = obj.__module__ - if module == '__main__': - module = os.path.splitext(os.path.basename(sys.modules[module].__file__))[0] - return module + "." + obj.__name__ - - -# File system helpers -# ------------------------------------------------------------------------------------------ - -def list_dir_recursively_with_ignore(dir_path: str, ignores: List[str] = None, add_base_to_relative: bool = False) -> List[Tuple[str, str]]: - """List all files recursively in a given directory while ignoring given file and directory names. 
- Returns list of tuples containing both absolute and relative paths.""" - assert os.path.isdir(dir_path) - base_name = os.path.basename(os.path.normpath(dir_path)) - - if ignores is None: - ignores = [] - - result = [] - - for root, dirs, files in os.walk(dir_path, topdown=True): - for ignore_ in ignores: - dirs_to_remove = [d for d in dirs if fnmatch.fnmatch(d, ignore_)] - - # dirs need to be edited in-place - for d in dirs_to_remove: - dirs.remove(d) - - files = [f for f in files if not fnmatch.fnmatch(f, ignore_)] - - absolute_paths = [os.path.join(root, f) for f in files] - relative_paths = [os.path.relpath(p, dir_path) for p in absolute_paths] - - if add_base_to_relative: - relative_paths = [os.path.join(base_name, p) for p in relative_paths] - - assert len(absolute_paths) == len(relative_paths) - result += zip(absolute_paths, relative_paths) - - return result - - -def copy_files_and_create_dirs(files: List[Tuple[str, str]]) -> None: - """Takes in a list of tuples of (src, dst) paths and copies files. - Will create all necessary directories.""" - for file in files: - target_dir_name = os.path.dirname(file[1]) - - # will create all intermediate-level directories - if not os.path.exists(target_dir_name): - os.makedirs(target_dir_name) - - shutil.copyfile(file[0], file[1]) - - -# URL helpers -# ------------------------------------------------------------------------------------------ - -def is_url(obj: Any, allow_file_urls: bool = False) -> bool: - """Determine whether the given object is a valid URL string.""" - if not isinstance(obj, str) or not "://" in obj: - return False - if allow_file_urls and obj.startswith('file://'): - return True - try: - res = requests.compat.urlparse(obj) - if not res.scheme or not res.netloc or not "." in res.netloc: - return False - res = requests.compat.urlparse(requests.compat.urljoin(obj, "/")) - if not res.scheme or not res.netloc or not "." in res.netloc: - return False - except: - return False - return True - - -def open_url(url: str, cache_dir: str = None, num_attempts: int = 10, verbose: bool = True, return_filename: bool = False, cache: bool = True) -> Any: - """Download the given URL and return a binary-mode file object to access the data.""" - assert num_attempts >= 1 - assert not (return_filename and (not cache)) - - # Doesn't look like an URL scheme so interpret it as a local filename. - if not re.match('^[a-z]+://', url): - return url if return_filename else open(url, "rb") - - # Handle file URLs. This code handles unusual file:// patterns that - # arise on Windows: - # - # file:///c:/foo.txt - # - # which would translate to a local '/c:/foo.txt' filename that's - # invalid. Drop the forward slash for such pathnames. - # - # If you touch this code path, you should test it on both Linux and - # Windows. - # - # Some internet resources suggest using urllib.request.url2pathname() but - # but that converts forward slashes to backslashes and this causes - # its own set of problems. - if url.startswith('file://'): - filename = urllib.parse.urlparse(url).path - if re.match(r'^/[a-zA-Z]:', filename): - filename = filename[1:] - return filename if return_filename else open(filename, "rb") - - assert is_url(url) - - # Lookup from cache. 
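-    # Cached downloads are stored as "<md5(url)>_<sanitized name>", so a single
-    # glob match below means a previous download completed; new downloads are
-    # finalized atomically via os.replace further down.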
- if cache_dir is None: - cache_dir = make_cache_dir_path('downloads') - - url_md5 = hashlib.md5(url.encode("utf-8")).hexdigest() - if cache: - cache_files = glob.glob(os.path.join(cache_dir, url_md5 + "_*")) - if len(cache_files) == 1: - filename = cache_files[0] - return filename if return_filename else open(filename, "rb") - - # Download. - url_name = None - url_data = None - with requests.Session() as session: - if verbose: - print("Downloading %s ..." % url, end="", flush=True) - for attempts_left in reversed(range(num_attempts)): - try: - with session.get(url) as res: - res.raise_for_status() - if len(res.content) == 0: - raise IOError("No data received") - - if len(res.content) < 8192: - content_str = res.content.decode("utf-8") - if "download_warning" in res.headers.get("Set-Cookie", ""): - links = [html.unescape(link) for link in content_str.split('"') if "export=download" in link] - if len(links) == 1: - url = requests.compat.urljoin(url, links[0]) - raise IOError("Google Drive virus checker nag") - if "Google Drive - Quota exceeded" in content_str: - raise IOError("Google Drive download quota exceeded -- please try again later") - - match = re.search(r'filename="([^"]*)"', res.headers.get("Content-Disposition", "")) - url_name = match[1] if match else url - url_data = res.content - if verbose: - print(" done") - break - except KeyboardInterrupt: - raise - except: - if not attempts_left: - if verbose: - print(" failed") - raise - if verbose: - print(".", end="", flush=True) - - # Save to cache. - if cache: - safe_name = re.sub(r"[^0-9a-zA-Z-._]", "_", url_name) - cache_file = os.path.join(cache_dir, url_md5 + "_" + safe_name) - temp_file = os.path.join(cache_dir, "tmp_" + uuid.uuid4().hex + "_" + url_md5 + "_" + safe_name) - os.makedirs(cache_dir, exist_ok=True) - with open(temp_file, "wb") as f: - f.write(url_data) - os.replace(temp_file, cache_file) # atomic - if return_filename: - return cache_file - - # Return data as file object. 
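-    # The full payload was already buffered into url_data, so the BytesIO below
-    # serves it without touching the network again.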
- assert not return_filename - return io.BytesIO(url_data) diff --git a/spaces/wydgg/bingo-wyd-ai/src/pages/api/blob.ts b/spaces/wydgg/bingo-wyd-ai/src/pages/api/blob.ts deleted file mode 100644 index fecd48031916b2284b8958892196e0a1ad420421..0000000000000000000000000000000000000000 --- a/spaces/wydgg/bingo-wyd-ai/src/pages/api/blob.ts +++ /dev/null @@ -1,40 +0,0 @@ -'use server' - -import { NextApiRequest, NextApiResponse } from 'next' -import { Readable } from 'node:stream' -import { fetch } from '@/lib/isomorphic' - -const API_DOMAIN = 'https://www.bing.com' - -export default async function handler(req: NextApiRequest, res: NextApiResponse) { - try { - const { bcid } = req.query - - const { headers, body } = await fetch(`${API_DOMAIN}/images/blob?bcid=${bcid}`, - { - method: 'GET', - headers: { - "sec-ch-ua": "\"Not/A)Brand\";v=\"99\", \"Google Chrome\";v=\"115\", \"Chromium\";v=\"115\"", - "sec-ch-ua-mobile": "?0", - "sec-ch-ua-platform": "\"Windows\"", - "Referrer-Policy": "origin-when-cross-origin", - }, - }, - ) - - res.writeHead(200, { - 'Content-Length': headers.get('content-length')!, - 'Content-Type': headers.get('content-type')!, - }) - // @ts-ignore - return Readable.fromWeb(body!).pipe(res) - } catch (e) { - console.log('Error', e) - return res.json({ - result: { - value: 'UploadFailed', - message: `${e}` - } - }) - } -} diff --git a/spaces/xfys/yolov5_tracking/val_utils/Readme.md b/spaces/xfys/yolov5_tracking/val_utils/Readme.md deleted file mode 100644 index 9273a6f75ba115a1738b0b73952903a049c01ae3..0000000000000000000000000000000000000000 --- a/spaces/xfys/yolov5_tracking/val_utils/Readme.md +++ /dev/null @@ -1,202 +0,0 @@ - -# TrackEval -*Code for evaluating object tracking.* - -This codebase provides code for a number of different tracking evaluation metrics (including the [HOTA metrics](https://link.springer.com/article/10.1007/s11263-020-01375-2)), as well as supporting running all of these metrics on a number of different tracking benchmarks. Plus plotting of results and other things one may want to do for tracking evaluation. - -## **NEW**: RobMOTS Challenge 2021 - -Call for submission to our [RobMOTS Challenge](https://eval.vision.rwth-aachen.de/rvsu-workshop21/?page_id=110) (Robust Multi-Object Tracking and Segmentation) held in conjunction with our [RVSU CVPR'21 Workshop](https://eval.vision.rwth-aachen.de/rvsu-workshop21/). Robust tracking evaluation against 8 tracking benchmarks. Challenge submission deadline June 15th. Also check out our workshop [call for papers](https://eval.vision.rwth-aachen.de/rvsu-workshop21/?page_id=74). - -## Official Evaluation Code - -The following benchmarks use TrackEval as their official evaluation code, check out the links to see TrackEval in action: - - - **[RobMOTS](https://eval.vision.rwth-aachen.de/rvsu-workshop21/?page_id=110)** ([Official Readme](docs/RobMOTS-Official/Readme.md)) - - **[KITTI Tracking](http://www.cvlibs.net/datasets/kitti/eval_tracking.php)** - - **[KITTI MOTS](http://www.cvlibs.net/datasets/kitti/eval_mots.php)** - - **[MOTChallenge](https://motchallenge.net/)** ([Official Readme](docs/MOTChallenge-Official/Readme.md)) - - **[Open World Tracking](https://openworldtracking.github.io)** ([Official Readme](docs/OpenWorldTracking-Official)) - - **[PersonPath22](https://amazon-research.github.io/tracking-dataset/personpath22.html)** - - -If you run a tracking benchmark and want to use TrackEval as your official evaluation code, please contact Jonathon (contact details below). 
- -## Currently implemented metrics - -The following metrics are currently implemented: - -Metric Family | Sub metrics | Paper | Code | Notes | -|----- | ----------- |----- | ----------- | ----- | -| | | | | | -|**HOTA metrics**|HOTA, DetA, AssA, LocA, DetPr, DetRe, AssPr, AssRe|[paper](https://link.springer.com/article/10.1007/s11263-020-01375-2)|[code](trackeval/metrics/hota.py)|**Recommended tracking metric**| -|**CLEARMOT metrics**|MOTA, MOTP, MT, ML, Frag, etc.|[paper](https://link.springer.com/article/10.1155/2008/246309)|[code](trackeval/metrics/clear.py)| | -|**Identity metrics**|IDF1, IDP, IDR|[paper](https://arxiv.org/abs/1609.01775)|[code](trackeval/metrics/identity.py)| | -|**VACE metrics**|ATA, SFDA|[paper](https://link.springer.com/chapter/10.1007/11612704_16)|[code](trackeval/metrics/vace.py)| | -|**Track mAP metrics**|Track mAP|[paper](https://arxiv.org/abs/1905.04804)|[code](trackeval/metrics/track_map.py)|Requires confidence scores| -|**J & F metrics**|J&F, J, F|[paper](https://www.cv-foundation.org/openaccess/content_cvpr_2016/papers/Perazzi_A_Benchmark_Dataset_CVPR_2016_paper.pdf)|[code](trackeval/metrics/j_and_f.py)|Only for Seg Masks| -|**ID Euclidean**|ID Euclidean|[paper](https://arxiv.org/pdf/2103.13516.pdf)|[code](trackeval/metrics/ideucl.py)| | - - -## Currently implemented benchmarks - -The following benchmarks are currently implemented: - -Benchmark | Sub-benchmarks | Type | Website | Code | Data Format | -|----- | ----------- |----- | ----------- | ----- | ----- | -| | | | | | | -|**RobMOTS**|Combination of 8 benchmarks|Seg Masks|[website](https://eval.vision.rwth-aachen.de/rvsu-workshop21/?page_id=110)|[code](trackeval/datasets/rob_mots.py)|[format](docs/RobMOTS-Official/Readme.md)| -|**Open World Tracking**|TAO-OW|OpenWorld / Seg Masks|[website](https://openworldtracking.github.io)|[code](trackeval/datasets/tao_ow.py)|[format](docs/OpenWorldTracking-Official/Readme.md)| -|**MOTChallenge**|MOT15/16/17/20|2D BBox|[website](https://motchallenge.net/)|[code](trackeval/datasets/mot_challenge_2d_box.py)|[format](docs/MOTChallenge-format.txt)| -|**KITTI Tracking**| |2D BBox|[website](http://www.cvlibs.net/datasets/kitti/eval_tracking.php)|[code](trackeval/datasets/kitti_2d_box.py)|[format](docs/KITTI-format.txt)| -|**BDD-100k**| |2D BBox|[website](https://bdd-data.berkeley.edu/)|[code](trackeval/datasets/bdd100k.py)|[format](docs/BDD100k-format.txt)| -|**TAO**| |2D BBox|[website](https://taodataset.org/)|[code](trackeval/datasets/tao.py)|[format](docs/TAO-format.txt)| -|**MOTS**|KITTI-MOTS, MOTS-Challenge|Seg Mask|[website](https://www.vision.rwth-aachen.de/page/mots)|[code](trackeval/datasets/mots_challenge.py) and [code](trackeval/datasets/kitti_mots.py)|[format](docs/MOTS-format.txt)| -|**DAVIS**|Unsupervised|Seg Mask|[website](https://davischallenge.org/)|[code](trackeval/datasets/davis.py)|[format](docs/DAVIS-format.txt)| -|**YouTube-VIS**| |Seg Mask|[website](https://youtube-vos.org/dataset/vis/)|[code](trackeval/datasets/youtube_vis.py)|[format](docs/YouTube-VIS-format.txt)| -|**Head Tracking Challenge**| |2D BBox|[website](https://arxiv.org/pdf/2103.13516.pdf)|[code](trackeval/datasets/head_tracking_challenge.py)|[format](docs/MOTChallenge-format.txt)| -|**PersonPath22**| |2D BBox|[website](https://github.com/amazon-research/tracking-dataset)|[code](trackeval/datasets/person_path_22.py)|[format](docs/MOTChallenge-format.txt)| -|**BURST**| {Common, Long-tail, Open-world} Class-guided, {Point, Box, Mask} Exemplar-guided |Seg 
Mask|[website](https://github.com/Ali2500/BURST-benchmark)|[format](https://github.com/Ali2500/BURST-benchmark/blob/main/ANNOTATION_FORMAT.md)| - -## HOTA metrics - -This code is also the official reference implementation for the HOTA metrics: - -*[HOTA: A Higher Order Metric for Evaluating Multi-Object Tracking](https://link.springer.com/article/10.1007/s11263-020-01375-2). IJCV 2020. Jonathon Luiten, Aljosa Osep, Patrick Dendorfer, Philip Torr, Andreas Geiger, Laura Leal-Taixe and Bastian Leibe.* - -HOTA is a novel set of MOT evaluation metrics which enable better understanding of tracking behavior than previous metrics. - -For more information check out the following links: - - [Short blog post on HOTA](https://jonathonluiten.medium.com/how-to-evaluate-tracking-with-the-hota-metrics-754036d183e1) - **HIGHLY RECOMMENDED READING** - - [IJCV version of paper](https://link.springer.com/article/10.1007/s11263-020-01375-2) (Open Access) - - [ArXiv version of paper](https://arxiv.org/abs/2009.07736) - - [Code](trackeval/metrics/hota.py) - -## Properties of this codebase - -The code is written 100% in python with only numpy and scipy as minimum requirements. - -The code is designed to be easily understandable and easily extendable. - -The code is also extremely fast, running at more than 10x the speed of the both [MOTChallengeEvalKit](https://github.com/dendorferpatrick/MOTChallengeEvalKit), and [py-motmetrics](https://github.com/cheind/py-motmetrics) (see detailed speed comparison below). - -The implementation of CLEARMOT and ID metrics aligns perfectly with the [MOTChallengeEvalKit](https://github.com/dendorferpatrick/MOTChallengeEvalKit). - -By default the code prints results to the screen, saves results out as both a summary txt file and a detailed results csv file, and outputs plots of the results. All outputs are by default saved to the 'tracker' folder for each tracker. - -## Running the code - -The code can be run in one of two ways: - - - From the terminal via one of the scripts [here](scripts/). See each script for instructions and arguments, hopefully this is self-explanatory. - - Directly by importing this package into your code, see the same scripts above for how. - -## Quickly evaluate on supported benchmarks - -To enable you to use TrackEval for evaluation as quickly and easily as possible, we provide ground-truth data, meta-data and example trackers for all currently supported benchmarks. -You can download this here: [data.zip](https://omnomnom.vision.rwth-aachen.de/data/TrackEval/data.zip) (~150mb). - -The data for RobMOTS is separate and can be found here: [rob_mots_train_data.zip](https://omnomnom.vision.rwth-aachen.de/data/RobMOTS/train_data.zip) (~750mb). - -The data for PersonPath22 is separate and can be found here: [person_path_22_data.zip](https://tracking-dataset-eccv-2022.s3.us-east-2.amazonaws.com/person_path_22_data.zip) (~3mb). - -The easiest way to begin is to extract this zip into the repository root folder such that the file paths look like: TrackEval/data/gt/... - -This then corresponds to the default paths in the code. You can now run each of the scripts [here](scripts/) without providing any arguments and they will by default evaluate all trackers present in the supplied file structure. 
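-
-If you prefer to call the evaluator from Python rather than through the scripts, the example scripts reduce to roughly the following sketch. The class, config, and metric names here are assumed from `scripts/run_mot_challenge.py`, so treat this as a sketch and check the script shipped with your version:
-
-```python
-import trackeval  # assumes the repository root is on the PYTHONPATH
-
-# Build the default configs and override only what is needed
-# (config keys assumed from the example script).
-eval_config = trackeval.Evaluator.get_default_eval_config()
-dataset_config = trackeval.datasets.MotChallenge2DBox.get_default_dataset_config()
-dataset_config['BENCHMARK'] = 'MOT17'
-dataset_config['TRACKERS_TO_EVAL'] = ['MPNTrack']  # one of the example trackers
-
-evaluator = trackeval.Evaluator(eval_config)
-dataset_list = [trackeval.datasets.MotChallenge2DBox(dataset_config)]
-metrics_list = [trackeval.metrics.HOTA(), trackeval.metrics.CLEAR(), trackeval.metrics.Identity()]
-evaluator.evaluate(dataset_list, metrics_list)
-```
-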
To evaluate your own tracking results, simply copy your files as a new tracker folder into the file structure at the same level as the example trackers (MPNTrack, CIWT, track_rcnn, qdtrack, ags, Tracktor++, STEm_Seg), ensuring the same file structure for your trackers as in the example.
-
-Of course, if your ground-truth and tracker files are located somewhere else you can simply use the script arguments to point the code toward your data.
-
-To ensure your tracker outputs data in the correct format, check out our format guides for each of the supported benchmarks [here](docs), or check out the example trackers provided.
-
-## Evaluate on your own custom benchmark
-
-To evaluate on your own data, you have two options:
- - Write custom dataset code (more effort, rarely worth it).
- - Convert your current dataset and trackers to the same format as an already implemented benchmark.
-
-To convert formats, check out the format specifications defined [here](docs).
-
-By default, we would recommend the MOTChallenge format, although any implemented format should work. Note that for many cases you will want to use the argument ```--DO_PREPROC False``` unless you want to run preprocessing to remove distractor objects.
-
-## Requirements
- Code tested on Python 3.7.
-
- - Minimum requirements: numpy, scipy
- - For plotting: matplotlib
- - For segmentation datasets (KITTI MOTS, MOTS-Challenge, DAVIS, YouTube-VIS): pycocotools
- - For DAVIS dataset: Pillow
- - For J & F metric: opencv_python, scikit_image
- - For simple test-cases for metrics: pytest
-
-use ```pip3 install -r requirements.txt``` to install all possible requirements.
-
-use ```pip3 install -r minimum_requirments.txt``` to only install the minimum if you don't need the extra functionality as listed above.
-
-## Timing analysis
-
-Evaluating CLEAR + ID metrics on the Lif_T tracker on MOT17-train (seconds) on an i7-9700K CPU with 8 physical cores (median of 3 runs):
-Num Cores|TrackEval|MOTChallenge|Speedup vs MOTChallenge|py-motmetrics|Speedup vs py-motmetrics
-:---|:---|:---|:---|:---|:---
-1|9.64|66.23|6.87x|99.65|10.34x
-4|3.01|29.42|9.77x| |33.11x*
-8|1.62|29.51|18.22x| |61.51x*
-
-*using a different number of cores as py-motmetrics doesn't allow multiprocessing.
-
-```
-python scripts/run_mot_challenge.py --BENCHMARK MOT17 --TRACKERS_TO_EVAL Lif_T --METRICS CLEAR Identity --USE_PARALLEL False --NUM_PARALLEL_CORES 1
-```
-
-Evaluating CLEAR + ID metrics on the LPC_MOT tracker on MOT20-train (seconds) on an i7-9700K CPU with 8 physical cores (median of 3 runs):
-Num Cores|TrackEval|MOTChallenge|Speedup vs MOTChallenge|py-motmetrics|Speedup vs py-motmetrics
-:---|:---|:---|:---|:---|:---
-1|18.63|105.3|5.65x|175.17|9.40x
-
-```
-python scripts/run_mot_challenge.py --BENCHMARK MOT20 --TRACKERS_TO_EVAL LPC_MOT --METRICS CLEAR Identity --USE_PARALLEL False --NUM_PARALLEL_CORES 1
-```
-
-## License
-
-TrackEval is released under the [MIT License](LICENSE).
-
-## Contact
-
-If you encounter any problems with the code, please contact [Jonathon Luiten](https://www.vision.rwth-aachen.de/person/216/) ([luiten@vision.rwth-aachen.de](mailto:luiten@vision.rwth-aachen.de)).
-If anything is unclear, or hard to use, please leave a comment either via email or as an issue and I would love to help.
-
-## Dedication
-
-This codebase was built for you, in order to make your life easier! For anyone doing research on tracking or using trackers, please don't hesitate to reach out with any comments or suggestions on how things could be improved.
-
-## Contributing
-
-We welcome contributions of new metrics and new supported benchmarks. Also any other new features or code improvements. Send a PR, an email, or open an issue detailing what you'd like to add/change to begin a conversation.
-
-## Citing TrackEval
-
-If you use this code in your research, please use the following BibTeX entry:
-
-```BibTeX
-@misc{luiten2020trackeval,
-  author = {Jonathon Luiten and Arne Hoffhues},
-  title = {TrackEval},
-  howpublished = {\url{https://github.com/JonathonLuiten/TrackEval}},
-  year = {2020}
-}
-```
-
-Furthermore, if you use the HOTA metrics, please cite the following paper:
-
-```BibTeX
-@article{luiten2020IJCV,
-  title={HOTA: A Higher Order Metric for Evaluating Multi-Object Tracking},
-  author={Luiten, Jonathon and Osep, Aljosa and Dendorfer, Patrick and Torr, Philip and Geiger, Andreas and Leal-Taix{\'e}, Laura and Leibe, Bastian},
-  journal={International Journal of Computer Vision},
-  pages={1--31},
-  year={2020},
-  publisher={Springer}
-}
-```
-
-If you use any other metrics please also cite the relevant papers, and don't forget to cite each of the benchmarks you evaluate on.
diff --git a/spaces/xiangdy/chatGPT/modules/__init__.py b/spaces/xiangdy/chatGPT/modules/__init__.py
deleted file mode 100644
index e69de29bb2d1d6434b8b29ae775ad8c2e48c5391..0000000000000000000000000000000000000000
diff --git a/spaces/xnetba/Chat_advance/custom.css b/spaces/xnetba/Chat_advance/custom.css
deleted file mode 100644
index 5143eb138ea2469d8c457c71cb210fd3fb7cbe15..0000000000000000000000000000000000000000
--- a/spaces/xnetba/Chat_advance/custom.css
+++ /dev/null
@@ -1,162 +0,0 @@
-:root {
-    --chatbot-color-light: #F3F3F3;
-    --chatbot-color-dark: #121111;
-}
-
-/* status_display */
-#status_display {
-    display: flex;
-    min-height: 2.5em;
-    align-items: flex-end;
-    justify-content: flex-end;
-}
-#status_display p {
-    font-size: .85em;
-    font-family: monospace;
-    color: var(--body-text-color-subdued);
-}
-
-#chuanhu_chatbot, #status_display {
-    transition: all 0.6s;
-}
-/* list */
-ol:not(.options), ul:not(.options) {
-    padding-inline-start: 2em !important;
-}
-
-/* light theme */
-#chuanhu_chatbot {
-    background-color: var(--chatbot-color-light) !important;
-}
-[data-testid = "bot"] {
-    background-color: #FFFFFF !important;
-}
-[data-testid = "user"] {
-    background-color: #95EC69 !important;
-}
-/* chat bubbles */
-[class *= "message"] {
-    border-radius: var(--radius-xl) !important;
-    border: none;
-    padding: var(--spacing-xl) !important;
-    font-size: var(--text-md) !important;
-    line-height: var(--line-md) !important;
-    min-height: calc(var(--text-md)*var(--line-md) + 2*var(--spacing-xl));
-    min-width: calc(var(--text-md)*var(--line-md) + 2*var(--spacing-xl));
-}
-[data-testid = "bot"] {
-    max-width: 85%;
-    border-bottom-left-radius: 0 !important;
-}
-[data-testid = "user"] {
-    max-width: 85%;
-    width: auto !important;
-    border-bottom-right-radius: 0 !important;
-}
-/* tables */
-table {
-    margin: 1em 0;
-    border-collapse: collapse;
-    empty-cells: show;
-}
-td,th {
-    border: 1.2px solid var(--border-color-primary) !important;
-    padding: 0.2em;
-}
-thead {
-    background-color: rgba(175,184,193,0.2);
-}
-thead th {
-    padding: .5em .2em;
-}
-/* inline code */
-code {
-    display: inline;
-    white-space: break-spaces;
-    border-radius: 6px;
-    margin: 0 2px 0 2px;
-    padding: .2em .4em .1em .4em;
-    background-color: rgba(175,184,193,0.2);
-}
-/* code blocks */
-pre code {
-    display: block;
-    overflow: auto;
-    white-space: pre;
-    background-color: hsla(0, 0%, 0%, 80%)!important;
-    border-radius: 10px;
-
padding: 1.4em 1.2em 0em 1.4em; - margin: 1.2em 2em 1.2em 0.5em; - color: #FFF; - box-shadow: 6px 6px 16px hsla(0, 0%, 0%, 0.2); -} -/* 代码高亮样式 */ -.highlight .hll { background-color: #49483e } -.highlight .c { color: #75715e } /* Comment */ -.highlight .err { color: #960050; background-color: #1e0010 } /* Error */ -.highlight .k { color: #66d9ef } /* Keyword */ -.highlight .l { color: #ae81ff } /* Literal */ -.highlight .n { color: #f8f8f2 } /* Name */ -.highlight .o { color: #f92672 } /* Operator */ -.highlight .p { color: #f8f8f2 } /* Punctuation */ -.highlight .ch { color: #75715e } /* Comment.Hashbang */ -.highlight .cm { color: #75715e } /* Comment.Multiline */ -.highlight .cp { color: #75715e } /* Comment.Preproc */ -.highlight .cpf { color: #75715e } /* Comment.PreprocFile */ -.highlight .c1 { color: #75715e } /* Comment.Single */ -.highlight .cs { color: #75715e } /* Comment.Special */ -.highlight .gd { color: #f92672 } /* Generic.Deleted */ -.highlight .ge { font-style: italic } /* Generic.Emph */ -.highlight .gi { color: #a6e22e } /* Generic.Inserted */ -.highlight .gs { font-weight: bold } /* Generic.Strong */ -.highlight .gu { color: #75715e } /* Generic.Subheading */ -.highlight .kc { color: #66d9ef } /* Keyword.Constant */ -.highlight .kd { color: #66d9ef } /* Keyword.Declaration */ -.highlight .kn { color: #f92672 } /* Keyword.Namespace */ -.highlight .kp { color: #66d9ef } /* Keyword.Pseudo */ -.highlight .kr { color: #66d9ef } /* Keyword.Reserved */ -.highlight .kt { color: #66d9ef } /* Keyword.Type */ -.highlight .ld { color: #e6db74 } /* Literal.Date */ -.highlight .m { color: #ae81ff } /* Literal.Number */ -.highlight .s { color: #e6db74 } /* Literal.String */ -.highlight .na { color: #a6e22e } /* Name.Attribute */ -.highlight .nb { color: #f8f8f2 } /* Name.Builtin */ -.highlight .nc { color: #a6e22e } /* Name.Class */ -.highlight .no { color: #66d9ef } /* Name.Constant */ -.highlight .nd { color: #a6e22e } /* Name.Decorator */ -.highlight .ni { color: #f8f8f2 } /* Name.Entity */ -.highlight .ne { color: #a6e22e } /* Name.Exception */ -.highlight .nf { color: #a6e22e } /* Name.Function */ -.highlight .nl { color: #f8f8f2 } /* Name.Label */ -.highlight .nn { color: #f8f8f2 } /* Name.Namespace */ -.highlight .nx { color: #a6e22e } /* Name.Other */ -.highlight .py { color: #f8f8f2 } /* Name.Property */ -.highlight .nt { color: #f92672 } /* Name.Tag */ -.highlight .nv { color: #f8f8f2 } /* Name.Variable */ -.highlight .ow { color: #f92672 } /* Operator.Word */ -.highlight .w { color: #f8f8f2 } /* Text.Whitespace */ -.highlight .mb { color: #ae81ff } /* Literal.Number.Bin */ -.highlight .mf { color: #ae81ff } /* Literal.Number.Float */ -.highlight .mh { color: #ae81ff } /* Literal.Number.Hex */ -.highlight .mi { color: #ae81ff } /* Literal.Number.Integer */ -.highlight .mo { color: #ae81ff } /* Literal.Number.Oct */ -.highlight .sa { color: #e6db74 } /* Literal.String.Affix */ -.highlight .sb { color: #e6db74 } /* Literal.String.Backtick */ -.highlight .sc { color: #e6db74 } /* Literal.String.Char */ -.highlight .dl { color: #e6db74 } /* Literal.String.Delimiter */ -.highlight .sd { color: #e6db74 } /* Literal.String.Doc */ -.highlight .s2 { color: #e6db74 } /* Literal.String.Double */ -.highlight .se { color: #ae81ff } /* Literal.String.Escape */ -.highlight .sh { color: #e6db74 } /* Literal.String.Heredoc */ -.highlight .si { color: #e6db74 } /* Literal.String.Interpol */ -.highlight .sx { color: #e6db74 } /* Literal.String.Other */ -.highlight .sr { color: #e6db74 } /* 
Literal.String.Regex */ -.highlight .s1 { color: #e6db74 } /* Literal.String.Single */ -.highlight .ss { color: #e6db74 } /* Literal.String.Symbol */ -.highlight .bp { color: #f8f8f2 } /* Name.Builtin.Pseudo */ -.highlight .fm { color: #a6e22e } /* Name.Function.Magic */ -.highlight .vc { color: #f8f8f2 } /* Name.Variable.Class */ -.highlight .vg { color: #f8f8f2 } /* Name.Variable.Global */ -.highlight .vi { color: #f8f8f2 } /* Name.Variable.Instance */ -.highlight .vm { color: #f8f8f2 } /* Name.Variable.Magic */ -.highlight .il { color: #ae81ff } /* Literal.Number.Integer.Long */ diff --git a/spaces/xu1998hz/sescore_german_mt/tests.py b/spaces/xu1998hz/sescore_german_mt/tests.py deleted file mode 100644 index 601ed757507caebec67493462d11eb4c8901c2a1..0000000000000000000000000000000000000000 --- a/spaces/xu1998hz/sescore_german_mt/tests.py +++ /dev/null @@ -1,17 +0,0 @@ -test_cases = [ - { - "predictions": [0, 0], - "references": [1, 1], - "result": {"metric_score": 0} - }, - { - "predictions": [1, 1], - "references": [1, 1], - "result": {"metric_score": 1} - }, - { - "predictions": [1, 0], - "references": [1, 1], - "result": {"metric_score": 0.5} - } -] \ No newline at end of file diff --git a/spaces/xuetao/bingo3/src/lib/isomorphic/browser.ts b/spaces/xuetao/bingo3/src/lib/isomorphic/browser.ts deleted file mode 100644 index de125b1f1786d1618cb1ff47f403d76c6784f4ce..0000000000000000000000000000000000000000 --- a/spaces/xuetao/bingo3/src/lib/isomorphic/browser.ts +++ /dev/null @@ -1,11 +0,0 @@ -'use client' - -const debug = console.info.bind(console) - -class WebSocketAlias extends WebSocket { - constructor(address: string | URL, ...args: any) { - super(address) - } -} - -export default { fetch, WebSocket: WebSocketAlias, debug } diff --git a/spaces/xxie92/antibody_visulization/design_testset.py b/spaces/xxie92/antibody_visulization/design_testset.py deleted file mode 100644 index 63b6008200fdd26c88a36411e7afca1bf03ddc3e..0000000000000000000000000000000000000000 --- a/spaces/xxie92/antibody_visulization/design_testset.py +++ /dev/null @@ -1,4 +0,0 @@ -from diffab.tools.runner.design_for_testset import main - -if __name__ == '__main__': - main() diff --git a/spaces/yderre-aubay/midi-player-demo/src/common/selection/Selection.ts b/spaces/yderre-aubay/midi-player-demo/src/common/selection/Selection.ts deleted file mode 100644 index d82f3fc47079a1293f7e29a62388211a02021cff..0000000000000000000000000000000000000000 --- a/spaces/yderre-aubay/midi-player-demo/src/common/selection/Selection.ts +++ /dev/null @@ -1,71 +0,0 @@ -import cloneDeep from "lodash/cloneDeep" -import { MaxNoteNumber } from "../../main/Constants" -import { IRect } from "../geometry" -import { NoteCoordTransform } from "../transform" -import { clampNotePoint, NotePoint } from "../transform/NotePoint" - -export interface Selection { - from: NotePoint - to: NotePoint -} - -export const getSelectionBounds = ( - selection: Selection, - transform: NoteCoordTransform, -): IRect => { - const left = transform.getX(selection.from.tick) - const right = transform.getX(selection.to.tick) - const top = transform.getY(selection.from.noteNumber) - const bottom = transform.getY(selection.to.noteNumber) - return { - x: left, - y: top, - width: right - left, - height: bottom - top, - } -} - -export const movedSelection = ( - selection: Selection, - dt: number, - dn: number, -): Selection => { - const s = cloneDeep(selection) - - s.from.tick += dt - s.to.tick += dt - s.from.noteNumber += dn - s.to.noteNumber += dn - - return s -} - -// to 
make `to` the bottom-right corner of the selection
-// (normalize so that `from` is the top-left and `to` is the bottom-right)
-
-export const regularizedSelection = (
-  fromTick: number,
-  fromNoteNumber: number,
-  toTick: number,
-  toNoteNumber: number,
-): Selection => ({
-  from: {
-    tick: Math.max(0, Math.min(fromTick, toTick)),
-    noteNumber: Math.min(
-      MaxNoteNumber,
-      Math.max(0, Math.max(fromNoteNumber, toNoteNumber)),
-    ),
-  },
-  to: {
-    tick: Math.max(fromTick, toTick),
-    noteNumber: Math.min(
-      MaxNoteNumber,
-      Math.max(0, Math.min(fromNoteNumber, toNoteNumber)),
-    ),
-  },
-})
-
-export const clampSelection = (selection: Selection): Selection => ({
-  from: clampNotePoint(selection.from),
-  to: clampNotePoint(selection.to),
-})
diff --git a/spaces/yizhangliu/Grounded-Segment-Anything/transformers_4_35_0/commands/env.py b/spaces/yizhangliu/Grounded-Segment-Anything/transformers_4_35_0/commands/env.py
deleted file mode 100644
index 8567bbcf5b61e8a02151569c793099a5f3998fa0..0000000000000000000000000000000000000000
--- a/spaces/yizhangliu/Grounded-Segment-Anything/transformers_4_35_0/commands/env.py
+++ /dev/null
@@ -1,143 +0,0 @@
-# Copyright 2020 The HuggingFace Team. All rights reserved.
-#
-# Licensed under the Apache License, Version 2.0 (the "License");
-# you may not use this file except in compliance with the License.
-# You may obtain a copy of the License at
-#
-#     http://www.apache.org/licenses/LICENSE-2.0
-#
-# Unless required by applicable law or agreed to in writing, software
-# distributed under the License is distributed on an "AS IS" BASIS,
-# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
-# See the License for the specific language governing permissions and
-# limitations under the License.
-
-import importlib.util
-import os
-import platform
-from argparse import ArgumentParser
-
-import huggingface_hub
-
-from .. import __version__ as version
-from ..utils import (
-    is_accelerate_available,
-    is_flax_available,
-    is_safetensors_available,
-    is_tf_available,
-    is_torch_available,
-)
-from . import BaseTransformersCLICommand
-
-
-def info_command_factory(_):
-    return EnvironmentCommand()
-
-
-def download_command_factory(args):
-    return EnvironmentCommand(args.accelerate_config_file)
-
-
-class EnvironmentCommand(BaseTransformersCLICommand):
-    @staticmethod
-    def register_subcommand(parser: ArgumentParser):
-        download_parser = parser.add_parser("env")
-        download_parser.set_defaults(func=info_command_factory)
-        download_parser.add_argument(
-            "--accelerate-config_file",
-            default=None,
-            help="The accelerate config file to use for the default values in the launching script.",
-        )
-        download_parser.set_defaults(func=download_command_factory)
-
-    def __init__(self, accelerate_config_file, *args) -> None:
-        self._accelerate_config_file = accelerate_config_file
-
-    def run(self):
-        safetensors_version = "not installed"
-        if is_safetensors_available():
-            import safetensors
-
-            safetensors_version = safetensors.__version__
-        elif importlib.util.find_spec("safetensors") is not None:
-            import safetensors
-
-            safetensors_version = f"{safetensors.__version__} but is ignored because of PyTorch version too old."
-
-        accelerate_version = "not installed"
-        accelerate_config = accelerate_config_str = "not found"
-        if is_accelerate_available():
-            import accelerate
-            from accelerate.commands.config import default_config_file, load_config_from_file
-
-            accelerate_version = accelerate.__version__
-            # Get the default from the config file.
- if self._accelerate_config_file is not None or os.path.isfile(default_config_file): - accelerate_config = load_config_from_file(self._accelerate_config_file).to_dict() - - accelerate_config_str = ( - "\n".join([f"\t- {prop}: {val}" for prop, val in accelerate_config.items()]) - if isinstance(accelerate_config, dict) - else f"\t{accelerate_config}" - ) - - pt_version = "not installed" - pt_cuda_available = "NA" - if is_torch_available(): - import torch - - pt_version = torch.__version__ - pt_cuda_available = torch.cuda.is_available() - - tf_version = "not installed" - tf_cuda_available = "NA" - if is_tf_available(): - import tensorflow as tf - - tf_version = tf.__version__ - try: - # deprecated in v2.1 - tf_cuda_available = tf.test.is_gpu_available() - except AttributeError: - # returns list of devices, convert to bool - tf_cuda_available = bool(tf.config.list_physical_devices("GPU")) - - flax_version = "not installed" - jax_version = "not installed" - jaxlib_version = "not installed" - jax_backend = "NA" - if is_flax_available(): - import flax - import jax - import jaxlib - - flax_version = flax.__version__ - jax_version = jax.__version__ - jaxlib_version = jaxlib.__version__ - jax_backend = jax.lib.xla_bridge.get_backend().platform - - info = { - "`transformers` version": version, - "Platform": platform.platform(), - "Python version": platform.python_version(), - "Huggingface_hub version": huggingface_hub.__version__, - "Safetensors version": f"{safetensors_version}", - "Accelerate version": f"{accelerate_version}", - "Accelerate config": f"{accelerate_config_str}", - "PyTorch version (GPU?)": f"{pt_version} ({pt_cuda_available})", - "Tensorflow version (GPU?)": f"{tf_version} ({tf_cuda_available})", - "Flax version (CPU?/GPU?/TPU?)": f"{flax_version} ({jax_backend})", - "Jax version": f"{jax_version}", - "JaxLib version": f"{jaxlib_version}", - "Using GPU in script?": "", - "Using distributed or parallel set-up in script?": "", - } - - print("\nCopy-and-paste the text below in your GitHub issue and FILL OUT the two last points.\n") - print(self.format_dict(info)) - - return info - - @staticmethod - def format_dict(d): - return "\n".join([f"- {prop}: {val}" for prop, val in d.items()]) + "\n" diff --git a/spaces/yizhangliu/Grounded-Segment-Anything/transformers_4_35_0/models/byt5/__init__.py b/spaces/yizhangliu/Grounded-Segment-Anything/transformers_4_35_0/models/byt5/__init__.py deleted file mode 100644 index 662a427383ff693bde17e96b0f74264442a1cc0f..0000000000000000000000000000000000000000 --- a/spaces/yizhangliu/Grounded-Segment-Anything/transformers_4_35_0/models/byt5/__init__.py +++ /dev/null @@ -1,28 +0,0 @@ -# Copyright 2021 The HuggingFace Team. All rights reserved. -# -# Licensed under the Apache License, Version 2.0 (the "License"); -# you may not use this file except in compliance with the License. -# You may obtain a copy of the License at -# -# http://www.apache.org/licenses/LICENSE-2.0 -# -# Unless required by applicable law or agreed to in writing, software -# distributed under the License is distributed on an "AS IS" BASIS, -# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. -# See the License for the specific language governing permissions and -# limitations under the License. 
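-
-# ByT5 operates directly on UTF-8 bytes and reuses the T5 model classes, so this
-# module only exposes ByT5Tokenizer, registered lazily via _LazyModule below.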
- -from typing import TYPE_CHECKING - -from ...utils import _LazyModule - - -_import_structure = {"tokenization_byt5": ["ByT5Tokenizer"]} - - -if TYPE_CHECKING: - from .tokenization_byt5 import ByT5Tokenizer -else: - import sys - - sys.modules[__name__] = _LazyModule(__name__, globals()["__file__"], _import_structure, module_spec=__spec__) diff --git a/spaces/yizhangliu/Grounded-Segment-Anything/transformers_4_35_0/models/cvt/modeling_cvt.py b/spaces/yizhangliu/Grounded-Segment-Anything/transformers_4_35_0/models/cvt/modeling_cvt.py deleted file mode 100644 index d21b5c9a8749a6544ad0fb590be88927f63d0ab9..0000000000000000000000000000000000000000 --- a/spaces/yizhangliu/Grounded-Segment-Anything/transformers_4_35_0/models/cvt/modeling_cvt.py +++ /dev/null @@ -1,733 +0,0 @@ -# coding=utf-8 -# Copyright 2022 Microsoft Research and The HuggingFace Inc. team. All rights reserved. -# -# Licensed under the Apache License, Version 2.0 (the "License"); -# you may not use this file except in compliance with the License. -# You may obtain a copy of the License at -# -# http://www.apache.org/licenses/LICENSE-2.0 -# -# Unless required by applicable law or agreed to in writing, software -# distributed under the License is distributed on an "AS IS" BASIS, -# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. -# See the License for the specific language governing permissions and -# limitations under the License. -""" PyTorch CvT model.""" - - -import collections.abc -from dataclasses import dataclass -from typing import Optional, Tuple, Union - -import torch -import torch.utils.checkpoint -from torch import nn -from torch.nn import BCEWithLogitsLoss, CrossEntropyLoss, MSELoss - -from ...file_utils import add_code_sample_docstrings, add_start_docstrings, add_start_docstrings_to_model_forward -from ...modeling_outputs import ImageClassifierOutputWithNoAttention, ModelOutput -from ...modeling_utils import PreTrainedModel, find_pruneable_heads_and_indices, prune_linear_layer -from ...utils import logging -from .configuration_cvt import CvtConfig - - -logger = logging.get_logger(__name__) - -# General docstring -_CONFIG_FOR_DOC = "CvtConfig" - -# Base docstring -_CHECKPOINT_FOR_DOC = "microsoft/cvt-13" -_EXPECTED_OUTPUT_SHAPE = [1, 384, 14, 14] - -# Image classification docstring -_IMAGE_CLASS_CHECKPOINT = "microsoft/cvt-13" -_IMAGE_CLASS_EXPECTED_OUTPUT = "tabby, tabby cat" - - -CVT_PRETRAINED_MODEL_ARCHIVE_LIST = [ - "microsoft/cvt-13", - "microsoft/cvt-13-384", - "microsoft/cvt-13-384-22k", - "microsoft/cvt-21", - "microsoft/cvt-21-384", - "microsoft/cvt-21-384-22k", - # See all Cvt models at https://huggingface.co/models?filter=cvt -] - - -@dataclass -class BaseModelOutputWithCLSToken(ModelOutput): - """ - Base class for model's outputs, with potential hidden states and attentions. - - Args: - last_hidden_state (`torch.FloatTensor` of shape `(batch_size, sequence_length, hidden_size)`): - Sequence of hidden-states at the output of the last layer of the model. - cls_token_value (`torch.FloatTensor` of shape `(batch_size, 1, hidden_size)`): - Classification token at the output of the last layer of the model. - hidden_states (`tuple(torch.FloatTensor)`, *optional*, returned when `output_hidden_states=True` is passed or when `config.output_hidden_states=True`): - Tuple of `torch.FloatTensor` (one for the output of the embeddings + one for the output of each layer) of - shape `(batch_size, sequence_length, hidden_size)`. 
Hidden-states of the model at the output of each layer - plus the initial embedding outputs. - """ - - last_hidden_state: torch.FloatTensor = None - cls_token_value: torch.FloatTensor = None - hidden_states: Optional[Tuple[torch.FloatTensor]] = None - - -# Copied from transformers.models.beit.modeling_beit.drop_path -def drop_path(input: torch.Tensor, drop_prob: float = 0.0, training: bool = False) -> torch.Tensor: - """ - Drop paths (Stochastic Depth) per sample (when applied in main path of residual blocks). - - Comment by Ross Wightman: This is the same as the DropConnect impl I created for EfficientNet, etc networks, - however, the original name is misleading as 'Drop Connect' is a different form of dropout in a separate paper... - See discussion: https://github.com/tensorflow/tpu/issues/494#issuecomment-532968956 ... I've opted for changing the - layer and argument names to 'drop path' rather than mix DropConnect as a layer name and use 'survival rate' as the - argument. - """ - if drop_prob == 0.0 or not training: - return input - keep_prob = 1 - drop_prob - shape = (input.shape[0],) + (1,) * (input.ndim - 1) # work with diff dim tensors, not just 2D ConvNets - random_tensor = keep_prob + torch.rand(shape, dtype=input.dtype, device=input.device) - random_tensor.floor_() # binarize - output = input.div(keep_prob) * random_tensor - return output - - -# Copied from transformers.models.beit.modeling_beit.BeitDropPath -class CvtDropPath(nn.Module): - """Drop paths (Stochastic Depth) per sample (when applied in main path of residual blocks).""" - - def __init__(self, drop_prob: Optional[float] = None) -> None: - super().__init__() - self.drop_prob = drop_prob - - def forward(self, hidden_states: torch.Tensor) -> torch.Tensor: - return drop_path(hidden_states, self.drop_prob, self.training) - - def extra_repr(self) -> str: - return "p={}".format(self.drop_prob) - - -class CvtEmbeddings(nn.Module): - """ - Construct the CvT embeddings. - """ - - def __init__(self, patch_size, num_channels, embed_dim, stride, padding, dropout_rate): - super().__init__() - self.convolution_embeddings = CvtConvEmbeddings( - patch_size=patch_size, num_channels=num_channels, embed_dim=embed_dim, stride=stride, padding=padding - ) - self.dropout = nn.Dropout(dropout_rate) - - def forward(self, pixel_values): - hidden_state = self.convolution_embeddings(pixel_values) - hidden_state = self.dropout(hidden_state) - return hidden_state - - -class CvtConvEmbeddings(nn.Module): - """ - Image to Conv Embedding. 
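    Example (a minimal shape sketch; the values patch_size=7, stride=4, padding=2 and
    embed_dim=64 are assumed from the CvT-13 stage-0 configuration, not taken from this
    file):

    ```python
    import torch

    emb = CvtConvEmbeddings(patch_size=7, num_channels=3, embed_dim=64, stride=4, padding=2)
    out = emb(torch.randn(1, 3, 224, 224))
    # spatial size follows the usual conv formula: floor((224 + 2*2 - 7) / 4) + 1 = 56
    print(out.shape)  # torch.Size([1, 64, 56, 56])
    ```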
- """ - - def __init__(self, patch_size, num_channels, embed_dim, stride, padding): - super().__init__() - patch_size = patch_size if isinstance(patch_size, collections.abc.Iterable) else (patch_size, patch_size) - self.patch_size = patch_size - self.projection = nn.Conv2d(num_channels, embed_dim, kernel_size=patch_size, stride=stride, padding=padding) - self.normalization = nn.LayerNorm(embed_dim) - - def forward(self, pixel_values): - pixel_values = self.projection(pixel_values) - batch_size, num_channels, height, width = pixel_values.shape - hidden_size = height * width - # rearrange "b c h w -> b (h w) c" - pixel_values = pixel_values.view(batch_size, num_channels, hidden_size).permute(0, 2, 1) - if self.normalization: - pixel_values = self.normalization(pixel_values) - # rearrange "b (h w) c" -> b c h w" - pixel_values = pixel_values.permute(0, 2, 1).view(batch_size, num_channels, height, width) - return pixel_values - - -class CvtSelfAttentionConvProjection(nn.Module): - def __init__(self, embed_dim, kernel_size, padding, stride): - super().__init__() - self.convolution = nn.Conv2d( - embed_dim, - embed_dim, - kernel_size=kernel_size, - padding=padding, - stride=stride, - bias=False, - groups=embed_dim, - ) - self.normalization = nn.BatchNorm2d(embed_dim) - - def forward(self, hidden_state): - hidden_state = self.convolution(hidden_state) - hidden_state = self.normalization(hidden_state) - return hidden_state - - -class CvtSelfAttentionLinearProjection(nn.Module): - def forward(self, hidden_state): - batch_size, num_channels, height, width = hidden_state.shape - hidden_size = height * width - # rearrange " b c h w -> b (h w) c" - hidden_state = hidden_state.view(batch_size, num_channels, hidden_size).permute(0, 2, 1) - return hidden_state - - -class CvtSelfAttentionProjection(nn.Module): - def __init__(self, embed_dim, kernel_size, padding, stride, projection_method="dw_bn"): - super().__init__() - if projection_method == "dw_bn": - self.convolution_projection = CvtSelfAttentionConvProjection(embed_dim, kernel_size, padding, stride) - self.linear_projection = CvtSelfAttentionLinearProjection() - - def forward(self, hidden_state): - hidden_state = self.convolution_projection(hidden_state) - hidden_state = self.linear_projection(hidden_state) - return hidden_state - - -class CvtSelfAttention(nn.Module): - def __init__( - self, - num_heads, - embed_dim, - kernel_size, - padding_q, - padding_kv, - stride_q, - stride_kv, - qkv_projection_method, - qkv_bias, - attention_drop_rate, - with_cls_token=True, - **kwargs, - ): - super().__init__() - self.scale = embed_dim**-0.5 - self.with_cls_token = with_cls_token - self.embed_dim = embed_dim - self.num_heads = num_heads - - self.convolution_projection_query = CvtSelfAttentionProjection( - embed_dim, - kernel_size, - padding_q, - stride_q, - projection_method="linear" if qkv_projection_method == "avg" else qkv_projection_method, - ) - self.convolution_projection_key = CvtSelfAttentionProjection( - embed_dim, kernel_size, padding_kv, stride_kv, projection_method=qkv_projection_method - ) - self.convolution_projection_value = CvtSelfAttentionProjection( - embed_dim, kernel_size, padding_kv, stride_kv, projection_method=qkv_projection_method - ) - - self.projection_query = nn.Linear(embed_dim, embed_dim, bias=qkv_bias) - self.projection_key = nn.Linear(embed_dim, embed_dim, bias=qkv_bias) - self.projection_value = nn.Linear(embed_dim, embed_dim, bias=qkv_bias) - - self.dropout = nn.Dropout(attention_drop_rate) - - def 
rearrange_for_multi_head_attention(self, hidden_state): - batch_size, hidden_size, _ = hidden_state.shape - head_dim = self.embed_dim // self.num_heads - # rearrange 'b t (h d) -> b h t d' - return hidden_state.view(batch_size, hidden_size, self.num_heads, head_dim).permute(0, 2, 1, 3) - - def forward(self, hidden_state, height, width): - if self.with_cls_token: - cls_token, hidden_state = torch.split(hidden_state, [1, height * width], 1) - batch_size, hidden_size, num_channels = hidden_state.shape - # rearrange "b (h w) c -> b c h w" - hidden_state = hidden_state.permute(0, 2, 1).view(batch_size, num_channels, height, width) - - key = self.convolution_projection_key(hidden_state) - query = self.convolution_projection_query(hidden_state) - value = self.convolution_projection_value(hidden_state) - - if self.with_cls_token: - query = torch.cat((cls_token, query), dim=1) - key = torch.cat((cls_token, key), dim=1) - value = torch.cat((cls_token, value), dim=1) - - head_dim = self.embed_dim // self.num_heads - - query = self.rearrange_for_multi_head_attention(self.projection_query(query)) - key = self.rearrange_for_multi_head_attention(self.projection_key(key)) - value = self.rearrange_for_multi_head_attention(self.projection_value(value)) - - attention_score = torch.einsum("bhlk,bhtk->bhlt", [query, key]) * self.scale - attention_probs = torch.nn.functional.softmax(attention_score, dim=-1) - attention_probs = self.dropout(attention_probs) - - context = torch.einsum("bhlt,bhtv->bhlv", [attention_probs, value]) - # rearrange"b h t d -> b t (h d)" - _, _, hidden_size, _ = context.shape - context = context.permute(0, 2, 1, 3).contiguous().view(batch_size, hidden_size, self.num_heads * head_dim) - return context - - -class CvtSelfOutput(nn.Module): - """ - The residual connection is defined in CvtLayer instead of here (as is the case with other models), due to the - layernorm applied before each block. 
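    Note that `forward` accepts an `input_tensor` argument for interface parity with other
    models but does not use it: the block computes `dropout(dense(hidden_state))` and the
    caller (`CvtLayer`) performs the residual addition itself.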
- """ - - def __init__(self, embed_dim, drop_rate): - super().__init__() - self.dense = nn.Linear(embed_dim, embed_dim) - self.dropout = nn.Dropout(drop_rate) - - def forward(self, hidden_state, input_tensor): - hidden_state = self.dense(hidden_state) - hidden_state = self.dropout(hidden_state) - return hidden_state - - -class CvtAttention(nn.Module): - def __init__( - self, - num_heads, - embed_dim, - kernel_size, - padding_q, - padding_kv, - stride_q, - stride_kv, - qkv_projection_method, - qkv_bias, - attention_drop_rate, - drop_rate, - with_cls_token=True, - ): - super().__init__() - self.attention = CvtSelfAttention( - num_heads, - embed_dim, - kernel_size, - padding_q, - padding_kv, - stride_q, - stride_kv, - qkv_projection_method, - qkv_bias, - attention_drop_rate, - with_cls_token, - ) - self.output = CvtSelfOutput(embed_dim, drop_rate) - self.pruned_heads = set() - - def prune_heads(self, heads): - if len(heads) == 0: - return - heads, index = find_pruneable_heads_and_indices( - heads, self.attention.num_attention_heads, self.attention.attention_head_size, self.pruned_heads - ) - - # Prune linear layers - self.attention.query = prune_linear_layer(self.attention.query, index) - self.attention.key = prune_linear_layer(self.attention.key, index) - self.attention.value = prune_linear_layer(self.attention.value, index) - self.output.dense = prune_linear_layer(self.output.dense, index, dim=1) - - # Update hyper params and store pruned heads - self.attention.num_attention_heads = self.attention.num_attention_heads - len(heads) - self.attention.all_head_size = self.attention.attention_head_size * self.attention.num_attention_heads - self.pruned_heads = self.pruned_heads.union(heads) - - def forward(self, hidden_state, height, width): - self_output = self.attention(hidden_state, height, width) - attention_output = self.output(self_output, hidden_state) - return attention_output - - -class CvtIntermediate(nn.Module): - def __init__(self, embed_dim, mlp_ratio): - super().__init__() - self.dense = nn.Linear(embed_dim, int(embed_dim * mlp_ratio)) - self.activation = nn.GELU() - - def forward(self, hidden_state): - hidden_state = self.dense(hidden_state) - hidden_state = self.activation(hidden_state) - return hidden_state - - -class CvtOutput(nn.Module): - def __init__(self, embed_dim, mlp_ratio, drop_rate): - super().__init__() - self.dense = nn.Linear(int(embed_dim * mlp_ratio), embed_dim) - self.dropout = nn.Dropout(drop_rate) - - def forward(self, hidden_state, input_tensor): - hidden_state = self.dense(hidden_state) - hidden_state = self.dropout(hidden_state) - hidden_state = hidden_state + input_tensor - return hidden_state - - -class CvtLayer(nn.Module): - """ - CvtLayer composed by attention layers, normalization and multi-layer perceptrons (mlps). 
- """ - - def __init__( - self, - num_heads, - embed_dim, - kernel_size, - padding_q, - padding_kv, - stride_q, - stride_kv, - qkv_projection_method, - qkv_bias, - attention_drop_rate, - drop_rate, - mlp_ratio, - drop_path_rate, - with_cls_token=True, - ): - super().__init__() - self.attention = CvtAttention( - num_heads, - embed_dim, - kernel_size, - padding_q, - padding_kv, - stride_q, - stride_kv, - qkv_projection_method, - qkv_bias, - attention_drop_rate, - drop_rate, - with_cls_token, - ) - - self.intermediate = CvtIntermediate(embed_dim, mlp_ratio) - self.output = CvtOutput(embed_dim, mlp_ratio, drop_rate) - self.drop_path = CvtDropPath(drop_prob=drop_path_rate) if drop_path_rate > 0.0 else nn.Identity() - self.layernorm_before = nn.LayerNorm(embed_dim) - self.layernorm_after = nn.LayerNorm(embed_dim) - - def forward(self, hidden_state, height, width): - self_attention_output = self.attention( - self.layernorm_before(hidden_state), # in Cvt, layernorm is applied before self-attention - height, - width, - ) - attention_output = self_attention_output - attention_output = self.drop_path(attention_output) - - # first residual connection - hidden_state = attention_output + hidden_state - - # in Cvt, layernorm is also applied after self-attention - layer_output = self.layernorm_after(hidden_state) - layer_output = self.intermediate(layer_output) - - # second residual connection is done here - layer_output = self.output(layer_output, hidden_state) - layer_output = self.drop_path(layer_output) - return layer_output - - -class CvtStage(nn.Module): - def __init__(self, config, stage): - super().__init__() - self.config = config - self.stage = stage - if self.config.cls_token[self.stage]: - self.cls_token = nn.Parameter(torch.randn(1, 1, self.config.embed_dim[-1])) - - self.embedding = CvtEmbeddings( - patch_size=config.patch_sizes[self.stage], - stride=config.patch_stride[self.stage], - num_channels=config.num_channels if self.stage == 0 else config.embed_dim[self.stage - 1], - embed_dim=config.embed_dim[self.stage], - padding=config.patch_padding[self.stage], - dropout_rate=config.drop_rate[self.stage], - ) - - drop_path_rates = [x.item() for x in torch.linspace(0, config.drop_path_rate[self.stage], config.depth[stage])] - - self.layers = nn.Sequential( - *[ - CvtLayer( - num_heads=config.num_heads[self.stage], - embed_dim=config.embed_dim[self.stage], - kernel_size=config.kernel_qkv[self.stage], - padding_q=config.padding_q[self.stage], - padding_kv=config.padding_kv[self.stage], - stride_kv=config.stride_kv[self.stage], - stride_q=config.stride_q[self.stage], - qkv_projection_method=config.qkv_projection_method[self.stage], - qkv_bias=config.qkv_bias[self.stage], - attention_drop_rate=config.attention_drop_rate[self.stage], - drop_rate=config.drop_rate[self.stage], - drop_path_rate=drop_path_rates[self.stage], - mlp_ratio=config.mlp_ratio[self.stage], - with_cls_token=config.cls_token[self.stage], - ) - for _ in range(config.depth[self.stage]) - ] - ) - - def forward(self, hidden_state): - cls_token = None - hidden_state = self.embedding(hidden_state) - batch_size, num_channels, height, width = hidden_state.shape - # rearrange b c h w -> b (h w) c" - hidden_state = hidden_state.view(batch_size, num_channels, height * width).permute(0, 2, 1) - if self.config.cls_token[self.stage]: - cls_token = self.cls_token.expand(batch_size, -1, -1) - hidden_state = torch.cat((cls_token, hidden_state), dim=1) - - for layer in self.layers: - layer_outputs = layer(hidden_state, height, width) - hidden_state 
= layer_outputs - - if self.config.cls_token[self.stage]: - cls_token, hidden_state = torch.split(hidden_state, [1, height * width], 1) - hidden_state = hidden_state.permute(0, 2, 1).view(batch_size, num_channels, height, width) - return hidden_state, cls_token - - -class CvtEncoder(nn.Module): - def __init__(self, config): - super().__init__() - self.config = config - self.stages = nn.ModuleList([]) - for stage_idx in range(len(config.depth)): - self.stages.append(CvtStage(config, stage_idx)) - - def forward(self, pixel_values, output_hidden_states=False, return_dict=True): - all_hidden_states = () if output_hidden_states else None - hidden_state = pixel_values - - cls_token = None - for _, (stage_module) in enumerate(self.stages): - hidden_state, cls_token = stage_module(hidden_state) - if output_hidden_states: - all_hidden_states = all_hidden_states + (hidden_state,) - - if not return_dict: - return tuple(v for v in [hidden_state, cls_token, all_hidden_states] if v is not None) - - return BaseModelOutputWithCLSToken( - last_hidden_state=hidden_state, - cls_token_value=cls_token, - hidden_states=all_hidden_states, - ) - - -class CvtPreTrainedModel(PreTrainedModel): - """ - An abstract class to handle weights initialization and a simple interface for downloading and loading pretrained - models. - """ - - config_class = CvtConfig - base_model_prefix = "cvt" - main_input_name = "pixel_values" - - def _init_weights(self, module): - """Initialize the weights""" - if isinstance(module, (nn.Linear, nn.Conv2d)): - module.weight.data = nn.init.trunc_normal_(module.weight.data, mean=0.0, std=self.config.initializer_range) - if module.bias is not None: - module.bias.data.zero_() - elif isinstance(module, nn.LayerNorm): - module.bias.data.zero_() - module.weight.data.fill_(1.0) - elif isinstance(module, CvtStage): - if self.config.cls_token[module.stage]: - module.cls_token.data = nn.init.trunc_normal_( - torch.zeros(1, 1, self.config.embed_dim[-1]), mean=0.0, std=self.config.initializer_range - ) - - -CVT_START_DOCSTRING = r""" - This model is a PyTorch [torch.nn.Module](https://pytorch.org/docs/stable/nn.html#torch.nn.Module) subclass. Use it - as a regular PyTorch Module and refer to the PyTorch documentation for all matter related to general usage and - behavior. - - Parameters: - config ([`CvtConfig`]): Model configuration class with all the parameters of the model. - Initializing with a config file does not load the weights associated with the model, only the - configuration. Check out the [`~PreTrainedModel.from_pretrained`] method to load the model weights. -""" - -CVT_INPUTS_DOCSTRING = r""" - Args: - pixel_values (`torch.FloatTensor` of shape `(batch_size, num_channels, height, width)`): - Pixel values. Pixel values can be obtained using [`AutoImageProcessor`]. See [`CvtImageProcessor.__call__`] - for details. - output_hidden_states (`bool`, *optional*): - Whether or not to return the hidden states of all layers. See `hidden_states` under returned tensors for - more detail. - return_dict (`bool`, *optional*): - Whether or not to return a [`~file_utils.ModelOutput`] instead of a plain tuple. 
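    Example (a hedged usage sketch; the checkpoint name and expected output shape come from
    the doc constants at the top of this file, and the image URL is the sample photo used
    throughout the Transformers docs):

    ```python
    from transformers import AutoImageProcessor, CvtModel
    from PIL import Image
    import requests
    import torch

    url = "http://images.cocodataset.org/val2017/000000039769.jpg"
    image = Image.open(requests.get(url, stream=True).raw)

    processor = AutoImageProcessor.from_pretrained("microsoft/cvt-13")
    model = CvtModel.from_pretrained("microsoft/cvt-13")

    inputs = processor(images=image, return_tensors="pt")
    with torch.no_grad():
        outputs = model(**inputs)
    print(outputs.last_hidden_state.shape)  # torch.Size([1, 384, 14, 14])
    ```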
-""" - - -@add_start_docstrings( - "The bare Cvt Model transformer outputting raw hidden-states without any specific head on top.", - CVT_START_DOCSTRING, -) -class CvtModel(CvtPreTrainedModel): - def __init__(self, config, add_pooling_layer=True): - super().__init__(config) - self.config = config - self.encoder = CvtEncoder(config) - self.post_init() - - def _prune_heads(self, heads_to_prune): - """ - Prunes heads of the model. heads_to_prune: dict of {layer_num: list of heads to prune in this layer} See base - class PreTrainedModel - """ - for layer, heads in heads_to_prune.items(): - self.encoder.layer[layer].attention.prune_heads(heads) - - @add_start_docstrings_to_model_forward(CVT_INPUTS_DOCSTRING) - @add_code_sample_docstrings( - checkpoint=_CHECKPOINT_FOR_DOC, - output_type=BaseModelOutputWithCLSToken, - config_class=_CONFIG_FOR_DOC, - modality="vision", - expected_output=_EXPECTED_OUTPUT_SHAPE, - ) - def forward( - self, - pixel_values: Optional[torch.Tensor] = None, - output_hidden_states: Optional[bool] = None, - return_dict: Optional[bool] = None, - ) -> Union[Tuple, BaseModelOutputWithCLSToken]: - output_hidden_states = ( - output_hidden_states if output_hidden_states is not None else self.config.output_hidden_states - ) - return_dict = return_dict if return_dict is not None else self.config.use_return_dict - - if pixel_values is None: - raise ValueError("You have to specify pixel_values") - - encoder_outputs = self.encoder( - pixel_values, - output_hidden_states=output_hidden_states, - return_dict=return_dict, - ) - sequence_output = encoder_outputs[0] - - if not return_dict: - return (sequence_output,) + encoder_outputs[1:] - - return BaseModelOutputWithCLSToken( - last_hidden_state=sequence_output, - cls_token_value=encoder_outputs.cls_token_value, - hidden_states=encoder_outputs.hidden_states, - ) - - -@add_start_docstrings( - """ - Cvt Model transformer with an image classification head on top (a linear layer on top of the final hidden state of - the [CLS] token) e.g. for ImageNet. - """, - CVT_START_DOCSTRING, -) -class CvtForImageClassification(CvtPreTrainedModel): - def __init__(self, config): - super().__init__(config) - - self.num_labels = config.num_labels - self.cvt = CvtModel(config, add_pooling_layer=False) - self.layernorm = nn.LayerNorm(config.embed_dim[-1]) - # Classifier head - self.classifier = ( - nn.Linear(config.embed_dim[-1], config.num_labels) if config.num_labels > 0 else nn.Identity() - ) - - # Initialize weights and apply final processing - self.post_init() - - @add_start_docstrings_to_model_forward(CVT_INPUTS_DOCSTRING) - @add_code_sample_docstrings( - checkpoint=_IMAGE_CLASS_CHECKPOINT, - output_type=ImageClassifierOutputWithNoAttention, - config_class=_CONFIG_FOR_DOC, - expected_output=_IMAGE_CLASS_EXPECTED_OUTPUT, - ) - def forward( - self, - pixel_values: Optional[torch.Tensor] = None, - labels: Optional[torch.Tensor] = None, - output_hidden_states: Optional[bool] = None, - return_dict: Optional[bool] = None, - ) -> Union[Tuple, ImageClassifierOutputWithNoAttention]: - r""" - labels (`torch.LongTensor` of shape `(batch_size,)`, *optional*): - Labels for computing the image classification/regression loss. Indices should be in `[0, ..., - config.num_labels - 1]`. If `config.num_labels == 1` a regression loss is computed (Mean-Square loss), If - `config.num_labels > 1` a classification loss is computed (Cross-Entropy). 
- """ - return_dict = return_dict if return_dict is not None else self.config.use_return_dict - outputs = self.cvt( - pixel_values, - output_hidden_states=output_hidden_states, - return_dict=return_dict, - ) - - sequence_output = outputs[0] - cls_token = outputs[1] - if self.config.cls_token[-1]: - sequence_output = self.layernorm(cls_token) - else: - batch_size, num_channels, height, width = sequence_output.shape - # rearrange "b c h w -> b (h w) c" - sequence_output = sequence_output.view(batch_size, num_channels, height * width).permute(0, 2, 1) - sequence_output = self.layernorm(sequence_output) - - sequence_output_mean = sequence_output.mean(dim=1) - logits = self.classifier(sequence_output_mean) - - loss = None - if labels is not None: - if self.config.problem_type is None: - if self.config.num_labels == 1: - self.config.problem_type = "regression" - elif self.config.num_labels > 1 and (labels.dtype == torch.long or labels.dtype == torch.int): - self.config.problem_type = "single_label_classification" - else: - self.config.problem_type = "multi_label_classification" - - if self.config.problem_type == "regression": - loss_fct = MSELoss() - if self.config.num_labels == 1: - loss = loss_fct(logits.squeeze(), labels.squeeze()) - else: - loss = loss_fct(logits, labels) - elif self.config.problem_type == "single_label_classification": - loss_fct = CrossEntropyLoss() - loss = loss_fct(logits.view(-1, self.config.num_labels), labels.view(-1)) - elif self.config.problem_type == "multi_label_classification": - loss_fct = BCEWithLogitsLoss() - loss = loss_fct(logits, labels) - - if not return_dict: - output = (logits,) + outputs[2:] - return ((loss,) + output) if loss is not None else output - - return ImageClassifierOutputWithNoAttention(loss=loss, logits=logits, hidden_states=outputs.hidden_states) diff --git a/spaces/yl12053/so-vits-4.1-Grass-Wonder/vencoder/DPHubert.py b/spaces/yl12053/so-vits-4.1-Grass-Wonder/vencoder/DPHubert.py deleted file mode 100644 index 95b98b8b2e08e76139ce652bbbdb60dc42248a19..0000000000000000000000000000000000000000 --- a/spaces/yl12053/so-vits-4.1-Grass-Wonder/vencoder/DPHubert.py +++ /dev/null @@ -1,26 +0,0 @@ -from vencoder.encoder import SpeechEncoder -import torch -from vencoder.dphubert.model import wav2vec2_model - -class DPHubert(SpeechEncoder): - def __init__(self,vec_path = "pretrain/DPHuBERT-sp0.75.pth",device=None): - print("load model(s) from {}".format(vec_path)) - if device is None: - self.dev = torch.device("cuda" if torch.cuda.is_available() else "cpu") - else: - self.dev = torch.device(device) - ckpt = torch.load(vec_path) - self.hidden_dim = 768 - self.model = wav2vec2_model(**ckpt["config"]).to(self.dev) - self.model.load_state_dict(ckpt["state_dict"], strict=False) - - def encoder(self, wav): - feats = wav - if feats.dim() == 2: # double channels - feats = feats.mean(-1) - assert feats.dim() == 1, feats.dim() - feats = feats[None,:] - with torch.no_grad(): - with torch.inference_mode(): - units = self.model(feats)[0] - return units.transpose(1,2) diff --git a/spaces/yl12053/so-vits-4.1-Kitasan-Black/train.py b/spaces/yl12053/so-vits-4.1-Kitasan-Black/train.py deleted file mode 100644 index dba77bbb563d2ea12ced5424d4fe9088f9c84a42..0000000000000000000000000000000000000000 --- a/spaces/yl12053/so-vits-4.1-Kitasan-Black/train.py +++ /dev/null @@ -1,331 +0,0 @@ -import logging -import multiprocessing -import time - -logging.getLogger('matplotlib').setLevel(logging.WARNING) -logging.getLogger('numba').setLevel(logging.WARNING) - -import os -import 
json -import argparse -import itertools -import math -import torch -from torch import nn, optim -from torch.nn import functional as F -from torch.utils.data import DataLoader -from torch.utils.tensorboard import SummaryWriter -import torch.multiprocessing as mp -import torch.distributed as dist -from torch.nn.parallel import DistributedDataParallel as DDP -from torch.cuda.amp import autocast, GradScaler - -import modules.commons as commons -import utils -from data_utils import TextAudioSpeakerLoader, TextAudioCollate -from models import ( - SynthesizerTrn, - MultiPeriodDiscriminator, -) -from modules.losses import ( - kl_loss, - generator_loss, discriminator_loss, feature_loss -) - -from modules.mel_processing import mel_spectrogram_torch, spec_to_mel_torch - -torch.backends.cudnn.benchmark = True -global_step = 0 -start_time = time.time() - -# os.environ['TORCH_DISTRIBUTED_DEBUG'] = 'INFO' - - -def main(): - """Assume Single Node Multi GPUs Training Only""" - assert torch.cuda.is_available(), "CPU training is not allowed." - hps = utils.get_hparams() - - n_gpus = torch.cuda.device_count() - os.environ['MASTER_ADDR'] = 'localhost' - os.environ['MASTER_PORT'] = hps.train.port - - mp.spawn(run, nprocs=n_gpus, args=(n_gpus, hps,)) - - -def run(rank, n_gpus, hps): - global global_step - if rank == 0: - logger = utils.get_logger(hps.model_dir) - logger.info(hps) - utils.check_git_hash(hps.model_dir) - writer = SummaryWriter(log_dir=hps.model_dir) - writer_eval = SummaryWriter(log_dir=os.path.join(hps.model_dir, "eval")) - - # for pytorch on win, backend use gloo - dist.init_process_group(backend= 'gloo' if os.name == 'nt' else 'nccl', init_method='env://', world_size=n_gpus, rank=rank) - torch.manual_seed(hps.train.seed) - torch.cuda.set_device(rank) - collate_fn = TextAudioCollate() - all_in_mem = hps.train.all_in_mem # If you have enough memory, turn on this option to avoid disk IO and speed up training. 
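# (Annotation: when all_in_mem is set, the loader caches the whole dataset in RAM, so the
# code below forces num_workers=0 -- forked DataLoader workers would each duplicate that
# cache and multiply memory use without any disk I/O left to hide.)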
- train_dataset = TextAudioSpeakerLoader(hps.data.training_files, hps, all_in_mem=all_in_mem) - num_workers = 5 if multiprocessing.cpu_count() > 4 else multiprocessing.cpu_count() - if all_in_mem: - num_workers = 0 - train_loader = DataLoader(train_dataset, num_workers=num_workers, shuffle=False, pin_memory=True, - batch_size=hps.train.batch_size, collate_fn=collate_fn) - if rank == 0: - eval_dataset = TextAudioSpeakerLoader(hps.data.validation_files, hps, all_in_mem=all_in_mem,vol_aug = False) - eval_loader = DataLoader(eval_dataset, num_workers=1, shuffle=False, - batch_size=1, pin_memory=False, - drop_last=False, collate_fn=collate_fn) - - net_g = SynthesizerTrn( - hps.data.filter_length // 2 + 1, - hps.train.segment_size // hps.data.hop_length, - **hps.model).cuda(rank) - net_d = MultiPeriodDiscriminator(hps.model.use_spectral_norm).cuda(rank) - optim_g = torch.optim.AdamW( - net_g.parameters(), - hps.train.learning_rate, - betas=hps.train.betas, - eps=hps.train.eps) - optim_d = torch.optim.AdamW( - net_d.parameters(), - hps.train.learning_rate, - betas=hps.train.betas, - eps=hps.train.eps) - net_g = DDP(net_g, device_ids=[rank]) # , find_unused_parameters=True) - net_d = DDP(net_d, device_ids=[rank]) - - skip_optimizer = False - try: - _, _, _, epoch_str = utils.load_checkpoint(utils.latest_checkpoint_path(hps.model_dir, "G_*.pth"), net_g, - optim_g, skip_optimizer) - _, _, _, epoch_str = utils.load_checkpoint(utils.latest_checkpoint_path(hps.model_dir, "D_*.pth"), net_d, - optim_d, skip_optimizer) - epoch_str = max(epoch_str, 1) - name=utils.latest_checkpoint_path(hps.model_dir, "D_*.pth") - global_step=int(name[name.rfind("_")+1:name.rfind(".")])+1 - #global_step = (epoch_str - 1) * len(train_loader) - except: - print("load old checkpoint failed...") - epoch_str = 1 - global_step = 0 - if skip_optimizer: - epoch_str = 1 - global_step = 0 - - warmup_epoch = hps.train.warmup_epochs - scheduler_g = torch.optim.lr_scheduler.ExponentialLR(optim_g, gamma=hps.train.lr_decay, last_epoch=epoch_str - 2) - scheduler_d = torch.optim.lr_scheduler.ExponentialLR(optim_d, gamma=hps.train.lr_decay, last_epoch=epoch_str - 2) - - scaler = GradScaler(enabled=hps.train.fp16_run) - - for epoch in range(epoch_str, hps.train.epochs + 1): - # set up warm-up learning rate - if epoch <= warmup_epoch: - for param_group in optim_g.param_groups: - param_group['lr'] = hps.train.learning_rate / warmup_epoch * epoch - for param_group in optim_d.param_groups: - param_group['lr'] = hps.train.learning_rate / warmup_epoch * epoch - # training - if rank == 0: - train_and_evaluate(rank, epoch, hps, [net_g, net_d], [optim_g, optim_d], [scheduler_g, scheduler_d], scaler, - [train_loader, eval_loader], logger, [writer, writer_eval]) - else: - train_and_evaluate(rank, epoch, hps, [net_g, net_d], [optim_g, optim_d], [scheduler_g, scheduler_d], scaler, - [train_loader, None], None, None) - # update learning rate - scheduler_g.step() - scheduler_d.step() - - -def train_and_evaluate(rank, epoch, hps, nets, optims, schedulers, scaler, loaders, logger, writers): - net_g, net_d = nets - optim_g, optim_d = optims - scheduler_g, scheduler_d = schedulers - train_loader, eval_loader = loaders - if writers is not None: - writer, writer_eval = writers - - # train_loader.batch_sampler.set_epoch(epoch) - global global_step - - net_g.train() - net_d.train() - for batch_idx, items in enumerate(train_loader): - c, f0, spec, y, spk, lengths, uv,volume = items - g = spk.cuda(rank, non_blocking=True) - spec, y = spec.cuda(rank, 
non_blocking=True), y.cuda(rank, non_blocking=True) - c = c.cuda(rank, non_blocking=True) - f0 = f0.cuda(rank, non_blocking=True) - uv = uv.cuda(rank, non_blocking=True) - lengths = lengths.cuda(rank, non_blocking=True) - mel = spec_to_mel_torch( - spec, - hps.data.filter_length, - hps.data.n_mel_channels, - hps.data.sampling_rate, - hps.data.mel_fmin, - hps.data.mel_fmax) - - with autocast(enabled=hps.train.fp16_run): - y_hat, ids_slice, z_mask, \ - (z, z_p, m_p, logs_p, m_q, logs_q), pred_lf0, norm_lf0, lf0 = net_g(c, f0, uv, spec, g=g, c_lengths=lengths, - spec_lengths=lengths,vol = volume) - - y_mel = commons.slice_segments(mel, ids_slice, hps.train.segment_size // hps.data.hop_length) - y_hat_mel = mel_spectrogram_torch( - y_hat.squeeze(1), - hps.data.filter_length, - hps.data.n_mel_channels, - hps.data.sampling_rate, - hps.data.hop_length, - hps.data.win_length, - hps.data.mel_fmin, - hps.data.mel_fmax - ) - y = commons.slice_segments(y, ids_slice * hps.data.hop_length, hps.train.segment_size) # slice - - # Discriminator - y_d_hat_r, y_d_hat_g, _, _ = net_d(y, y_hat.detach()) - - with autocast(enabled=False): - loss_disc, losses_disc_r, losses_disc_g = discriminator_loss(y_d_hat_r, y_d_hat_g) - loss_disc_all = loss_disc - - optim_d.zero_grad() - scaler.scale(loss_disc_all).backward() - scaler.unscale_(optim_d) - grad_norm_d = commons.clip_grad_value_(net_d.parameters(), None) - scaler.step(optim_d) - - with autocast(enabled=hps.train.fp16_run): - # Generator - y_d_hat_r, y_d_hat_g, fmap_r, fmap_g = net_d(y, y_hat) - with autocast(enabled=False): - loss_mel = F.l1_loss(y_mel, y_hat_mel) * hps.train.c_mel - loss_kl = kl_loss(z_p, logs_q, m_p, logs_p, z_mask) * hps.train.c_kl - loss_fm = feature_loss(fmap_r, fmap_g) - loss_gen, losses_gen = generator_loss(y_d_hat_g) - loss_lf0 = F.mse_loss(pred_lf0, lf0) - loss_gen_all = loss_gen + loss_fm + loss_mel + loss_kl + loss_lf0 - optim_g.zero_grad() - scaler.scale(loss_gen_all).backward() - scaler.unscale_(optim_g) - grad_norm_g = commons.clip_grad_value_(net_g.parameters(), None) - scaler.step(optim_g) - scaler.update() - - if rank == 0: - if global_step % hps.train.log_interval == 0: - lr = optim_g.param_groups[0]['lr'] - losses = [loss_disc, loss_gen, loss_fm, loss_mel, loss_kl] - reference_loss=0 - for i in losses: - reference_loss += i - logger.info('Train Epoch: {} [{:.0f}%]'.format( - epoch, - 100. 
* batch_idx / len(train_loader))) - logger.info(f"Losses: {[x.item() for x in losses]}, step: {global_step}, lr: {lr}, reference_loss: {reference_loss}") - - scalar_dict = {"loss/g/total": loss_gen_all, "loss/d/total": loss_disc_all, "learning_rate": lr, - "grad_norm_d": grad_norm_d, "grad_norm_g": grad_norm_g} - scalar_dict.update({"loss/g/fm": loss_fm, "loss/g/mel": loss_mel, "loss/g/kl": loss_kl, - "loss/g/lf0": loss_lf0}) - - # scalar_dict.update({"loss/g/{}".format(i): v for i, v in enumerate(losses_gen)}) - # scalar_dict.update({"loss/d_r/{}".format(i): v for i, v in enumerate(losses_disc_r)}) - # scalar_dict.update({"loss/d_g/{}".format(i): v for i, v in enumerate(losses_disc_g)}) - image_dict = { - "slice/mel_org": utils.plot_spectrogram_to_numpy(y_mel[0].data.cpu().numpy()), - "slice/mel_gen": utils.plot_spectrogram_to_numpy(y_hat_mel[0].data.cpu().numpy()), - "all/mel": utils.plot_spectrogram_to_numpy(mel[0].data.cpu().numpy()), - "all/lf0": utils.plot_data_to_numpy(lf0[0, 0, :].cpu().numpy(), - pred_lf0[0, 0, :].detach().cpu().numpy()), - "all/norm_lf0": utils.plot_data_to_numpy(lf0[0, 0, :].cpu().numpy(), - norm_lf0[0, 0, :].detach().cpu().numpy()) - } - - utils.summarize( - writer=writer, - global_step=global_step, - images=image_dict, - scalars=scalar_dict - ) - - if global_step % hps.train.eval_interval == 0: - evaluate(hps, net_g, eval_loader, writer_eval) - utils.save_checkpoint(net_g, optim_g, hps.train.learning_rate, epoch, - os.path.join(hps.model_dir, "G_{}.pth".format(global_step))) - utils.save_checkpoint(net_d, optim_d, hps.train.learning_rate, epoch, - os.path.join(hps.model_dir, "D_{}.pth".format(global_step))) - keep_ckpts = getattr(hps.train, 'keep_ckpts', 0) - if keep_ckpts > 0: - utils.clean_checkpoints(path_to_models=hps.model_dir, n_ckpts_to_keep=keep_ckpts, sort_by_time=True) - - global_step += 1 - - if rank == 0: - global start_time - now = time.time() - durtaion = format(now - start_time, '.2f') - logger.info(f'====> Epoch: {epoch}, cost {durtaion} s') - start_time = now - - -def evaluate(hps, generator, eval_loader, writer_eval): - generator.eval() - image_dict = {} - audio_dict = {} - with torch.no_grad(): - for batch_idx, items in enumerate(eval_loader): - c, f0, spec, y, spk, _, uv,volume = items - g = spk[:1].cuda(0) - spec, y = spec[:1].cuda(0), y[:1].cuda(0) - c = c[:1].cuda(0) - f0 = f0[:1].cuda(0) - uv= uv[:1].cuda(0) - if volume!=None: - volume = volume[:1].cuda(0) - mel = spec_to_mel_torch( - spec, - hps.data.filter_length, - hps.data.n_mel_channels, - hps.data.sampling_rate, - hps.data.mel_fmin, - hps.data.mel_fmax) - y_hat,_ = generator.module.infer(c, f0, uv, g=g,vol = volume) - - y_hat_mel = mel_spectrogram_torch( - y_hat.squeeze(1).float(), - hps.data.filter_length, - hps.data.n_mel_channels, - hps.data.sampling_rate, - hps.data.hop_length, - hps.data.win_length, - hps.data.mel_fmin, - hps.data.mel_fmax - ) - - audio_dict.update({ - f"gen/audio_{batch_idx}": y_hat[0], - f"gt/audio_{batch_idx}": y[0] - }) - image_dict.update({ - f"gen/mel": utils.plot_spectrogram_to_numpy(y_hat_mel[0].cpu().numpy()), - "gt/mel": utils.plot_spectrogram_to_numpy(mel[0].cpu().numpy()) - }) - utils.summarize( - writer=writer_eval, - global_step=global_step, - images=image_dict, - audios=audio_dict, - audio_sampling_rate=hps.data.sampling_rate - ) - generator.train() - - -if __name__ == "__main__": - main() diff --git a/spaces/ynhe/AskAnything/models/grit_src/third_party/CenterNet2/detectron2/evaluation/__init__.py 
b/spaces/ynhe/AskAnything/models/grit_src/third_party/CenterNet2/detectron2/evaluation/__init__.py deleted file mode 100644 index d96609e8f2261a6800fe85fcf3e1eaeaa44455c6..0000000000000000000000000000000000000000 --- a/spaces/ynhe/AskAnything/models/grit_src/third_party/CenterNet2/detectron2/evaluation/__init__.py +++ /dev/null @@ -1,12 +0,0 @@ -# Copyright (c) Facebook, Inc. and its affiliates. -from .cityscapes_evaluation import CityscapesInstanceEvaluator, CityscapesSemSegEvaluator -from .coco_evaluation import COCOEvaluator -from .rotated_coco_evaluation import RotatedCOCOEvaluator -from .evaluator import DatasetEvaluator, DatasetEvaluators, inference_context, inference_on_dataset -from .lvis_evaluation import LVISEvaluator -from .panoptic_evaluation import COCOPanopticEvaluator -from .pascal_voc_evaluation import PascalVOCDetectionEvaluator -from .sem_seg_evaluation import SemSegEvaluator -from .testing import print_csv_format, verify_results - -__all__ = [k for k in globals().keys() if not k.startswith("_")] diff --git a/spaces/ynhe/AskAnything/models/grit_src/third_party/CenterNet2/detectron2/utils/collect_env.py b/spaces/ynhe/AskAnything/models/grit_src/third_party/CenterNet2/detectron2/utils/collect_env.py deleted file mode 100644 index 807b6c7e6245d0a21221b1b8d29b841ec8251761..0000000000000000000000000000000000000000 --- a/spaces/ynhe/AskAnything/models/grit_src/third_party/CenterNet2/detectron2/utils/collect_env.py +++ /dev/null @@ -1,242 +0,0 @@ -# Copyright (c) Facebook, Inc. and its affiliates. -import importlib -import numpy as np -import os -import re -import subprocess -import sys -from collections import defaultdict -import PIL -import torch -import torchvision -from tabulate import tabulate - -__all__ = ["collect_env_info"] - - -def collect_torch_env(): - try: - import torch.__config__ - - return torch.__config__.show() - except ImportError: - # compatible with older versions of pytorch - from torch.utils.collect_env import get_pretty_env_info - - return get_pretty_env_info() - - -def get_env_module(): - var_name = "DETECTRON2_ENV_MODULE" - return var_name, os.environ.get(var_name, "") - - -def detect_compute_compatibility(CUDA_HOME, so_file): - try: - cuobjdump = os.path.join(CUDA_HOME, "bin", "cuobjdump") - if os.path.isfile(cuobjdump): - output = subprocess.check_output( - "'{}' --list-elf '{}'".format(cuobjdump, so_file), shell=True - ) - output = output.decode("utf-8").strip().split("\n") - arch = [] - for line in output: - line = re.findall(r"\.sm_([0-9]*)\.", line)[0] - arch.append(".".join(line)) - arch = sorted(set(arch)) - return ", ".join(arch) - else: - return so_file + "; cannot find cuobjdump" - except Exception: - # unhandled failure - return so_file - - -def collect_env_info(): - has_gpu = torch.cuda.is_available() # true for both CUDA & ROCM - torch_version = torch.__version__ - - # NOTE that CUDA_HOME/ROCM_HOME could be None even when CUDA runtime libs are functional - from torch.utils.cpp_extension import CUDA_HOME, ROCM_HOME - - has_rocm = False - if (getattr(torch.version, "hip", None) is not None) and (ROCM_HOME is not None): - has_rocm = True - has_cuda = has_gpu and (not has_rocm) - - data = [] - data.append(("sys.platform", sys.platform)) # check-template.yml depends on it - data.append(("Python", sys.version.replace("\n", ""))) - data.append(("numpy", np.__version__)) - - try: - import detectron2 # noqa - - data.append( - ("detectron2", detectron2.__version__ + " @" + os.path.dirname(detectron2.__file__)) - ) - except ImportError: - 
data.append(("detectron2", "failed to import")) - except AttributeError: - data.append(("detectron2", "imported a wrong installation")) - - try: - import detectron2._C as _C - except ImportError as e: - data.append(("detectron2._C", f"not built correctly: {e}")) - - # print system compilers when extension fails to build - if sys.platform != "win32": # don't know what to do for windows - try: - # this is how torch/utils/cpp_extensions.py choose compiler - cxx = os.environ.get("CXX", "c++") - cxx = subprocess.check_output("'{}' --version".format(cxx), shell=True) - cxx = cxx.decode("utf-8").strip().split("\n")[0] - except subprocess.SubprocessError: - cxx = "Not found" - data.append(("Compiler ($CXX)", cxx)) - - if has_cuda and CUDA_HOME is not None: - try: - nvcc = os.path.join(CUDA_HOME, "bin", "nvcc") - nvcc = subprocess.check_output("'{}' -V".format(nvcc), shell=True) - nvcc = nvcc.decode("utf-8").strip().split("\n")[-1] - except subprocess.SubprocessError: - nvcc = "Not found" - data.append(("CUDA compiler", nvcc)) - if has_cuda and sys.platform != "win32": - try: - so_file = importlib.util.find_spec("detectron2._C").origin - except (ImportError, AttributeError): - pass - else: - data.append( - ("detectron2 arch flags", detect_compute_compatibility(CUDA_HOME, so_file)) - ) - else: - # print compilers that are used to build extension - data.append(("Compiler", _C.get_compiler_version())) - data.append(("CUDA compiler", _C.get_cuda_version())) # cuda or hip - if has_cuda and getattr(_C, "has_cuda", lambda: True)(): - data.append( - ("detectron2 arch flags", detect_compute_compatibility(CUDA_HOME, _C.__file__)) - ) - - data.append(get_env_module()) - data.append(("PyTorch", torch_version + " @" + os.path.dirname(torch.__file__))) - data.append(("PyTorch debug build", torch.version.debug)) - - if not has_gpu: - has_gpu_text = "No: torch.cuda.is_available() == False" - else: - has_gpu_text = "Yes" - data.append(("GPU available", has_gpu_text)) - if has_gpu: - devices = defaultdict(list) - for k in range(torch.cuda.device_count()): - cap = ".".join((str(x) for x in torch.cuda.get_device_capability(k))) - name = torch.cuda.get_device_name(k) + f" (arch={cap})" - devices[name].append(str(k)) - for name, devids in devices.items(): - data.append(("GPU " + ",".join(devids), name)) - - if has_rocm: - msg = " - invalid!" if not (ROCM_HOME and os.path.isdir(ROCM_HOME)) else "" - data.append(("ROCM_HOME", str(ROCM_HOME) + msg)) - else: - try: - from torch.utils.collect_env import get_nvidia_driver_version, run as _run - - data.append(("Driver version", get_nvidia_driver_version(_run))) - except Exception: - pass - msg = " - invalid!" 
if not (CUDA_HOME and os.path.isdir(CUDA_HOME)) else "" - data.append(("CUDA_HOME", str(CUDA_HOME) + msg)) - - cuda_arch_list = os.environ.get("TORCH_CUDA_ARCH_LIST", None) - if cuda_arch_list: - data.append(("TORCH_CUDA_ARCH_LIST", cuda_arch_list)) - data.append(("Pillow", PIL.__version__)) - - try: - data.append( - ( - "torchvision", - str(torchvision.__version__) + " @" + os.path.dirname(torchvision.__file__), - ) - ) - if has_cuda: - try: - torchvision_C = importlib.util.find_spec("torchvision._C").origin - msg = detect_compute_compatibility(CUDA_HOME, torchvision_C) - data.append(("torchvision arch flags", msg)) - except (ImportError, AttributeError): - data.append(("torchvision._C", "Not found")) - except AttributeError: - data.append(("torchvision", "unknown")) - - try: - import fvcore - - data.append(("fvcore", fvcore.__version__)) - except (ImportError, AttributeError): - pass - - try: - import iopath - - data.append(("iopath", iopath.__version__)) - except (ImportError, AttributeError): - pass - - try: - import cv2 - - data.append(("cv2", cv2.__version__)) - except (ImportError, AttributeError): - data.append(("cv2", "Not found")) - env_str = tabulate(data) + "\n" - env_str += collect_torch_env() - return env_str - - -def test_nccl_ops(): - num_gpu = torch.cuda.device_count() - if os.access("/tmp", os.W_OK): - import torch.multiprocessing as mp - - dist_url = "file:///tmp/nccl_tmp_file" - print("Testing NCCL connectivity ... this should not hang.") - mp.spawn(_test_nccl_worker, nprocs=num_gpu, args=(num_gpu, dist_url), daemon=False) - print("NCCL succeeded.") - - -def _test_nccl_worker(rank, num_gpu, dist_url): - import torch.distributed as dist - - dist.init_process_group(backend="NCCL", init_method=dist_url, rank=rank, world_size=num_gpu) - dist.barrier(device_ids=[rank]) - - -if __name__ == "__main__": - try: - from detectron2.utils.collect_env import collect_env_info as f - - print(f()) - except ImportError: - print(collect_env_info()) - - if torch.cuda.is_available(): - num_gpu = torch.cuda.device_count() - for k in range(num_gpu): - device = f"cuda:{k}" - try: - x = torch.tensor([1, 2.0], dtype=torch.float32) - x = x.to(device) - except Exception as e: - print( - f"Unable to copy tensor to device={device}: {e}. " - "Your CUDA environment is broken." - ) - if num_gpu > 1: - test_nccl_ops() diff --git a/spaces/yufiofficial/MusicGenQ/audiocraft/models/__init__.py b/spaces/yufiofficial/MusicGenQ/audiocraft/models/__init__.py deleted file mode 100644 index 92c7a48a200eba455044cd66e0d2c1efe6494f5c..0000000000000000000000000000000000000000 --- a/spaces/yufiofficial/MusicGenQ/audiocraft/models/__init__.py +++ /dev/null @@ -1,10 +0,0 @@ -# Copyright (c) Meta Platforms, Inc. and affiliates. -# All rights reserved. -# -# This source code is licensed under the license found in the -# LICENSE file in the root directory of this source tree. 
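# (Annotation, a hedged usage sketch for the re-exports below; the method names follow the
# upstream audiocraft API and 'small' is a typical checkpoint id, assumed here for
# illustration only:)
#
#   from audiocraft.models import MusicGen
#   model = MusicGen.get_pretrained('small')
#   model.set_generation_params(duration=8)     # seconds of audio to generate
#   wav = model.generate(['lo-fi chill beat'])  # -> tensor (batch, channels, samples)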
- -# flake8: noqa -from .musicgen import MusicGen -from .lm import LMModel -from .encodec import CompressionModel, EncodecModel diff --git a/spaces/zadkiel04/rvc-yoshino/README.md b/spaces/zadkiel04/rvc-yoshino/README.md deleted file mode 100644 index f077cd85340c26ebfcb0857816d0f1f511408242..0000000000000000000000000000000000000000 --- a/spaces/zadkiel04/rvc-yoshino/README.md +++ /dev/null @@ -1,14 +0,0 @@ ---- -title: Rvc Models -emoji: 🎤 -colorFrom: red -colorTo: blue -sdk: gradio -sdk_version: 3.27.0 -app_file: app.py -pinned: false -license: mit -duplicated_from: ardha27/rvc-models ---- - -Check out the configuration reference at https://huggingface.co/docs/hub/spaces-config-reference diff --git a/spaces/zeno-ml/langchain-qa/Dockerfile b/spaces/zeno-ml/langchain-qa/Dockerfile deleted file mode 100644 index b5a55ae3b5928ddc6dca732a5adb4b758c5e1512..0000000000000000000000000000000000000000 --- a/spaces/zeno-ml/langchain-qa/Dockerfile +++ /dev/null @@ -1,22 +0,0 @@ -# read the doc: https://huggingface.co/docs/hub/spaces-sdks-docker -# you will also find guides on how best to write your Dockerfile - -FROM python:3.8 - -RUN useradd -m -u 1000 user -USER user -# Set home to the user's home directory -ENV HOME=/home/user \ - PATH=/home/user/.local/bin:$PATH -WORKDIR $HOME/app -# Copy the current directory contents into the container at $HOME/app setting the owner to the user -COPY --chown=user . $HOME/app -ADD --chown=user ./.zeno_cache $HOME/app/.zeno_cache -RUN chown user:user -R $HOME/app - -COPY ./requirements.txt /code/requirements.txt - -RUN pip install --no-cache-dir --upgrade -r /code/requirements.txt - - -CMD ["zeno", "config.toml"] \ No newline at end of file diff --git a/spaces/zeno-ml/translation-report/gpt-MT/evaluation/testset/wmt-testset/csen/test.cs-en.cs b/spaces/zeno-ml/translation-report/gpt-MT/evaluation/testset/wmt-testset/csen/test.cs-en.cs deleted file mode 100644 index 5f6e73a73c87ac479313479da200ba52f6cade56..0000000000000000000000000000000000000000 --- a/spaces/zeno-ml/translation-report/gpt-MT/evaluation/testset/wmt-testset/csen/test.cs-en.cs +++ /dev/null @@ -1,1448 +0,0 @@ -Blíží se velký návrat českého smolaře. -Pavel Francouz byl povolán do NHL -Český hokejový brankář Pavel Francouz, který si v posledních měsících prochází těžkým obdobím své kariéry, se vrací zpátky do NHL. -Jednatřicetiletý plzeňský rodák bude na střídačce a brzy by se tak mohl dostat i do branky. -Bývalý brankář Litvínova, Plzně či ruského Čeljabinsku se zranil v přípravě na NHL v říjnu letošního roku. -Přesně v polovině utkání s Vegas střídal a od této doby se na kluzištích NHL dosud nepředstavil. -K incidentu došlo v momentě, kdy se zkušený brankář přesouval od jedné tyčky ke druhé. -„Pavel Francouz bude mimo přibližně tři až čtyři týdny se zraněním ve spodní části těla,“ oznámil celek z Denveru na začátku října. -Jeho návrat do branky v NHL se nakonec prodloužil na více jak dva měsíce. -V neděli dopoledne amerického času byl povolán z farmy, kde odchytal čtyři duely a ukázal svou starou formu. -V AHL si totiž připsal 94,5 % zákroků. -Odchovanec plzeňského hokeje se chce konečně prosadit a potvrdit, že do nejlepší ligy světa patří. -V posledním ročníku měl problémy s kyčlemi a ve zkráceném pandemickém ročníku neodchytal jediné utkání. -V NHL „Francík“ odchytal 36 utkání, jeho úspěšnost zákroků je na čísle 92,3 %. 
-Charles na vánočním přání nasazuje Camille roušku, William a Kate pózují v Jordánsku -Britský princ William a jeho manželka Kate jako letošní vánoční přání vybrali rodinnou fotografii pořízenou během cesty do Jordánska. -Své přání zveřejnil také princ Charles, který použil fotografii, na níž své choti Camille na dostizích pomáhá nasadit roušku. -Na svých internetových stránkách o tom informovala britská zpravodajská stanice BBC. -Přání posílají přátelům, spolupracovníkům a nadacím, se kterými spolupracují. -Fotografie vznikla kdesi v pouštní krajině. -Vévodkyně z Cambridge je oblečená v dlouhých letních šatech v khaki barvě a šaty má i princezna Charlotte. -Princ William, vévoda z Cambridge, má na sobě stejně jako princové George a Louis šortky a tričko s límečkem. -Kdo fotografii pořídil, William a Kate neuvedli, není ani jasné, kdy přesně vznikla. -Loni královská rodina pro fotografii určenou jako vánoční přání pózovala na balíku slámy před hromadou dřeva ve svém venkovském sídle v hrabství Norfolk. -Snímek, který poslouží jako vánoční přání, zveřejnili také následník trůnu princ Charles s manželkou Camillou. -Fotograf Sam Hussein je zachytil v červnu na dostizích v Ascotu. -Charles, který má na hlavě cylindr a na obličeji roušku, pomáhá Camille nasadit její roušku barevně sladěnou se světlými šaty. -Slavia podle efotbal.cz slíbila Berbrovi milion za titul, Tvrdík to popřel. -Praha – Kriminalisté v současné korupční kauze údajně pracovali s tím, že obviněný bývalý místopředseda Fotbalové asociace ČR Roman Berbr měl mít od pražské Slavie slíbený milion korun za ligový titul v sezoně 2018/19. -Informoval o tom server efotbal.cz s tím, že se dostal k části policejních spisů. -Předseda představenstva vršovického klubu Jaroslav Tvrdík uvedl, že se červenobílí žádného korupčního jednání nedopustili. -Server zveřejnil přepis policejních odposlechů, v nichž ze Slavie figuruje hlavně její někdejší sportovní ředitel Jan Nezmar, který vloni v létě v mistrovském klubu skončil. -Někdejší vlivný funkcionář červenobílých byl podle spisu v častém kontaktu jak s Berbrem, tak s bývalým sportovním ředitelem tehdy druholigového Vyšehradu Romanem Rogozem, jenž je v kauze rovněž mezi obviněnými. -Kriminalisté údajně pracovali s informací, že Slavia Berbrovi slíbila finanční odměnu za to, když získá titul. -2019 vyhrál tým SK Slavia Praha titul v první lize. -Policejní orgán disponoval s poznatkem, že za výhru v lize má mít Roman (Berbr) od funkcionářů SK Slavia Praha slíbený milionový úplatek, citoval server ze spisu. -O den později se podle kriminalistů sešel Berbr nejen s Nezmarem, ale také s předsedou představenstva Slavie Tvrdíkem. -Ze spisu už podle serveru dále nevyplývá, zda se policie touto informací stále zabývá. -Tvrdík jakékoli korupční jednání odmítl. -V letech 2015 až 2017 jsme se aktivně snažili o změnu poměrů v českém fotbale a nabídli jsme oponentní alternativu jeho rozvoje. -Nikdy jsme se nedopustili nezákonného jednání, neusilovali o ovlivnění rozhodčích v rozporu s pravidly fair play a nikomu jsme neposkytli jakékoliv finanční plnění v této souvislosti," uvedl Tvrdík pro Seznam Zprávy. -V odposleších mimo jiné Nezmar nevybíravě uráží některé bývalé hráče Slavie tmavé pleti a také pomlouvá svého někdejšího šéfa Tvrdíka. -Kauzu údajného ovlivňování zápasů prostřednictvím rozhodčích rozpoutal loni v polovině října policejní zásah na několika místech včetně pražského sídla FAČR. -Nejvýše postaveným v aféře je Berbr, který už nefiguruje v žádné z fotbalových funkcí. 
-V polovině ledna byl stejně jako někdejší sportovní ředitel Vyšehradu Rogoz propuštěn z vazby. -Vrtulníky, tanky a bvp jsou larping studené války. -Děla budou nová, ale v zásadě horšího typu (dělostřelci musí z pancéřované kabiny a nosit nábojky ručně bez krytí). -Auta - Toyoty hi-lux - jsou nová a dobrá -Náklaďáky a různá pancéřová vozidla - na slušné úrovni, navíc už se podařilo zbavit se Pragy V3S i u specializovaných jednotek. -Letadla: bojová- slušná, ale na konci pronájmu, dopravní - moc malá s krátkým doletem, ale moderní. -Drony - málo a jen malé typy bez bojového potenciálu -Rakety - nemáme vůbec (ale vyrábíme a vyvážení do zahraničí) -PVO: střední - studenoválečné, zastaralé; krátkého dosahu - dobré, moderní, relativně dobrý počet. -Mam takovou story. -Honitbu mam hned vedle města. -Z řeky lezli nutrie a dělali škody na plodinách, tak jsem tam byl sednou. -Když jsem přicházel, tak jsem viděl, že na druhem břehu řeky je rybář. -Nechtěl jsem dělat bordel, tak jsem si v klidu sedl a týpek si mě asi nevšiml. -Doufal jsem, že odejde, než něco vyleze, ale samozřejmě za chvílu šla liška. -Nechal jsem ji přijít na 40 metrů, než jsem se rozhodl střílet. -Chudák rybář se málem posral, mával čelovkou na všechny strany, tak jsem na něj zavolal, že to bylo na lišku. -Než jsem slezl z posedu byl fuč. -Tzn. I louka muže bejt pruser. -Na druhou stranu není to válka, to by musela bejt souhra spousty náhod aby se něco stalo, pravděpodobně bys byl vidět v termovizy, kterou ma dnes skoro každej. -Takže na viditelne místo, drahe věci si dat k noham do spacaku a měl bys byt v pohodě. -Vojtěch versus Hamáček. -Vnitro sehnalo respirátory podstatně levněji než ministerstvo zdravotnictví -Stát, zodpovědný za nákup a distribuci roušek, masek a respirátorů pro profese nejblíže koronaviru, vydal v minulých týdnech miliardy korun na jejich pořízení. -Server iRozhlas porovnal nákupy jednotlivých ministerstev a zjistil, že během jediného dne se částky na respirátor lišily až o stovky korun. -Proč se ceny tak dramaticky hýbaly? -Které úřady se chovaly hospodárně? -A proč jiné nakupovaly dráž? -Lenka Kabrhelová mluví s redaktorkou serveru iRozhlas Dominikou Kubištovou. -Vojáků a armády si vážím (asi nejsem zasažen vzpomínkami na ČSLA, které prodělaly starší generace), ale ČR nedokáže z povinné vojny benefitovat. -My ani nemáme velké sklady techniky, které by se vycvičení mohli chopit, nemáme vlastně moderní techniku ani pro stávající profesionály, navíc moderní technika je pořád složitější, takže schopnosti záložáků budou rapidně ztrácet v čase. -K tomu moderní konvenční konflikty, kam je kdo může nasadit, se budou odehrávat velice rychle, nebude čas někoho znovu cvičit. -A nakonec, záložáci/teritoriální obrana mají velký význam pro země, jako je Ukrajina, kde jde vést masovou gerilu a je to i vyslovená nutnost k odstrašení nepřítele. -Na území ČR se povede boj jedině v konfliktu takového rozsahu a intenzity, kde už gerila bude irelevantní, a nemáme k tomu ani vhodnou geografii. -hlavně nemáme individuální skill. -To ani není to nejhorší. -Nejhorší je, že půlka z nich hraje, jako kdyby ho měla. -To pak nastávaj takový situace, že koukáš jak frajer, kterej 2 minuty zpátky netrefil prázdnou bránu, najíždí do útoku sám mezi 2 nebo dokonce 3 švýcary a řikáš si "tvl a co si asi myslíš že se teď stane?". -No samozřejmě že ho voberou jak průměrnýho daňovýho poplatníka. 
-The situation with this "deke around the defender" skill is so dire that I caught myself being genuinely surprised when I saw our forward actually get past one opposing player.
-First swallows
-The covid pandemic is slowing down, but experts expect no fundamental turnaround in the coming weeks.
-According to statistical models, the strain on hospitals will persist for some time yet, and, as is well known, a new unknown has entered the pandemic equation: the Omicron variant, which very likely spreads faster than the currently dominant Delta.
-At the same time, it cannot yet be said with absolute certainty whether it can cause a more severe course, or how well vaccination or post-infection immunity from a previous bout protects against it.
-This week, however, something unexpected also entered the covid equation on the plus side: the possibility of treatment.
-A new drug has arrived in Czechia, the antiviral molnupiravir, which cuts the risk of a severe course and the associated hospitalisation by a third and can be taken at home.
-It should soon be joined by Pfizer's paxlovid, which reports a success rate of as much as 85 percent in results so far.
-Beyond the hope of expanding the toolkit against the coronavirus pandemic, however, the first deliveries of molnupiravir to Czechia also highlighted the question of how prepared the local administration is for incoming drugs.
-As noted, Merck's molnupiravir will reach Czech patients first.
-The company crossed the finish line first partly because development of the drug began long before the current pandemic, aimed at finding a treatment for a viral disease of horses on the South American continent.
-What these surveys often overlook is that people in the West (Germany, Sweden, etc.)
-are generally less open and don't openly share their opinions.
-Eastern Europeans, by contrast, and especially we Czechs, are used to speaking our minds.
-Say you run a survey asking people whether they like Muslims.
-In Czechia most will tell you flatly that they don't.
-In the West they'll tell you how much they love migration, how everyone should help, and what racist brutes we Czechs are.
-And then they go and vote for parties like the AfD.
-They're afraid of cancel culture; saying that in public means losing your job and a media lynching.
-Then the surveys look nice: West good, East bad.
-But do they really find out what people think?
-Just look at France: according to the polls, Le Pen and Zemmour are both above 20 percent.
-We even know that in Czechia there are three such facilities, completely identical.
-Identical because of unit rotation: so soldiers wouldn't have to relearn where everything was, all the facilities are exactly alike.
-One of them is now the Atom Museum Brdy; the other two are abandoned.
-The funny thing is that the USSR didn't want nuclear warheads on its own territory, whether for safety or for speed of deployment given the more westerly position.
-Those underground bunkers (each Javor site has two) stored only the warheads, not whole missiles as people say.
-If the weapon was to be used, a special unit would arrive, collect the warhead and mount it on some delivery system.
-Apart from the museum Javor, the remaining ones are in a desolate state.
-In upper primary school we had a Gypsy classmate; we were with him for four years.
-He was pretty alright, made good jokes, often acted out a bit too much, but he was sort of our mascot.
-Everyone talked to him; he often came asking to have something explained further; he was rarely absent, attended regularly, did sports with us, didn't steal snacks or phones, and came dressed cleanly.
-He even went on school trips, got up to various antics, but he was alright; as far as I know, never any real trouble.
-In eighth or ninth grade, siblings, Gypsies, joined the same year, a different class.
-Shortly afterwards they beat up a teacher; the police were often dealing with something there; they threatened and endangered other students.
-Personally I occasionally sell things through classified ads (old stuff, things I don't need, etc.) and I've often sold to Gypsies; they always had the money, didn't try to talk me down dishonestly, and communication was fine.
-I even sold a car that way; the guy called a month later to say he'd already re-registered it.
-I say about myself that I'm not a racist; I don't care whether someone is white, black, yellow, blue or anything else, as long as they behave the way decent society expects (works, functions, doesn't beat women, just normal behaviour).
-But when someone comes with their hand out, runs down flats and houses, lives surrounded by mess and causes trouble, then their colour doesn't matter; they'll bother me.
-I have no problem with African migrants if they join in here, start businesses, work, learn the language (not necessarily ours, at least English) and respect our culture.
-Whether they believe in Allah is all the same to me; as long as they respect my traditions and culture, I'll respect theirs too.
-A young woman died in a car crash in the Prachatice region
-"The young woman suffered severe multiple injuries and, despite resuscitation efforts, sadly succumbed to them at the scene," Zuzana Fajtlová, spokeswoman for the South Bohemian medical rescue service, told Právo.
-The crash was probably caused by the driver who was giving the girl a lift.
-The eighteen-year-old driver of a Peugeot was probably travelling from the village of Žíchovec towards Bavorov and, for reasons as yet undetermined, crossed into the oncoming lane in a bend.
-"After colliding with a Škoda Octavia, the Peugeot ended up on its roof off the road," said South Bohemian police spokeswoman Štěpánka Schwarzová, describing the accident.
-The young Peugeot driver was very seriously injured in the crash.
-He suffered multiple injuries and remained trapped in the car.
-"After being freed he had to be given acute pre-hospital care, and in a stable condition he was airlifted to the hospital in České Budějovice," rescuer Fajtlová stated.
-She added that the man from the other car suffered lighter chest injuries and was taken to hospital.
-New rules apply to parcels from outside the EU; customers often fail to supply the required details
-Lukáš Neuheisl orders from abroad several times a month.
-He mainly buys collector cards.
-"It's usually tens of dollars, say from ten dollars up, where importing still pays off, especially from Japan, where postage is often free," the collector explains.
-Since October, ordering small consignments has become slightly more expensive for him: he now has to add VAT and hand the postal service the details needed for customs clearance.
-He receives an e-mail saying that customs are expecting the arrival of the parcel.
-Then it's enough to fill in the details of the consignment, and if the seller did not include VAT at the point of sale, the customs office assesses it on the total price of the consignment plus the shipping.
-If the addressee does not handle the customs clearance themselves, the carrier's fee must be added to the total as well.
-According to Lukáš Neuheisl, though, the whole process is not complicated.
-"I tick one or two checkboxes, attach two files and I'm done.
-For me it's usually a matter of five minutes," says Neuheisl.
-Not all consignments get delivered smoothly, however.
-Because of the new customs rules, the daily number of incoming foreign consignments at the international post office in Prague has dropped from 60,000 to 15,000.
-According to Česká pošta, another problem is that people do not respond to requests for the details needed to complete customs clearance.
-"At the moment there are 30,000 consignments at the international post office that we have to process.
-If people filled in all the required details, and filled them in on time, we would be at roughly half that," says Česká pošta spokesman Matyáš Vitík.
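-The process just described boils down to simple arithmetic: unless the seller already charged VAT at checkout, the customs office assesses it on the price of the goods plus shipping, and the carrier's clearance fee is added on top. Below is a minimal sketch of that calculation in Python, assuming the standard 21% Czech VAT rate; the rate, the fee amounts and the function name are illustrative assumptions, not an official formula.
-```python
-# Rough estimate of the final cost of a small parcel from outside the EU,
-# following the process described above. Assumptions (not from the article):
-# 21% standard Czech VAT rate; all amounts already converted to CZK.
-STANDARD_VAT_RATE = 0.21
-
-def parcel_total(goods_price: float, shipping: float,
-                 carrier_fee: float = 0.0,
-                 vat_paid_at_checkout: bool = False) -> float:
-    """Estimate the total cost of a parcel from outside the EU."""
-    base = goods_price + shipping
-    # VAT is assessed on goods + shipping unless the seller already charged it.
-    vat = 0.0 if vat_paid_at_checkout else base * STANDARD_VAT_RATE
-    return base + vat + carrier_fee
-
-# Example: cards worth 660 CZK with free shipping from Japan,
-# customs handled by the post for a hypothetical 150 CZK fee.
-print(parcel_total(goods_price=660.0, shipping=0.0, carrier_fee=150.0))
-```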
-Solving inflation
-As the title says: how would you propose solving the current inflation?
-We're currently at 9.9% inflation, and further growth is expected.
-What do you think the state should do to slow this growth down, or to compensate for it?
-In Poland, for instance, we see VAT cuts on food and fuel; is that the way to go, in your view?
-What do you think will happen; what is inevitable, given where this is heading?
-Prices are rising faster than wages; in my view it's inevitable that people won't be able to afford ordinary things, above all energy.
-How much of a raise did you get, say (those of you who are employees)?
-I got a 2% gross raise this year, which is an insult, but luckily I have a similar income from a business I run alongside my job.
-Could someone explain to me why demonstrably convicted rapists get sent behind bars for as little as 6 months?
-I simply can't get my head around how a court can send such an animal to prison for just 6 months, only for them to do it again the moment they get out.
-Six months is nothing compared with the trauma the victim will carry for years; it will damage their sexual relationships and relationships in general.
-Not to mention that a rape victim may never recover from it at all.
-Won't this also deter victims from reporting future rapes?
-A man from Hrob "burned" his girlfriend from Kostomlaty
-A man from Hrob unintentionally helped municipal officers detain his girlfriend from Kostomlaty, who was wanted nationwide with an arrest warrant out for her.
-He was the one who called them, after all.
-But he went about it in a roundabout way.
-First he approached a passer-by and made up a story that he had been robbed.
-After the emergency call, officers arrived on the scene and could hardly believe it when the supposed "robbery victim" told them he had invented the whole thing just to get officers to come.
-In reality he only wanted their advice on how to file a report with the Police of the Czech Republic.
-When checking the identities of the man and his girlfriend, the officers then discovered that the woman was on the list of wanted persons, in a nationwide search, with an arrest warrant issued against her.
-The case is therefore being handled by the state police.
-An honest question for people here: do you consider our country Slavic?
-Personally I'm of the view that we haven't been Slavs, ethnically or culturally, for a long time, but I'd be interested in your opinion.
-Otherwise I agree with the meme, of course; it's a shame Churchill didn't manage to arrange for Prague to be liberated by the USA :')
-Naturally I don't deny that our language is Slavic.
-Well, I don't know; it's quite a question whether a rational person can truly believe something entirely without evidence merely because it might potentially bring benefit.
-I personally wouldn't consider such a case genuine faith.
-Here I can't agree with Pascal: if I'm not mistaken, there are an estimated ten thousand or so different religions in the world.
-Which god or gods is one then supposed to choose?
-I'd say it's fairly likely that in one of those thousands of religions there is at least one god who will punish you nastily for believing in a different god.
-Even the Ten Commandments say there is no god but Yahweh.
-Wouldn't it then be more rational to refrain from believing in any god at all, rather than risk picking the wrong one out of those thousands and having the one true god I happened to miss send me to hell or some such place?
-Another option: voluntary training followed by assignment to the reserves.
-The Swiss model is, I believe, similar.
-X months of training (in various specialisations, X months per one) under the command of professionals with practical experience.
-If someone did well, they could get a professional offer.
-Every branch of the Czech Army (AČR) could use something like that.
-It could be done in cooperation with the University of Defence.
-Put it this way: everyone keeps going on about cooperation between the education sector and industry; companies recruit in schools and a kind of interflow takes place, with the workforce migrating from educational institutions into employment.
-And not only while growing up; the process goes on constantly: each of us keeps learning something new, moves from one field to another, and so on.
-A similar interflow should also work between the civilian and military sectors.
-I also see it as a way to build a relationship between the citizen and the army, the institution that guarantees that no Russian, German or, say, Mongol raider will ever storm in here again.
-I find it funny how you treat NATO as something carved in stone: we have allies, so they'll defend us if things go wrong.
-God help us.
-One election in the USA that slashes their budget and the whole of NATO goes to shit.
-The English will trade us for Russian money, the Germans will trade us for Russian gas, and the Poles have already shown once that you only need to turn your back and they'll take what they want.
-The only thing that works long-term as a guarantor of independence is an armed-to-the-teeth army and a population that can operate the military technology of its day.
-And these days every teenager can pilot remotely operated vehicles, so why shouldn't it work.
-We don't need border fortifications, they're useless today, but a pimply teenager at the stick of a remotely operated vehicle will manage.
-How not to drown in the cardboard tsunami
-You unwrap the presents under the Christmas tree and suddenly your home is buried in boxes and packing fillers?
-This "waste" is reused by e-shops, which are short of packaging materials.
-That's why a map was created of shops that will welcome your used boxes.
-And not just at Christmas.
-All packaging materials are designed to withstand repeated handling.
-So it's a shame to treat them as single-use waste.
-Anyone can, by arrangement, bring cardboard boxes and plastic or paper fillers to a participating shop (the map of the KAMsNIM.cz project lists nearly 150 of them).
-They'll thereby support small businesses, reduce the amount of waste produced and avoid the overflowing blue containers.
-The shops themselves welcome the packaging, which is currently in short supply on the market, and the money saved, as packaging cardboard has gone up 50% recently.
-On top of that, it strengthens their brand in the eyes of environmentally minded customers.
-One such shop is TIERRA VERDE, a maker of eco detergents and eco cosmetics.
-"Individuals who accumulate boxes and filler material at home bring them to us in Popůvky u Brna, and companies get in touch as well; with them we have regular pick-ups of discarded cardboard arranged.
-We use all of it when packing orders from our e-shop.
-Thanks to individuals and companies alike, we are together creating a more considerate world.
-Our wish is to preserve nature's resources and beauty for future generations," says Petra Lopušníková of Tierra.
-The www.KAMsNIM.cz application, however, shows more than just drop-off points for packaging material.
-It serves as a search engine whenever you need to get rid of anything (where to take sorted household waste, where to hand in expired medicines, tyres, old electronics, batteries, light sources, bulky waste, etc.).
-All refuse can thus end up in the right place, and reusable things can even find a second home.
-In total, the project's map already contains over 100,000 such places.
-"Collection yards, re-use centres, textile containers, food banks, charity shops, SWAP events and other places that help find a use for things that would otherwise needlessly become waste are gradually being added," adds Miroslav Kubásek, one of the app's authors from the Ukliďme Česko association.
-What seems wrong to me is rather that technology today is so simple and foolproof that children who use a computer or phone play games on it but never pick up basic computer skills along the way.
-Recently a problem surfaced (mainly in English-language articles) that university students don't understand the concept of folders on a computer.
-That's because Google Photos, Apple's photo apps, or really mobile phones in general simply hide the underlying filesystem with its folders and dump everything onto a single screen in an app.
-By all means let them use technology from childhood, but let them actually learn something.
-Let's rewrite history, seriously
-Before the weekend, Emmanuel Macron presented the priorities of the French EU presidency – it begins in January – and it was grandiose.
-Macron spoke for over an hour, unveiling the presidency's logo, calling for the protection of Europeans – at work, on the street – and listing so many initiatives that they can't all be managed in six months.
-But French politicians like it that way, and so do the voters.
-Macron's supporters, straddling right and left, agree on little, but on Europe they do.
-And France holds a new presidential election in April.
-The electoral calendar shaped the priorities themselves.
-Among other things, the French leader mentioned that historians should write "a single history of Europe", and France is ready to create the conditions for such work by historians.
-A number of commentators promptly rushed in with criticism that Macron is after pro-European propaganda and is rewriting history.
-In reality he is rather trying to defend against the rewriting of history.
-The far-right candidate for the French presidency, Éric Zemmour, is right now touring France with the thesis that the Vichy regime, which collaborated with Hitler during the Second World War, wasn't so bad, and he's having quite some success with it among the French.
-Let's take Macron's idea of a single history textbook seriously and set aside events in France.
-Wouldn't it be needed?
-Students in European countries mostly learn history as a story of us versus them, never as the story of a whole.
-Spaniards, the French and Czechs learn who beat whom in which battle.
-Unless they have an enlightened teacher, though, they never learn the broader context of an event.
-The film of the year is Quo vadis, Aida?
-The Czech "Mice" didn't win.
-The story, which returns to the 1995 Srebrenica massacre, also won awards for directing and for best actress, Jasna Duričičová.
-At this year's Karlovy Vary festival it stayed at the top of the audience rankings.
-Best actor in Berlin went to Anthony Hopkins for The Father.
-I'm not very young, not very healthy/fit, and not vaccinated.
-It was roughly like "having the flu / being laid up". For a few days I had diarrhoea and not much appetite for smoking...
-Compared with ordinary flu it was worse.
-With the flu I don't get diarrhoea.
-(Just personal experience. I'm not claiming everyone has it like that.)
-Christmas book tips
-The Christmas double issue, out on 20 December, will contain the traditional literary supplement.
-The culture tips will be published along with it.
-The book tips we're enclosing for you, our subscribers, with this digital issue already, so you have enough time to buy books as Christmas presents if need be.
-Prose pieces that follow up on the previous similar collection Petříček Sellier & Petříček Bellot.
-Another helping of observations of the world and descriptions of everyday things, with an uncommon poetic attentiveness, depth and atmosphere.
-In his second prose work, the photographer Šesták has tried to capture the essence of the small town and of Czech society.
-A tale of a return to one's roots that turns out to be only a longed-for illusion.
-A scholar of Czech studies and comparative literature transplants the Little Red Riding Hood fairy tale into the setting of a present-day village.
-Her rendering outdoes the folk versions in brutality and culminates in a horror story about emotional emptiness.
-And about the fact that the road back to instinct is shorter than one is willing to admit.
-In his penultimate novel, the author tells a far less sentimental story of returning from emigration than we have grown used to hearing.
-Those who stayed and those who left know too little about each other for it to add up to a life together.
-Trains start running to the new timetable; in some places operators change
-From Sunday, trains on the railways start running according to the new timetable.
-The biggest change is the switch of operators on some lines, for example between Ústí nad Labem and Kolín, where RegioJet takes over from České dráhy.
-For most lines, only the departure time changes, and in some cases the route slightly.
-There will also be dozens of new trains on the tracks.
-Operators began selling tickets during the autumn.
-Under the new timetable, České dráhy plans to dispatch an average of 6,783 passenger trains daily, of which an average of 478 will be long-distance trains.
-Trains will cover roughly 118 million kilometres during the new timetable.
-Besides domestic services, under the new timetable České dráhy will also run to Germany, Poland, Slovakia, Hungary, Austria and Switzerland.
-Along with the new timetable, the company will deploy dozens of new trains.
-The main novelty will be the InterJet trains, which will run on the lines from Prague to Cheb.
-The operator will deploy further new trains in northern Moravia and western Bohemia.
-The operator will also, as usual, raise fares from next year, by 3.2 percent on average.
-České dráhy factors inflation into its tariffs every year.
-RegioJet's biggest timetable change is its entry onto line R23 Ústí nad Labem – Mělník – Nymburk – Kolín.
-Having won the Transport Ministry's tender, the operator replaces České dráhy here.
-RegioJet will run a total of 16 services daily on the line, eight in each direction.
-Further changes concern the long-distance services between Prague and Brno, which from Sunday will also stop at Havlíčkův Brod, Žďár nad Sázavou and Kolín.
-Leo Express has kept its 16 services, two return services to Slovakia and the weekend connection to Kraków.
-According to spokesman Emil Sedlařík, the operator tried to keep the journey times of its long-distance trains as close to the old ones as possible despite planned engineering works.
-The trains of Arriva and other operators should also continue without major changes.
-Operators will also change on some regional lines.
-Changes await passengers, for example, in the Česká Lípa region, where on the line from Mladá Boleslav via Česká Lípa to Rumburk the Trilex trains of the German company Die Länderbahn will run instead of České dráhy.
-For the second year, passengers will also be able to use the unified rail fare.
-As with České dráhy, its price will rise by the 3.2 percent inflation rate.
-I have to disagree.
-Don't we learn the other side's perspective?
-Everywhere we hear how hard they had to fight for their rights, how they were oppressed and had to toil, how they died.
-Never in my life have I heard teaching from the slaveholders' perspective or from that era; nobody defends it, only condemns it.
-Nobody at school will even tell you that Black people were often sold into slavery by other Black people, and that they were often the worst slaveholders themselves.
-Nobody at school teaches you that colonisers often bought land from the Indians; everywhere they just tell you how brutally we Europeans slaughtered them, when they had long been slaughtering each other.
-I also spent some time in the USA, right in the schools, both in more northern ones and in a Southern one.
-I didn't encounter anyone deliberately suppressing facts, but I'd heard before that it happens, and I agree it's a problem, I don't deny that (in Japan, for instance, WW2 atrocities are fairly taboo).
-My point was rather that history isn't black and white and that we tend to look at it from today's perspective, without understanding.
-History doesn't care about anyone's feelings; it simply is what it is, and I think it's a fatal mistake to condemn without seeing things from the viewpoint of the time.
-On the other hand, we should learn from it and never repeat it.
-Incidentally, since we're on the Southern states: yes, the Confederate flag and famous slaveholders are quite popular there; on the other hand even they had some genuine achievements, and it struck me as absurd to write them off wholesale.
-Besides, the North wasn't much better, however much people idealise it nowadays.
-And a lot of people also forget that not everyone in the South was a slaveholder and that plenty of things repelled them too.
-I wouldn't compare this to the Russians, who deliberately omit certain facts, lie and manipulate; our perspective simply doesn't exist there (there was even a video on YT from their television where they cut someone off who started talking about our legionnaires and 1968).
-What struck me as ridiculous at US schools was the rise of Marxism and the idealisation of communism, something their country never experienced.
-Overall, some universities seemed appalling to me; the students were quite radicalised and the schools often encouraged them in it.
-And when I imagine these people being much older one day, it makes me a bit sick that this could become the majority voice, because among the young it already is, and in the ruling elite too.
-It seems to me that feminism, say, achieved its goals long ago and is no longer about the same thing; it has radicalised.
-Nowadays people call themselves feminists who have nothing in common with it and ignore basic biological facts, and likewise other groups such as LGBT, and it leads to radicalisation on the opposite side too, often producing resistance even to quite reasonable things.
-What's more, the more radical someone is, the more they are heard.
-Anyway, to close: I also never encountered anyone condemning me for colonialism or slavery.
-What I did encounter was bad geography, but that went both ways :D
-Not because I don't like it here, but because I think it's completely beside the point.
-Should I be proud of something I didn't achieve myself?
-Besides, I consider the concept of nationality altogether useless as a basis for personal identity.
-If anything connects me with people, it's interests, worldviews and shared experiences, not the place where we were born.
-I'm not a believer, but from what I know I can tell you this: we have two Greek Catholic parishes here, one Ukrainian and one Slovak.
-The Slovak priest is a terribly nice guy; his sermons are more about theology than politics, but then he always drops some nonsense about the coronavirus that makes everyone cringe.
-Then of course there's the Czechoslovak Hussite Church.
-Officially they're Protestants, but in reality they emerged from Catholic modernism and are effectively Catholics without a pope.
-I know many people who are Catholics but go to Hussite services, because it's theologically very similar while the members are usually more liberal.
-They have a beautiful and historically very valuable functionalist church on Botanická street.
-Otherwise, the Church of St. Michael on Dominikánské náměstí belongs to the Dominicans, and they even hold a Latin Mass there every Sunday at 3 p.m., the way it was celebrated before Vatican II.
-The flame from Bethlehem is in Czechia; the scouts took it over from Austria
-Břeclav – The flame lit in Bethlehem, where according to Christian tradition Jesus Christ was born, is in the Czech Republic.
-Because of the coronavirus pandemic, the scouts once again didn't travel to Vienna for it; in the morning they took it over from their Austrian colleagues at the Reintal–Břeclav border crossing.
-They received it at the border last year as well.
-"The Bethlehem Light is a beautiful Christmas tradition that our troop takes part in every year; I'm really looking forward to it.
-It's an honour to have been chosen," scout Amálie Budíková told the journalists present.
-While last year the handover took place at the Mikulov-Drasenhofen crossing, directly on the border bridge, this time it was in a car park at the Reintal–Břeclav crossing.
-Usually, though, the scouts travel to Vienna for it by train.
-Nothing changes about the flame's distribution around the Czech Republic.
-The scouts set off with the Bethlehem Light, traditionally by train, first to Brno, where they will hand it over to the diocesan bishop Vojtěch Cikrle.
-On Saturday 18 December, scout couriers travelling on selected express and local trains will handle the onward distribution.
-At the stations, local scouts or volunteers will take the light from them and then carry the flame onward around Czechia, including to places the rails don't reach.
-This year, too, the scouts must observe the coronavirus measures in force.
-"It works much like last year.
-We give recommendations both to the courier teams and to the organisers of local events: wear masks, of course, try to keep distances, keep numbers as low as possible, don't sing carols, simply behave so that it's as safe as possible," said Zuzana Hrbková, spokeswoman for the Bethlehem Light event.
-The tradition of the Bethlehem Light travelling across Europe was born in Austria in 1986.
-The aim is to spread, along with the flame, the idea of peace, friendship and goodwill.
-For believers, the Bethlehem Light is a symbol of hope, of light overcoming darkness.
-In Czechia the scouts have been spreading it for more than 30 years.
-The event rests on hundreds of volunteers, so the flame is also a symbol of selflessness and human solidarity.
-All the latest news, including the list of places where people can come for the flame, can be found at www.betlemskesvetlo.cz.
-I have no education in economics, so I don't know the fundamentals that supposedly prove subsidies are a cancer of the economy, but I don't think subsidies as a whole are the problem.
-Infrastructure development, ecology (e.g. water retention), healthcare and education will put the money to proper use; what I don't get is why the money goes to agriculture, industry and companies in general.
-As has been said, a pointless product gets produced, and it undermines the free market and the "natural life of a company".
-I myself work in a factory where the corridors are plastered with "xy financed/co-financed by project xy" notices, and such a company is simply being kept alive artificially.
-This isn't support for a company that employs x people; it's a brake on development, where this company barely hangs on while taking orders and employees away from companies that could grow and be more productive after its demise.
-Completely agree, it's terrible.
-Even someone born in the internet age sometimes falls for some trick or trap, especially advertising.
-I myself think internet ads don't move me, but then I still catch myself having been influenced by them; it's simply so refined by now that you can't always resist it.
-Partly for that reason I support the radical voices in the European Parliament that currently want a total ban on programmatic (i.e. targeted) advertising...
-It's all filth; in the words of the classic, I'd ban those internets.
-I have the feeling this belief is rooted in (though mostly in point 1):
-1. "I won't believe something the majority believes that makes sense, I'm no sheep; I'd rather believe something less likely that doesn't make much sense, because what matters is having my own original opinion, which I'll insist is critical thinking"
-2. "I'm not going to believe everything the media say"
-3. "I don't trust politicians"
-Television has fallen for the Christmas-movie trend; two hundred premiere this year
-Los Angeles – Cinemas, TV stations and streaming platforms in the United States and other English-speaking countries have fallen for the Christmas-movie trend, and this year they will premiere a record two hundred plus of them.
-The count comes from the operator of the IMDb film database.
-The genre of Christmas family and romantic films has been scoring with audiences in recent years and significantly boosts viewership, so more and more of these films are being made.
-Four times as many Christmas films were shot this year as in 2011, and twice as many as five years ago.
-IMDb included in its tally only films with the word Christmas in the title, so in reality there will be far more holiday films.
-Films that people traditionally associate with Christmas have always existed.
-In Czechia, fairy tales in particular belong to this season; worldwide, popular examples include Home Alone, Love Actually and the classic Christmas story It's a Wonderful Life from 1946.
-But the real boom in Christmas films only took off in 2009, when the American cable TV station Hallmark came up with a special film series, the BBC website recalled.
-Its Advent project called Countdown to Christmas comprised four films at the time and was very successful.
-This year the station began tuning its viewers to Christmas as early as 22 October and will present a total of 42 Christmas films.
-The rival station Lifetime has 35 new Christmas-themed films scheduled this year, and the popular streaming platforms such as Netflix contribute to the overall count as well.
-"In this magical season the story doesn't matter all that much; what matters is that there are lots of Christmas trees in the background and that it's snowing," said Brandon Gray, author of a book on Christmas films called I'll Be Home for Christmas Movies, describing the genre with a wink.
-"For viewers it's a form of escape and a way to feel a bit of calm, at least for two hours, amid all the holiday madness and the madness of the world we've been living in these past few years," Gray added.
-According to him, the Hallmark channel, for example, uses the same recipe for its films over and over, uniform but successful.
-"You have two people who fall in love, then about half an hour before the end some misunderstanding arises, which is duly resolved, and the two kiss.
-It goes like that over and over, and as long as all the films look similar and have a similar atmosphere, people watch one after another," Gray adds.
-Mazepin tested positive for covid-19 and will miss the season's closing F1 race.
-Only nineteen drivers will start the Abu Dhabi Grand Prix in Formula 1.
-Nikita Mazepin tested positive for covid-19 and will not take part in the final race of the season.
-The Haas team will thus field only one car.
-In the last race of the season he was to attack a better finish from 20th place, which he earned in qualifying.
-The Russian Nikita Mazepin of the Haas team will, however, not take part in the Abu Dhabi Grand Prix after all.
-He tested positive for covid-19.
-Only nineteen cars will thus appear on the grid; Mazepin's teammate Mick Schumacher starts from the last spot, and from the first Max Verstappen, who faces Lewis Hamilton in a head-to-head battle for the title.
-According to a Haas statement, Mazepin is reasonably well and shows no symptoms.
-"Nikita is physically fine, as he was asymptomatic.
-He is now in isolation and will follow the guidance of the relevant public health authorities, with safety the ultimate priority for all parties involved," its representatives told formula1.com.
-Haas will not send a substitute driver into the race; indeed it cannot.
-A potential substitute would have had to take part in qualifying or in sessions in another part of the weekend.
-Mazepin is not the first driver to contend with covid-19.
-At the start of the season now ending, Kimi Räikkönen had covid-19; last year Sergio Pérez and Lewis Hamilton tested positive.
-You can be locked up for that too.
-And nobody will care that "the boss said so".
-Legally, covid is on the list of contagious diseases.
-That is, in the same group as HIV, plague, hepatitis or typhoid.
-§ 152 Spreading a contagious human disease
-(1) Whoever intentionally causes or increases the danger of the introduction or spread of a contagious human disease shall be punished by imprisonment for six months to three years, by a ban on activity, or by forfeiture of property.
-(2) The offender shall be punished by imprisonment for two to eight years,
-c) if by such an act he breaches an important duty arising from his employment, profession, position or function or imposed on him by law, or
-d) if by such an act he causes grievous bodily harm.
-(3) The offender shall be punished by imprisonment for three to ten years if, by the act referred to in paragraph 1, he causes grievous bodily harm to at least two persons or death.
-(4) The offender shall be punished by imprisonment for five to twelve years if, by the act referred to in paragraph 1, he causes the death of at least two persons.
-Quiz: Why failing companies are often run by women, and what management must never demand of you
-The wage gap between men and women, the so-called gender pay gap, has long been among the highest in the EU in Czechia.
-In which country are the differences greatest?
-And in which age category and which sector do women earn the least relative to men?
-Test how much you know about unequal pay.
-Gold, silver and 150 diamonds: the price tag of the most expensive sweater will stun you!
-It's a bit like a portable jeweller's shop, and its creator put half a year of work and all his savings into it.
-"I had a vision of what I wanted to create, but little experience; sweaters were never worn much at our house," admits Liban, who spent 3,000 hours on his creation over six months.
-He bought the silk in Italy, the 24-carat gold thread in France, and the firm Swarovski supplied 2,000 decorative crystals.
-He then adorned the silver stars with 150 diamonds.
-"The base is wool and cotton, but silk gives the sweater its softness," the creator says in praise of his work, which he nevertheless advises against washing.
-And there's one more catch.
-"I'm completely broke; I have to sell the sweater as quickly as possible," Liban admits.
-If he succeeds, he will set a world record.
-The most expensive sweater to date, sold five years ago, cost "only" 720,000 crowns.
-If MZ is detached from reality it doesn't much matter; he can have the faulty circuit desoldered and replaced with a new one.
-The fact is that Facebook's departure from Europe would greatly help its non-Russian part (the part under that influence is simply out of luck).
-I think it would clean up the social climate quite a lot.
-It might also clarify the channels of "Soviet fraternal assistance" to some of our political parties and representatives.
-Then the people who vote for them would also see more clearly whose interests are really at stake.
-A shame it doesn't own TikTok too.
-Plenty of adolescents would suddenly discover, to their great astonishment, that the sun also shines outdoors...
-Trump directly called for the abuse of suspects; he is reaping what he sowed.
-On the situation in the USA with a leading African-American reporter.
-New cases of police violence are coming to light in the United States, this time during crackdowns on the nationwide protests.
-The demonstrations, which erupted after African-American George Floyd was killed by a police officer during an arrest, have opened a debate about systemic racism, policing and cases of brutality against American minorities.
-Lenka Kabrhelová talks with one of the leading African-American journalists, reporter Adam Serwer of The Atlantic.
-But what would increase that funding?
-The Union pours money into us through subsidies.
-If it stops doing that, we'll stop having that money.
-I really don't see how the Union ceasing to give us money would make us use the money for something else...
-You can argue that the subsidy money could be used better, but that's a completely different discussion.
-Is it even possible for a pub not to remit taxes?
-Is it even possible for a pub to underreport taxes?
-After all, once a cut of meat passes veterinary inspection, it must be registered somewhere and can't just disappear, right?
-Likewise, Prazdroj and Jelínek presumably don't make special alcohol for the black market.
-Yet quite often somewhere they don't give me a receipt, or they take it right back and throw it away.
-The government approved sending up to 150 soldiers to help Poland.
-Engineers, reconnaissance troops and drone pilots could set out before Christmas; the mission is approved for six months.
-They are to help their Polish colleagues protect the border with Belarus and build the planned fence.
-Poland officially asked NATO states for help in connection with months of actions by the Belarusian regime, which invites citizens of Middle Eastern countries onto its territory with the false promise of an easy crossing of the EU border.
-British and Estonian soldiers are already operating on Polish territory.
-Is the Omicron variant spreading in southern Moravia?
-Hygienists are investigating another case involving a child from Adamov.
-"We currently have another suspected case of this variant reported in another child from Adamov, from a preparatory class.
-No direct contact with the previous cases from the Adamov primary school has been established," Ciupek said.
-Six cases appeared in the region during the week.
-"We are still waiting for official confirmation of the variant in our six cases; it is performed by the National Reference Laboratory for Influenza and Non-influenza Viruses of the State Health Institute in Prague," the director said.
-She added that they are two nurses from one department of the Brno University Hospital and two children of one of them, plus two eleven-year-old pupils of the Adamov primary school.
-There is, however, no link between the Brno and Adamov cases.
-According to the director, three of them have mild symptoms and four an asymptomatic course.
-Nobody suspected of Omicron travelled abroad
-"None of those mentioned travelled abroad, nor did anyone in their families, and there was no contact with anyone who had been abroad.
-None of them has any connection whatsoever to the water polo championship," Ciupek said.
-Chief hygienist Pavla Svrčinová said earlier that an international water polo tournament held in Brno several weeks ago was being investigated.
-Players from South Africa took part, and one Belgian player fell ill after returning home.
-California will restrict gun sales.
-It wants to proceed the way Texas did with its abortion ban.
-California Governor Gavin Newsom announced on Saturday a plan to introduce, in the most populous US state, a ban on the sale and manufacture of certain firearms using the legal mechanism Texas employed in its controversial law against abortions performed after detection of an embryonic heartbeat.
-People would then be entitled to damages when suing anyone who manufactures or sells assault rifles or homemade firearms in California.
-Newsom's announcement reacted to Friday's opinion of the US Supreme Court, which left the Texas abortion ban in force even though it runs against the nearly 50-year-old precedent that established the right to abortion across the USA up to roughly the 24th week of pregnancy.
-The court, however, was not ruling on the constitutionality of the whole law this time, but on a technical question arising from the measure's innovative design.
-Enforcement of the ban was in this case delegated to the public, whereby Texas Republicans made it impossible to challenge the law through the usual judicial route.
-"I am outraged by yesterday's (Friday's) decision of the US Supreme Court, which allowed Texas's ban on most abortion services to remain in place and largely endorsed Texas's manoeuvre to shield its law," the California governor said.
-"If states can now block federal court review of their laws, then California will use that power to protect human lives," Newsom continued.
-He said he had instructed his staff to work with the state legislature and the attorney general on a measure that would empower members of the public to enforce a ban on assault rifles and so-called ghost guns.
-That is the term for homemade firearms that lack serial numbers and can serve to circumvent regulations.
-Newsom wants "private citizens" to have the right to seek damages of at least $10,000 (over 220,000 CZK) plus legal costs from anyone who manufactures, distributes or sells assault rifles, ghost-gun parts or kits for making them in California.
-"If the most effective way to keep these horrible weapons off our streets is to create the threat of private lawsuits, then that is exactly what we should do," the California governor said.
-The AP agency notes that California banned the manufacture and sale of certain military-style weapons for decades, but in June a federal judge blocked the state's ban as unconstitutional.
-If the state now indeed restored the ban using the Texas template, it would bear out the words of the Supreme Court's liberal justice Sonia Sotomayor, who in her dissent from Friday's majority ruling warned against the spread of this legal mechanism to other US states.
-The Supreme Court nevertheless did not grant the Texas abortion ban complete immunity from judicial review and allowed abortion clinics to continue their lawsuits against selected officials of the southern US state.
-Every emergency vaccination has its public testing phase, during which the vaccination schedule is gradually worked out and the vaccines themselves get improved based on the results.
-From Israel, for instance, studies on the effects of the fourth dose are already pouring in.
-And according to these studies, most patients see up to a fivefold increase in antibodies, which already has the long-term effect you mention.
-Simply put, like any other vaccination it will have its schedule in time; it's just too early for that yet.
-Another fact is that before long a new vaccine should reach the market, based on inactivated virus, which according to the manufacturer's specifications promises up to 10x greater efficacy.
-By all means keep the amount of material.
-But rethink WHAT is taught.
-Since the days of Maria Theresa our civilisation and technology have advanced a little, and memorising phone books and copying textbooks into exercise books no longer makes much sense; it's a real waste of time.
-In these areas you could cut back brutally.
-On the other hand, how many people leave primary school with any basic financial literacy?
-And with the other things they'll urgently need in life?
-How can I legally watch the Champions League online?
-Does anyone know of an online service here in the Czech Republic that would let me watch the Champions League for a fee?
-At home we have Netbox and I pay for the Telly sports package for the Spanish and English football leagues.
-That, however, doesn't include the UEFA Champions League.
-I believe O2 offers the Champions League, but I don't want to switch TV and internet providers.
-Poland threatened to halt payments into the EU budget
-According to Ziobro, the European Commission would be acting against the law if it used its new powers and halted payments to Poland over the rule-of-law dispute.
-The Commission has already postponed approval of Poland's plan for drawing 36 billion euros from the EU fund for rebuilding economies hit by the covid-19 pandemic.
-And it is under pressure from the European Parliament to go further and use the mechanism allowing EU subsidies to be withdrawn from countries violating rule-of-law principles.
-"Poland should answer this blackmail by the EU with a veto in all matters requiring unanimity," said Ziobro, leader of the small United Poland party, without whose votes the current government would lose its narrow majority in the Sejm.
-"Poland should also reconsider its commitments under the EU's energy and climate policy, which are driving a drastic rise in energy prices," Ziobro added.
-"If the dispute continues, I will demand that Poland halt its contributions to the EU.
-It would be justifiable, given that the EU is unlawfully withholding from us funds from the common budget to which we also contribute," the Polish minister added.
-His party takes more radical positions on the EU than the ruling Law and Justice party.
-According to the European Commission, the changes the Polish judiciary underwent during Ziobro's tenure threaten its independence and subordinate it to politicians.
-Brussels, according to Ziobro, is setting "impossible conditions, because its goal is not the rule of law but a change of government in Poland".
-Warsaw faces a "political diktat carried out through blackmail and an attempt to undermine the democratic decision of several million Poles", Ziobro also said.
-He said Poland should be a member of an EU founded on a partnership of sovereign states, not on the rule of the strongest and of a Brussels bureaucracy that is not under democratic control.
-He said his party would never agree to concessions to Brussels that would result in limiting Poland's sovereignty.
-"We will never agree to Poland having the status of a colony," he declared.
-But finding your way around... Our Lidl has one kind of cheese in four different places.
-I wasn't looking at other items, though I did run into one yogurt more than once too; I only needed cheese, Parmesan, and after ten minutes at the dairy shelves I gave up and asked.
-They had it, true; that narrow section held all the select, less common and specialty cheeses, but it was between the vegetables and the lactose-free zone...
-If I can avoid it, I'm never setting foot in any supermarket again, deals or no deals; the little shop on the square is worth its weight in gold; they may not have such a selection, but they usually have absolutely everything I need and there's some order to it, so I'm done in ten minutes.
-For Lidl I'd have to take a day off.
-And by now I just don't care anymore.
-For two years I've watched data being handled like manure here; most vaccination opponents are only slightly more off base than most vaccination supporters.
-Rational debate does take place at the expert level, but only extreme views make it into the public space.
-Constantly from one wall to the other.
-Binary thinking: vaccination will save us, vaccination is useless.
-Ban everything, allow everything.
-Colourful pie charts instead of robust analyses.
-Comparing apples with pears.
-This country has it this way and we have it that way.
-But that the data-collection methodology differs between the two countries, nobody addresses.
-Phew, that felt good.
-Sorry for the rant, and I wish you all a pleasant day.
-As a child I was punished with a wooden spoon.
-It was never over grades; mostly it was just that I repeatedly refused to listen and misbehaved (read instead of going to sleep, fought with my brother, etc.).
-At the same time, I was never punished without warning; mum always threatened first that if I did it once more I'd get a smack (sometimes, after catching me again, she would even just fetch the wooden spoon and lay it where I could see it).
-Only after I repeatedly refused to listen did I get a few smacks on the bottom (over clothing).
-Personally I think physical punishment (done sensibly and in moderation) is beneficial, because a child responds to it far more than to words.
-The important part, in my view, is the warning, because it effectively gives the child the choice of either disobeying and getting it, or shaping up and avoiding the punishment.
-In the end, the warning alone was usually enough to make me behave.
-Defending the system
-When the eminent Czech lawyer and constitutional judge Vojtěch Cepl answered a journalist's question in 1999 about what the Czech constitution meant to him – whether a sacred document sworn upon and taught in school from childhood, or rather an agreement that can be changed when needed – he leaned firmly towards the first conception.
-"We once agreed in the constitution on the democratic rules of our life, which at the same time define who we are as a state and its citizens, and changes are best used sparingly.
-And imagine: some nations even love their rules.
-The way Czechs love dumplings with pork and cabbage," was Vojtěch Cepl's gloss on the question at the time.
-Lately, however, the view that the Czech Constitution needs changes has been appearing ever more often among lawyers.
-For years it has been tested by situations its drafters (Vojtěch Cepl among them) could not have foreseen, for example the conduct of a directly elected president.
-In one thing, though, Cepl was right.
-Everything we know about such documents shows that political interventions in their texts must be well thought through.
-The constitution must be understood and actively defended; only then can it be the key to managing most of the crises that societies across history encounter.
-The constitution is, among other things, a kind of order of governance consisting of individual rules that set the boundaries of the game for politicians.
-We fear that power will be abused against minorities or individuals, so we bind politicians with prohibitions.
-At the same time, though, constitutional texts also enable politicians to exercise their power.
-Covid doesn't discriminate; at the Brno University Hospital they are fighting for the life of a months-old baby
-Although the coronavirus is known to be gentler on children, severe cases do occur, and hospitals have been grappling with them especially recently.
-"We know that children are endangered and affected less than adults; the talk is of 2 to 5 percent compared with adults," Petr Dominik, head of the Department of Paediatric Anaesthesiology and Resuscitation of the Brno University Hospital and the Faculty of Medicine of Masaryk University, told Novinky.
-"The course is usually considerably simpler, milder, often asymptomatic.
-But there are paediatric patients who are severely ill with the coronavirus, which we have been seeing especially in recent times," Dominik added.
-According to the doctor, dozens of children need mild supportive care.
-That takes place at the paediatric infectious diseases clinic.
-Truly very seriously ill children with the coronavirus have appeared in intensive care only recently.
-Children with post-covid syndromes have, according to the doctor, been on the ward continuously throughout the year.
-"Now there is a rise in children with acute covid pneumonia, that is, with lung inflammation, requiring a resuscitation bed," he said, adding that alongside adults this illness strikes both adolescents and babies just months old.
-Hospitals are also seeing children in a serious condition because of the coronavirus.
-"At present we have both a months-old child and one of adolescent age here," the head physician said.
-He is glad, though, that the children's hospital of the Brno University Hospital has so far recorded no child death from the coronavirus.
-According to available data, six children aged 0 to 14 had died in the Czech Republic as of 6 December.
-According to Dominik, the involvement of a psychologist is an integral part of care in the children's hospital, and not only on the covid ward.
-He also points out that, just as with adults, vaccination in children softens the course of the disease and reduces the occurrence of post-covid syndrome.
-"That is why we recommend vaccination for children as well," the doctor added.
-In a quiet zone you may walk only on marked paths.
-But the quiet zones aren't all that large.
-They can be seen on the tourist map at mapy.cz.
-In general, in national parks outside a quiet zone you can walk anywhere (but don't climb over the fence into a game preserve).
-Skiing/cycling in the forest off marked paths is forbidden everywhere unless you have an exemption (though of course outside national parks it isn't policed all that much).
-How does a conductor practise?
-"The music runs in my head," laughs Josef Kurfiřt.
-He was weaned on the Liberec opera and originally played the French horn.
-As a singer he can perform practically the entire repertoire, and as a conductor he works not only at the F. X. Šalda Theatre in Liberec but also, for instance, in Plzeň at the Josef Kajetán Tyl Theatre.
-He collaborates with the Hradec Králové philharmonic, the film philharmonic and the symphony orchestra of the Krkonoše foothills.
-Sinologist Jirouš: China is building the impression of a contained outbreak and of an authoritarian regime handling the crisis better.
-China has launched both a medical and a political offensive.
-A few months ago Beijing was fending off criticism for failing to stop an outbreak that turned into a global pandemic.
-Now the country reports zero new infections.
-States including Czechia compete for Chinese protective equipment, and Chinese doctors are helping fight the coronavirus in many places, including hardest-hit Italy.
-How should Beijing's willingness be read?
-Is it friendly support, or is the communist regime trying to polish its image in the world?
-A friend runs into a friend and says: hey, want a little elephant?
-I've got one and he's great.
-The wife is happy because he grazes the grass, he washes the car with his trunk, the kids play with him.
-Simply great.
-If you want, I'll sell you the little elephant for 5,000.
-The friend: all right then, deal, that'll be great...
-Some time later they meet and the buyer complains: dude, what kind of elephant did you sell me????
-The lawn trampled, giant droppings everywhere, he crushed the car, the kids are scared of him and the wife wants a divorce.
-The seller says: that's no way to talk about the little elephant; you'll never sell him like that...
-World affairs are dominated by the great powers.
-Although the equality of sovereign states formally holds, it is the great powers that set the course of international affairs.
-Europe can become such a power only if it works on its integration.
-So far that integration works at the economic and political level (in selected matters); military integration is still missing.
-Personally I think Europe is heading towards federalisation.
-It won't be in 10, 15 or 20 years.
-But perhaps by mid-century the mood will be different and it will succeed.
-That occurred to me too, and it's quite possible.
-I'm no expert on Czech, so maybe I'm splitting hairs.
-I'm just going by the fact that an ellipsis usually covers two different units at the same level.
-To borrow an example from another comment, with "Spanish oranges and tangerines" it's clear both come from Spain, whereas with "Spanish fruit and tangerines" it's no longer clear the tangerines are Spanish.
-I also base it on the sentence reading "all American forces", thus including their weapons, and on knowing that the American systems are operated solely by Americans.
-In other words, I expect this to be covered by the broad term already, with no need to specify it further for the American forces.
-But again, maybe I'm just splitting hairs :D
-Either way, it's a nonsensical demand.
-An earthquake measuring five was recorded in Tokyo.
-The Japanese capital Tokyo and surrounding areas were hit on Sunday by an earthquake of magnitude 5.0.
-Witnesses said buildings shook in the capital, but no damage has been reported so far.
-No tsunami warning was issued either, the Reuters agency reported.
-Vicki Holland of Britain tortured Milly the marmoset
-Horrifying footage shows the moment the terrified monkey cowered in the toilet bowl before her heartless owner flushed her and laughed at her.
-Holland also fed the monkey sausages, kebabs and hamburgers, with no regard for its actual nutritional needs.
-The magistrates' court in Gwent has now banned her from keeping animals for life, The Sun reported.
-The monkey rehabilitation experts caring for Milly after her abuse said they had never seen such a terrified marmoset.
-Milly spent nearly two years in rehabilitation with the staff of Monkey World in Dorset and is now happily playing again with another rescued monkey named Moon.
-The mother of four pleaded guilty to two counts of causing unnecessary suffering to a protected animal.
-The magistrates' court in Gwent gave her a twelve-week prison sentence suspended for one year.
-Holland was also sentenced to 120 hours of unpaid work, received a lifetime ban on keeping animals and must pay the equivalent of 12,000 crowns in court costs.
-Steph Sawyer, head of the Small Monkeys team that rehabilitated the abused animal, said: "Milly is fine, but the rehabilitation will continue further."
-It took Milly a while to get used to people again.
-She cowered and hid from everyone she met, and any loud noise or sudden movement made her scream.
-For a long time the little monkey even refused to eat.
-"Even now, settled and content with a male companion, the sight of new people can still send her into a panic.
-The psychological scars of the abuse will stay with her forever," Sawyer adds.
-Milly's abuse came to light after Gwent police discovered the horrifying footage on the woman's phone following a drugs raid on her flat.
-In the footage, Holland can be heard swearing vulgarly at Milly.
-In another video Holland is heard offering the monkey cocaine, saying: "Want some cocaine?
-Then lick my fingers."
-In May, she and her partner Russell Cox (43) pleaded guilty to possessing cocaine with intent to sell.
-Cocaine worth £1,600 (just under 50,000 crowns) was found in her home, hidden in Kinder eggs.
-Cox was subsequently jailed for 30 months and Holland received a 20-month suspended sentence.
-And what amazing concepts do you take away from doing prehistory three times in a row, while bravely skipping the entire 20th century?
-Like, in your first year of secondary school you take away the same things as in sixth grade?
-And the whole thing is killed by the cult of rote learning, where nobody, honourable exceptions aside, cares whether you can actually do it or understand it.
-The main thing is to ace the test, and after that nobody cares.
-Go out on the street and ask random people whether they can determine the character of the roots of a quadratic equation from its coefficients.
-Everyone went through it, and the absolute majority won't get anywhere and will tell you they couldn't care less.
-So what the hell is it taught for?
-I'm a big fan of general knowledge, and the reality is that people don't want it and feel no need for it.
-And at that point it's pointless and you'll never get it into them anyway.
-Partly it's taught for the many people who will need this or that particular thing.
-But that remark about not needing any of this for standing at a machine, I meant it completely seriously... ...because you simply don't.
-Moreover, we're slowly entering an era when not knowing is a badge of punk.
-(To which our communist past and its agitation against scholars and elites probably contributes.) But the price of having barbarians standing at those machines is simply high.
-If journalists could do arithmetic, for instance, covid probably never would have reached these proportions here.
-Married at First Sight: the war between Kadri and Andrea continues!
-"Which is the main reason he can't leave Switzerland right away," Andrea answered on Instagram Stories to the prying questions of curious fans about what had disappointed her so much about Kadri that she decided to end all contact and even block him on social media.
-Things had been grinding between Kadri and Andrea since the start of the experiment.
-The main problem was that Kadri lived and worked in Switzerland and his idea was that Andrea would move there to join him, at least until he returned to Czechia for good.
-She flatly refused.
-And as is evident, their relationship not only failed to end in love but rather grew into mutual disrespect, even hatred.
-"That was a planned attack on your part!" Kadri fumed in immediate response to Andrea's accusations of lies, gambling and debts.
-The supposed frankness of Kadri's now ex-wife didn't sit well with his younger sister Linda either.
-She decided to stand up for her brother publicly.
-"Normally I never comment on things like this, and even in the family we never really discussed them much.
-I definitely don't want to stir up any pity.
-But when I see someone publicly trying to hurt and smear the name of someone I love so much, I just can't help it!
-I'm sorry I have to do it this way, but I'd like to publicly thank my brother Kadri for making something of himself and helping our family when we needed it most, despite how young he was.
-All the more it pains me to have to read such untrue information, which is probably badly taken out of context.
-I'd so wish everyone could know Kadri the way I, our loved ones and our family do," reads her statement in response to Andrea's words.
-"I'm truly grateful to him for everything!
-Of course people will believe what's written, but the most important thing is that we, his family, love him above all else, know the truth and know how it really was," she added vaguely.
-A drunken thief scaled the facade to the fifth floor.
-You won't believe why.
-Cchao began his looting expedition in a car park in a residential district, where he tried to break into several cars.
-According to available information, he eventually stole the equivalent of just under 330 crowns from one vehicle.
-Then he could think of nothing better than to climb to the fifth floor and crawl into a flat through an open window.
-There he stole two bananas.
-Footage from one security camera then captures him walking down the street away from the crime scene, eating a banana as he goes.
-When the flat's owner woke up in the morning, he found the bananas weren't where they had been and called the police.
-They subsequently detained Cchao.
-The man admitted he had drunk some alcohol on the day in question.
-And since he needed money, he decided, in his drunkenness, to steal.
-The whole matter is still under investigation.
-The drunk climbed a facade to the fifth floor, where he stole two bananas.
-The pandemic law is time-limited, and its effect is tied to the state of pandemic alert.
-When that is lifted, the law will not be in effect.
-True, the law restricts the scope of business
-Isn't that reason enough for you?
-The right of assembly will be restricted, but not abolished.
-Over 60 percent of voters turned out for Saturday's council elections in four municipalities
-On Saturday, people elected new councils in the municipalities of Komňa in the Uherské Hradiště district, Lužice in the Most district, Nová Ves in the Liberec district, and Rovná in the Pelhřimov district.
-In these municipalities, the number of councillors had fallen below the legally required number, or the elected councils had collapsed.
-A total of 99 candidates ran for 28 seats on Saturday.
-The average age of the newly elected councillors is 46.7 years.
-The oldest of them is 69, the youngest 33.
-"Processing the results of Saturday's elections symbolically closes a rather demanding but successful year for us."
-"It saw a total of four new or repeated municipal council elections and, above all, the closely watched elections to the Chamber of Deputies," said Eva Krumpová, vice-president of the Czech Statistical Office (ČSÚ).
-She recalled that due to the covid-19 epidemic, the elections were more demanding in terms of equipment and staffing.
-In Komňa, Saturday's election was won by the Association of Independent Candidates, which took 27.76 percent of the vote and two seats on the seven-member council.
-The STAN list received 24.84 percent of the vote, which also translates to two seats.
-Citizens for Komňa also won two council seats, with 18.52 percent of voters backing them.
-Their list also returned the incumbent mayor Jana Křižková, a member of the Soukromníci party, to the council.
-Komňané – Independent Candidates managed one council seat.
-Turnout was 75.48 percent of eligible voters.
-In Rovná in the Pelhřimov district, the Pro Rovnou association won.
-It took 50.50 percent of the vote, meaning four of the seven seats.
-The council also seated two representatives from Association of Independent Candidates 1 and one from Association of Independent Candidates 2.
-Turnout was 93.62 percent.
-The repeated election in Lužice in the Most district was again won by the Lužice and Svinčice Association, led by mayor Jindřich John.
-It received 56.73 percent of the vote and, as in 2018, holds four seats on the seven-member council.
-Second came the Municipality for the People list, backed by 43.27 percent of voters, which will have three council representatives.
-Turnout was 76.7 percent.
-The election in Nová Ves in the Liberec district was won by the independent candidates of Hope for Nová Ves, ahead of the ANO movement.
-59.88 percent of voters backed the independents' association, giving them four seats on the seven-member council.
-ANO received 40.12 percent of the vote, improving on the regular 2018 election by one seat, for a total of three.
-Turnout was 42.9 percent.
-The State Election Commission will review the results on Monday.
-They will then be published in the Collection of Laws.
-What do you think would be the bigger problem?
-A dead civilian or a foreign politician?
-I think you know perfectly well everything people are writing to you here.
-You're just playing dumb so you have someone to "argue" with.
-If not, that's sad.
-I'm not claiming Christians are degenerates or anything like that.
-I even like a lot of church buildings from an aesthetic standpoint (which, after all, was the goal: to look good).
-And I don't much care who believes in what.
-On the other hand, I do mind how much power the Church had in the Middle Ages, how much money it raked in, the suppression of science, etc.
-Not to mention all the wars it caused, e.g. the Thirty Years' War.
-Tl;dr: believe in the spaghetti monster for all I care, but state and church have no business mixing.
-A man fell twelve meters headfirst.
-He survived the impact with the concrete.
-A man in Ostrava survived an incredible fall on Sunday night, attended to by paramedics of the regional Emergency Medical Service.
-An hour after Saturday midnight, staff at the regional dispatch center took an emergency call with initial reports of a man falling from a height.
-Two EMS crews, one with a physician and one paramedic crew, immediately set out for the scene.
-On arrival, the medics found that the 27-year-old man had reportedly fallen from a window about twelve meters up and landed headfirst on concrete!
-Coal caught fire in Vítkovice.
-But not the way it was supposed to, and firefighters went into action.
-When the rescue teams arrived, the man was unconscious, with multiple injuries, his life in immediate danger.
-"The attending physician intubated his airway, put him on mechanical ventilation, and after further pre-hospital emergency care measures an ambulance transported him to the Ostrava trauma center for further treatment," said Lukáš Humpl, spokesman for the Moravian-Silesian Region EMS.
-I'm more worried about inadequate reactions from the public and the authorities than about the coronavirus
-The spread of the coronavirus in the Czech Republic poses a challenge for politicians and officials, but on the front line of the fight against the infection stand, above all, doctors and medical staff.
-How serious is the situation from their point of view?
-We ask military physician David Řezáč.
-Editor: Matěj Válek. Research: Tomáš Roček. Sound engineer: David Kaiser. Music: Martin Hůla.
-The legendary Nunes falls after seven years; Oliveira defends his belt
-MMA saw a terrific fight night full of interesting results.
-Things were happening at UFC 269.
-Underdog Julianna Peña managed to beat the legendary fighter Amanda Nunes, who had gone a full seven years without anyone conquering her.
-Charles Oliveira did not falter at lightweight, pulling off a superb choke against Dustin Poirier to defend his belt.
-Kai Kara-France also notched a win, quickly sweeping away Cody Garbrandt with a first-round TKO.
-Sean O'Malley beat his opponent as well.
-A surprise nobody saw coming.
-That's what the women's bantamweight bout between the famed Amanda Nunes and Julianna Peña delivered.
-The American entered the bout as the proverbial underdog: Nunes hadn't lost in seven years and was hungry for another triumph.
-The start of the duel, moreover, followed the script on paper.
-Nunes set off toward victory very actively, even landing a push kick on her opponent that sent her to the ground.
-Peña, however, refused to be forced into any further mistake and herself attempted, unsuccessfully, an armbar attack.
-The second round was gripping and very tense for MMA fans.
-Both fighters treated each other to plenty of excellent strikes and hard hooks.
-Peña also took Nunes to the ground, where she began choking her.
-Nunes had to give up the struggle and tap out.
-The American thus delivered a huge shock to everyone by becoming the new champion.
-The highlight of the evening was the lightweight title battle between Charles Oliveira and Dustin Poirier.
-At first Poirier fared better, but the tables gradually began to turn.
-In the second round Oliveira tried to be more active, attempting to wear his opponent down with an armbar.
-That didn't quite come off, but he then built up great pressure, got his opponent on his back, and showered him with a series of blows.
-That won him the second round.
-In the third round Oliveira produced a rear naked choke; Poirier resisted for a while but then had to tap out.
-The Brazilian thus defended his title, while Poirier lost for the first time in two years.
-In another bout, Sean O'Malley chalked up a commanding triumph, landing a hard right cross on Raulian Paiva in the very first round.
-He then finished him with a series of precisely aimed strikes, recording his fifteenth win.
-Kara-France, for his part, dealt with Cody Garbrandt.
-In the spring Nunes celebrated another triumph with her little daughter; now, after seven years, she has lost.
-"Ten kyčel," or "ta kyčel"?
-At first glance, nothing complicated.
-Most Czech nouns express only one grammatical gender, so there is no problem determining whether they are masculine, feminine, or neuter.
-But there is also a fairly large group of nouns whose gender is not settled.
-Such nouns waver between two genders.
-When declined, they take two sets of endings, and in some cases remain indeclinable.
-For instance, the words "svízel" (bedstraw) and "kyčel" (hip) are both masculine and feminine; in the former gender they decline like the paradigm "stroj," in the latter like "píseň."
-Another group of nouns has distinct forms already in the nominative singular, e.g. "řádek/řádka" (line), "kedluben/kedlubna" (kohlrabi), or "brambor/brambora" (potato, in the food sense).
-Both forms are standard Czech, mean the same thing, and are therefore freely interchangeable.
-Some expressions vary regionally, such as "okurka" (cucumber) in Bohemia and "okurek" in Moravia; in this case, though, the Moravian variant is non-standard, and other Bohemian-Moravian word pairs are in the same position: "příkop" and "příkopa" (ditch), "kobliha" and "koblih" (doughnut), and so on.
-Some words that entered Czech from other languages were originally indeclinable but are gradually taking on Czech endings.
-A typical example is "image," which is both masculine and feminine, or "bufet" (buffet), which has remained indeclinable in the neuter but in the masculine takes endings following the paradigm "hrad."
-Turkey has opened the way for migrants to Europe.
-What does the situation look like right on the Greek border?
-Tension reigns on the Greek-Turkish border due to the growing number of migrants trying to get further into Europe.
-Thousands of people began heading for the southern edge of the Schengen area after Ankara stopped holding them back.
-European politicians are promising Greece support, and the Czech government is preparing humanitarian aid as well.
-What actually motivates the refugees to undertake such an uncertain journey?
-And what does the situation look like on the ground?
-"For three months we didn't see blue sky and we were suffocating," says a journalist from Sydney
-The devastating fires Australia has been battling for a fourth month have killed nearly thirty people and hundreds of millions of animals and laid waste to millions of hectares of land.
-How are the local authorities and residents themselves coping with the disaster?
-Could Prime Minister Morrison's government have done more to prevent the drastic impacts, as critics claim?
-And what will the country have to prepare for in the future in connection with climate change?
-Lenka Kabrhelová talks to Sydney-based journalist Ika Detrichová.
-False accusations have always been, and still are, quite rare.
-That's why every single one gets written about absolutely everywhere.
-People find it uncomfortable to deal with what sexual violence looks like in our society and how horribly widespread it is, so they try to talk around it.
-I don't personally know anyone who was falsely accused.
-But I know plenty of people who were raped, and I've seen how the people around them, or even the police, often treat them.
-A victim should always be believed.
-The trend is that victims are finally opening up about their traumas.
-But far too many people still keep it to themselves.
-Yes, there are also those who falsely accuse someone.
-It's vile and a spit in the face of every victim of sexual violence, but by spreading the idea that "a large share of accusations are made up" and that it's a "trend," you're helping only the perpetrators.
-The Czech Republic is awash with half-built houses; families lack the money to finish them
-Building material prices have risen by more than 30 percent in recent weeks and months.
-This has left many people in a difficult situation.
-They lack the funds to finish their half-built family homes, and banks are refusing to increase their mortgage loans.
-Besides material prices, the cost of construction work is also rising.
-People therefore don't have enough money to finish houses already under construction.
-In many cases banks refuse to top up their mortgages, creating extremely unpleasant situations.
-"In the better case, people move into unfinished houses without occupancy approval."
-"In the worse case, the unfinished houses are uninhabitable and families are forced to sell them because they can't afford to pay the mortgage and rent on top of it," says BHS economist Štěpán Křeček.
-"We build two or three family houses a year, and it happened to us on fifty percent of them."
-"For us as a construction company it's hard because we're contractually bound on certain things even though materials have gone up."
-"So we're working without profit," said construction company owner Zdeněk Slivoň.
-"A lot of people have financial problems still ahead of them."
-"If they counted on the house costing five million to build, it will now cost them seven."
-"I think some will wait it out," Slivoň added.
-Among materials, copper, iron, and plumbing and heating equipment have gone up the most.
-Construction companies are also battling a labor shortage.
-The Czech Republic lacks graduates in construction trades, and the influx of foreign workers is being slowed by the pandemic.
-Only the situation around building permits is more favorable at the moment.
-"Building authorities issued 7,675 building permits in October, almost 10 percent more than a year earlier," Křeček specified.
-We are doing well and will be doing even better.
-But real vision is missing, says a Czech Radio commentator – mujRozhlas
-Alongside the usual celebrations, the start of the new year was traditionally accompanied by politicians' speeches.
-This year, besides Prime Minister and ANO leader Andrej Babiš and President Miloš Zeman's Christmas message, the speakers of the Senate and the Chamber of Deputies also addressed the nation.
-What of substance did we learn?
-That's exactly why I think the master's thesis is great: I have my own topic that I chose, I'm building on my bachelor's thesis, I work on it all year and then write up the written part in a week or two.
-Final state exams here are totally relaxed; unless you're a complete macaque and something stayed in your head, the committee won't needlessly grill you on theory.
-I, for one, studied a week for my state exams, and whenever I was stuck, the committee always tried to steer me toward some logical derivation that clicked right away, and I got it right.
-As for the projects, I know people who pay someone to do their semester project for them (we did plenty of those projects ourselves; genuinely useful, you learn a lot) and then just memorize the project and they're set.
-In my view it's great when a course ends with an exam that draws on the knowledge gained in the project, not just a project defense.
-All fine, but don't flood the emails and phones and don't send any packages to the embassy.
-You'd be just as big a dick as they are.
-The people at the embassy may have nothing to do with it.
-And if they were against Russia, they'd be risking a lot, so maybe they have to play along, because otherwise something could happen to them.
-But feel free to put up a similar statue of Putin next to the Winnie the Pooh statue.
-Maybe even position it so he's grabbing Xi Jinping's backside or something.
-I agree, even though Insta keeps throwing obstacles in artists' way.
-The moment you don't post stories every day and a new picture at least every other day, your reach drops to an absolute minimum.
-On top of that, they keep changing which function matters more: a like, a comment, or a save.
-It's been pissing me off terribly lately, so I may have to stoop to TikTok, where a lot of artists in my field are successful and hardly have a bad word to say about it.
-In the end I might even be glad if something more user-friendly came along that doesn't suck all the creativity and energy out of artists.
-Statement of the Workers' Party in the Donbas
-"Union – yes, breakup – no," opponents of the breakup of the USSR voice their opinion in the photo.
-Thirty years since the unlawful breakup of the USSR.
-On December 8, 1991, the greatest geopolitical catastrophe in human history took place.
-In the Białowieża Forest on December 8, 1991, Boris Yeltsin, Leonid Kravchuk, and Stanislav Shushkevich, without any legal authority and in violation of the results of the referendum of March 17, 1991, with the openly indulgent acquiescence of Mikhail S. Gorbachev, secretly and with no regard for the people, signed an agreement that "the Union of SSRs as a subject of international law and as a geopolitical reality ceases to exist."
-With one stroke of the pen they "abolished" a vast country of almost three hundred million inhabitants.
-With the collapse of the USSR, tens of millions of ethnically Russian citizens found themselves abroad.
-Since the early 1990s, Russia's population has shrunk by ten to eleven million.
-Even disregarding the loss of the non-Russian population of the former Soviet republics, we have already lost more people than in the two world wars combined!
-Even earlier, the same people who in a single sitting in the Białowieża Forest destroyed what had been built over the previous seventy years betrayed the socialist camp (created at the cost of millions of lives in the Second World War and the Great Patriotic War).
-They knowingly carried out deindustrialization, stalled agriculture, and tore away from the world's greatest power fourteen republics that had previously been economically joined into a single mechanism.
-If we want to look deeper still, we see the impoverishment of the population; the collapse of the economy, science, and the army; rising crime and interethnic conflicts; the war in Chechnya; all the conflicts in the post-Soviet space; the series of orange revolutions; NATO's eastward expansion; the war in and breakup of Yugoslavia; the Arab Spring; the war in Syria. All of this is the result of geopolitical capitulation, of surrendering first the socialist camp and then the Soviet Union.
-There is a concept in political science called a "power vacuum."
-Everything that was hastily betrayed and surrendered was quickly filled and conquered by the NATO countries, which accepted our geopolitical capitulation.
-And the whole world is shaken to this day, chiefly because of the events of the late 1980s and early 1990s.
-The price of a product sold in a high-turnover supermarket need not directly correspond to its grade and quality.
-It's Monday and the supermarket has some meat costing 189 CZK/kg.
-I buy it, planning to put it in the fridge and make dinner from it on Thursday.
-In an alternate reality where I don't buy that meat on Monday, the chain discounts it on Tuesday to 99 CZK/kg. Describe to me the mechanism by which the change in price turns that meat into a burden on my digestive system.
-Or I wait until Thursday, when the meat, one day before its use-by date, is discounted to 69 CZK/kg. How would that meat differ from the one I bought on Monday for 120 CZK more and left in my fridge for three days?
-I'll answer myself: in no way at all.
-This nonsense that if something is cheap it must necessarily be bad, spoiled, or low-quality is terribly dumb, to tell you the truth ;-)
-Just recently I was walking to the main station in Brno, and in the underpass some young women were handing out a brochure. I always take things like that to help out the temp workers, since they can't just throw them away...
-Well, the brochure was full of common sense and conservative views on how the world works, but nothing about god anywhere. I was confused, but I suspected it would be some kind of propaganda piece.
-After finishing it I looked up who published it, and it turned out to be Scientology.
-Well, it was decent stuff, full of utterly pointless precepts, like that I should wash and not be a jerk.
-A waste of paper; rainforests needn't be felled for this.
-I went through something similar with an ex-girlfriend.
-Psychological manipulation and emotional blackmail make you comply with that person because you love them, without you realizing how f*cked up the situation is.
-Several times she threatened to hurt herself because I went out for a chat with a friend she didn't like.
-Or when I wanted to leave her flat early, she burst into tears and begged me on her knees not to go anywhere.
-Then she started physically blocking the door.
-For about a year it was a great relationship, but then another half year passed and she started going off the rails.
-Then I ended the relationship by telling her I was breaking up, but I lied that we could still talk it over the following week, to calm her down and keep her from flying into another rage.
-A person like that drains your feelings, your emotions, and your joy in general.
-Better to keep your distance.
-Meanwhile the health insurers are being furiously plundered through the purchase of piles of tests and the hunt for positives who, were it not for the test, wouldn't even know they had the terrible disease.
-The only thing we're achieving is complications for companies, carriers, and others, because the random number generator dealt their employees five days of house arrest.
-In the West they've already stopped this charade and admitted there's no point dealing with a disease weaker than the famous flu.
-Unfortunately Válek is new and still has to line his pockets and stroke his ego by dreaming up harassment.
-I see the good old scam is making a comeback.
-After several years of decline and suppression of this shady business, MLM recruiting is back in the limelight.
-I was one of the recruits; I gave it a try (I was 20, a university freshman). The initial promises of training in products and sales skills quickly flipped into "never mind that, just bring in people."
-I was honestly interested in the products I was offering, since I wanted to help people, but the training was more about how to scare someone and talk them into it.
-When the first earnings came in, you quickly realized that if you wanted to make money, you had to push a few particular products every month.
-Investment life insurance and mortgages were the only profitable ones, so you ended up feeling like a door-to-door pot salesman.
-What I can say is that the experience was valuable; it taught me not to jump at every bait and to verify information thoroughly.
-At the same time, I wouldn't lump everyone together.
-There are people in this business who are successful and even genuinely helpful, but those certainly won't brag about expensive consumer goods or a "fat" bank account.
-At the corporation where I work, I haven't experienced many things like that.
-HR is chill, the managers mind their managing and don't stick their noses into our business much.
-Regular reviews do happen in some form; the way they're done here is quite bearable (set yourself some goals for next year; in a year we look at what worked out and what didn't). It's more self-assessment than someone rating you against numbers and the like.
-Company events are likewise optional.
-But even within our company we're a bit of an exception; there are departments that are more "corporate."
-Sometimes it feels like we're a kind of almost-startup squatting in the offices of a big corporation.
-But it works; they don't bother us much as long as the results are there.
-The Czech woman missing in Britain is dead.
-Her body was found in London.
-For almost ten days British police searched in vain for the missing 32-year-old Czech woman, who disappeared at the end of November.
-On Sunday, December 12, outgoing Foreign Minister Jakub Kulhánek announced on social media that the woman from the Uherské Hradiště area had been found dead.
-"This afternoon British police unfortunately confirmed to our embassy in London that they have found the body of the missing Czech citizen."
-"The cause of death is under investigation."
-"Out of consideration for the family we will not release further information on the case."
-"Sincere condolences," Kulhánek wrote on Twitter.
-The young woman was last seen on November 28 on a bus on her way home from work; before boarding she reportedly withdrew money from an ATM.
-Her work colleagues reported her disappearance five days later.
-London police then began searching for her, and Interpol listed her as missing worldwide.
-She thus also appeared in the Czech database of missing persons.
-In connection with the case, police detained a man several days ago.
-They did not disclose what role he is alleged to have played or what he is suspected of.
-Four municipalities treated themselves to new councils at year's end
-On Saturday, December 11, new councils were elected in the municipalities of Komňa in the Uherské Hradiště district, Lužice in the Most district, Nová Ves in the Liberec district, and Rovná in the Pelhřimov district.
-In these municipalities, the number of councillors had fallen below the legally required number, or the elected councils had collapsed.
-99 valid candidates ran for the 28 seats in the new elections.
-Turnout reached 62.41%.
-The highest interest was recorded in Rovná, where 93.62% of eligible voters cast ballots.
-A total of 8 women and 20 men won seats.
-The average age of the elected councillors is 46.7.
-The oldest is 69, the youngest 33.
-A total of 13 candidate lists were registered for the new council elections in the four municipalities.
-36 women and 63 men ran for the 28 council posts.
-The average age of the candidates was 46.6.
-The youngest candidate was 22, the oldest 72.
-"Processing the results of Saturday's elections symbolically closes a rather demanding but successful year for us."
-"It saw four new or repeated municipal council elections and, above all, the closely watched elections to the Chamber of Deputies."
-"For most of it we had to work under tougher epidemic conditions, which placed greater demands on equipment and staffing," said Eva Krumpová, first vice-president of the Czech Statistical Office.
-The last polling district was processed on Sunday, December 12, at 03:49.
-On Monday the State Election Commission will review the results, and once approved they will be published in the Collection of Laws.
-This is the biggest problem I have with the whole pandemic.
-Coming to terms with the fact that we have a fairly dangerous contagious disease here took me a while at the very beginning, but it went without major hitches.
-Coming to terms with how idiotically a large part of the population, across all its strata, approaches it is something I still struggle with.
-What I look forward to most about the vaccination (tomorrow!) is that thanks to it I'll finally be less dependent on other people not being jerks.
-He would be stripped of the office of president and of the capacity to regain it.
-But the chance of that actually happening is, as others here mention, very small.
-Besides, I'm not sure whether shredding a file could even be considered high treason.
-High treason is an act by which the president of the republic endangers the sovereignty, territorial integrity, or democratic character of the state.
-It would have to be something more serious.
-How the pandemic affected intimate life: The number of people under 35 who go a whole year without sex is growing
-More and more young adults in the USA are living their lives without sex.
-It is mainly religiously inclined people, reports the DailyMail website.
-The survey showed that from 2008 to 2021, the share of people under 35 forgoing a sex life rose from eight to 21 percent.
-More women between 18 and 35 than ever before reported having had no intercourse in the past year.
-Other factors also contribute to the decline in sexually active individuals, the Institute for Family Studies (IFS) survey showed.
-One may be the economic impact of the coronavirus pandemic and a higher unemployment rate.
-But the presence of media, social networks, and video games, which make sex less and less of a priority for young people, may also have contributed.
-"Since 2010, the share of men and women aged 18 to 35 reporting no sex in the previous year has been rising rapidly," said IFS researcher Lyman Stone.
-Among married people sexual activity is more frequent; for 2021, only 5% of them reported going the past year without sex.
-Among single people it was 29%. Stone added that marriages under the age of 35 account for only a small percentage.
-Fear of premarital sex and religious disposition also contribute to the decline in sexual activity.
-Although married people are more likely to be sexually active, the percentage of married men and women under 35 keeps falling.
-Young people are divided on premarital sex: about 30% consider it a bad thing, while roughly 70% think it's fine.
-"It's true that among single individuals in this age group they are a minority, but their behavior is driving this trend," Stone says of those thirty percent.
-For most of those who have a moral problem with premarital sex, the reason is religious disposition.
-"Since 2008, among single people under 35 who attend religious gatherings more than once a month, the abstinence rate has risen from 20 to almost 60%."
-"Among the 'less religious' the trend rose from 10 to 20%," Stone stated.
-Other factors also contribute to the decline in sexual activity, such as less social interaction and especially less social drinking during the pandemic.
-The study also showed that sex is less likely among people without work or with lower incomes.
-Another reason may be the spread of digital media, which apparently reduces the need for sex.
-People spend more time online, thereby "substituting" for that need.
-This trend took hold mainly during the lockdown in the coronavirus pandemic.
-The whole covid vaccine mandate debate is about whether society should force part of the population into behavior they don't want but that may save their lives.
-It's a fairly hard question, and what interests me most about it is the question of society's conscience.
-I.e., for example, whether, if we don't mandate it and they die, it will be our fault.
-My argument is that we certainly would be to blame for the death of an eighty-year-old who didn't really know much, whom we failed to give a proper explanation, who overheard some disinformation and as a result didn't get vaccinated, then finally caught it and died.
-On the other hand, I don't think we're to blame for the death of a hardened vaccine opponent who yells alongside the SPD and KSČ about bullying and a totalitarian state.
-From the statistics I mentioned one can quite clearly infer that most unvaccinated pensioners probably belong to the second group, so it really will be their own doing.
-A Czech Republic without snow.
-How will a mild winter affect the fight against drought?
-This winter has so far brought the Czech Republic one of the smallest snowfalls in recent memory.
-Ski resort operators can't do without artificial snow, and the weather is complicating preparations for the Jizerská padesátka cross-country race, among other things.
-Is this a trend or an exception?
-And what will little snow mean for the fight against drought in the Czech Republic?
-I do have one story, though it's not about Bible-thumpers.
-Once in high school a teacher led us across the whole town to the boathouse so we could go rowing on the river.
-On the way we walked down a fairly wide street, and who do we see in the middle of it but followers of the Hare Krishna sect.
-Of course they swarmed us.
-Luckily I got away, but they got talking with one of my friends.
-When she left them, the teacher and I asked her what she'd told them.
-"They asked me whether I wanted to save my soul."
-"I told them I don't have a soul," she replied.
-All of us, teacher included, laughed the whole way to the boathouse.
-We are terribly spoiled.
-Not that much is happening, yet the system is already collapsing, says Orozovič
-After earlier visits to Paris and Brussels, the new German chancellor Olaf Scholz flew to Warsaw on Sunday, where Prime Minister Mateusz Morawiecki welcomed him with military honors.
-"We are opening a new chapter in our mutual relations," Morawiecki said at a joint press conference after the talks.
-Scholz stressed that Europe must jointly make clear it will not accept a violation of Ukraine's territorial integrity.
-The crisis triggered by the worrying movements of Russian troops near the Ukrainian border should, according to the chancellor, be addressed through diplomatic negotiations, including within the "Normandy group" of France, Germany, Russia, and Ukraine.
-Morawiecki said he had briefed the chancellor on the situation on the Polish border with Belarus, whose leader Alexander Lukashenko artificially provoked a migration crisis and "uses people as live targets and as a weapon, because night after night we record a hundred attempts at (illegal) border crossings."
-He discussed further sanctions with the chancellor, "so that Lukashenko's regime and its patrons in the Kremlin finally understand that we are determined to defend the EU's eastern border."
-According to the DPA agency, Scholz gave assurances that Warsaw enjoys German support in the dispute with Belarus and condemned the Lukashenko regime's inhumane treatment of refugees.
-A drunk Polish nun caused an accident and tried to cover it up
-The car returned to the accident scene after a while, but it was now driven by a different nun, who tried to take the blame.
-When police told her that even so she could lose her driver's license for failing to yield and fleeing the scene, she came out with the truth, the TVN24 station reported.
-She admitted that a different nun had hit the vehicle and asked her for help.
-The police then came for Sister Celestina.
-They gave her a breathalyzer test, and after finding over two per mille of alcohol in her blood, they immediately confiscated her driver's license.
-At the same time they informed her she would answer for her actions in court.
-Hugo the dog does what he can.
-But Juraj Šajmovič failed to keep his film on track.
-Czech makers of family comedies have taken their cue from American stories about beloved dogs.
-But in doing so they forgot something essential: the rules of the filmmaking craft.
-After F. Brabec's kitschy film Gump - pes, který naučil lidi žít, another picture, Tady hlídáme my, is now vying for audiences' emotions in cinemas.
-Co-writer and director in one, Juraj Šajmovič Jr. loosely follows up on his previous 2012 film Tady hlídám já.
-The talking dachshund Hugo returns to the scene, along with some familiar characters around him.
-Julie and Ivan, owners of a Šumava guesthouse that is withering away, so they start inviting dog owners to it; Julie's father with his partner; and above all her daughter Veronika.
-She is no longer a little girl but an adolescent experiencing first love.
-The director and his partner Beatriz Šajmovičová (also the film's producer) struggled with storytelling technique in the previous dog picture too, but there at least the children and the dog were entertaining.
-This time the creative duo has written an even weaker script, one that evokes a mixture of astonishment and embarrassment.
-Let's sum it up.
-Julie, though a scientist, succumbs in her longing for a child to obscurantist delusions, and whenever the right "constellation" arrives, she copulates with her forestry engineer Ivan wherever the coordinates happen to dictate, on a car hood or a church tower (naturally during an ongoing tour with a local guide); retired colonel Mojmír, despite years of training, shoots his own daughter (Julie) in the woods; she falls into a coma, whereupon the family takes her out of the hospital so that a miraculous process of healing by dog can unfold in the heart of Šumava solitude.
-Nothing against nature's cleansing cure and the power of animal companions.
-Their owners know why they have them.
-But the viewer marvels at what a hodgepodge of content, full of implausible situations and figures, was needed for this message.
-A pair of thieving women on the staff, a dog owners' competition, a Šumava charlatan, cops arriving on a tip to search for "drugs" and discussing the fertilizing power of bone meal over the herbs, cops whom the crafty little family at the guesthouse naturally gets drunk.
-When the heroine comes out of a severe coma and moments later sits at the family table, permed and made up, with a cigar, demanding her father's whisky and a slab of meat as a cured vegetarian, one cannot help laughing.
-On top of that, the filmmakers explain to the viewer that "that sometimes happens after a coma."
-Šajmovič's team lacks basic dramaturgical knowledge of working with a text, the ability to construct load-bearing situations, a feel for the characters and for a punchline, and directorial guidance.
-The acting is uneven, the editing helpless, and the overall impression slapdash.
-However much Lukáš Vaculík, Jitka Ježková, and Nela Boudová try to hold up their parts, they have little to play.
-The film's only asset remains cinematographer Vladimír Holomek's poetic shots of Šumava nature, and the pair of dachshunds.
-It is not enough to sketch a few characters, a flimsy plot, and doggy one-liners, let alone the more folksy vulgarities the characters indulge in.
-Nor is long-standing membership in the Dachshund Breeders' Club an argument, as in the producer's case.
-Behind good intentions of promoting nature and the friendship of man and dog there must also be knowledge of the craft, if one wants to tell a believable story.
-In this case, that did not succeed.
-For a good family picture there is a bit too much eroticism here and a minimum of feeling for the genre.
-Even as an advertisement for canine therapy, this amateurishly conceived piece wouldn't pass.
-Yes, respect, because they have to listen to constant abuse precisely from morons like you.
-There's a difference between offering and forcing; you can see here you understand jack about it, but that's only because you've never tried it yourself.
-The decision is always the customer's; if they don't want it, the answer will always be no.
-If you had to listen to talk like that all the time, maybe you'd change your mind.
-It's a job like any other; in my case, a side gig for extra income.
-The Middle East is suffering unusually dry months.
-Yet winter is the only time of year when it rains.
-"The almost total absence of precipitation during November, as recorded at some stations, is unusual," the Israel Meteorological Service confirms.
-The village of Kfar Giladi in northern Israel, for example, reports only six percent of its long-term precipitation average for November.
-This week's two-day rain was therefore rather an exception.
-"For us it's good."
-"It hadn't rained here for a long time."
-"It also makes the right Christmas backdrop," rejoiced Nazareth resident Wasím Aškar.
-Precipitation in Israel comes almost exclusively in the winter months and is sporadic and irregular.
-The forests depend on the winter rains.
-Without them they dry out and become prone to fires.
-It's not just about forests but also drinking water reserves and irrigation for farmers.
-Israel's largest freshwater source, the Sea of Galilee, filled to the brim this spring thanks to the last three rainy winters.
-Since then the level has been falling.
-Water managers warned of drought long ago.
-"It can be expected that, depending on global warming and climate change, precipitation here may decrease," predicted Uri Schor, spokesperson for the Israel Water Authority, in 2018.
-Israel can help itself with technologies such as desalination and wastewater recycling.
-Economically weak countries like Lebanon, Syria, and Jordan are worse off.
-Tanker trucks are multiplying in the streets of Jordan's capital, Amman.
-Water mains and private wells are drying up.
-"This year my orders are up seventy to eighty percent compared with the two previous years," tanker driver Imád Sulejman reported in September.
-Clashes broke out between farmers and security forces in Isfahan, Iran.
-The reason for the protests was drought.
-The local river's bed was left entirely without water.
-The region has just had its driest November in many years.
-Israel is preparing military action against Iran
-The Israeli defense minister declared that the Vienna talks had brought "no progress" and that he had informed Washington of preparations for a strike on Iranian nuclear facilities.
-Minister Benny Gantz declared on Saturday that he had ordered the Israeli army to prepare for the possibility of a military strike against Iran, Jonathan Lis reports.
-Gantz, who is in the USA, is trying to persuade the Americans to step up their pressure on Iran, but he has also informed Washington of the military preparations.
-During a press conference in Florida, Gantz said the nuclear talks in Vienna had brought "no progress" and the world powers "understand that the Iranians are toying with them."
-About three years ago this happened to me too.
-I pried at the young woman a bit and grilled her during the conversation to find out what she actually wanted from me.
-In the end I found my suspicion of a pyramid scheme was justified.
-Since I really don't like these scams, I needled her a while longer with doubts and questions, then finally thanked her and left.
-Feel free to tell me I'm a lout, but a pyramid is a pyramid and financial advisors are financial advisors.
-Skiers headed to the mountains over the weekend, met by plenty of snow and fine weather
-Czech mountain resorts saw their first bigger rush of skiers this weekend.
-After heavy snowfall at the end of the workweek there is no shortage of snow, and some ski areas started operating as a result.
-Not even the obligation to show a covid certificate at the lifts deterred downhill skiers.
-While lift operators aren't complaining of a lack of customers, some ski equipment rental shops report weaker demand for their services than before the epidemic.
-Thousands of people in the Liberec region hit the cross-country trails and downhill slopes over the weekend.
-The weather favored skiers, offering sunshine today along with excellent snow conditions.
-"We're satisfied; the opening weekend really worked out, starting with Friday night skiing, when we had the first skiers on the hill," Jakub Hanuš, director of the Ještěd Sports Area, said approvingly of the interest.
-Hundreds of people also headed to the Jeseníky mountains for the new season's first weekend of skiing.
-Ski Aréna Karlov and the resort in Branná in the Šumperk district, for example, were open.
-"Weekend attendance was very decent; an estimated 400 people came both Saturday and today."
-"Conditions are super."
-"The sun was out today, it was around minus three degrees, so perfect," said Rostislav Procházka, a representative of the Branná ski resort, not sparing the praise.
-Ski area operators may sell a ski pass only to people who are vaccinated or within the recovery window after having covid-19.
-With few exceptions, people are prepared for it and present the required documents, René Hroneš of the Špindlerův Mlýn ski center told ČTK.
-"We've recorded only a handful of incidents," he added.
-Some ski equipment rental and retail shops report lower interest than before the pandemic.
-"Fortunately there is interest in renting skis."
-"It's not like in past years, but there are still plenty of customers," said Alexandra Bokišová of the Opava shop Skiopava.
-She expects a bigger rush during the school ski course season.
-David Šinták, managing director of the Hradec Králové company Snowbear, also feels that because of covid, interest in renting ski equipment isn't what it used to be.
-"By this time before the pandemic we were almost fully rented out."
-"Compared to before the pandemic we're at about 50 percent," Šinták told ČTK.
-According to him, people have grown lazy with the pandemic and learned to sit at home.
-By contrast, the rental shop at the Novako area in Boží Dar is seeing high demand.
-They started renting skis there a week ago, and customers already have to book in advance.
-"We're starting to rent cross-country skis this weekend, but people have already been calling ahead, so we expect big interest, same as last year," said the area's operator Pavlína Nováková.
-Interest in the ski school is also comparable to pre-epidemic times, she says.
-If we want the successful and wealthy not to leave for abroad, they must be able to live as good a life here as abroad.
-That certainly doesn't include socialist healthcare, where it's often impossible to find a dentist or a specialist.
-It's the smart and capable people with no property here who leave for abroad.
-A business owner really won't just up and leave.
-But with the rest I completely agree.
-If these people can't live well in the Czech Republic and don't see a reasonable future here, they simply won't live here.
-Emigration from Hungary began like this: one fine day Orbán won, governed for a year, and suddenly annual emigration rose by a few tens of thousands.
-It's naive to think the Czech Republic couldn't find itself in a similar position overnight.
-Another question is elections in general.
-If life here gets bad, some traditional V4-style madman could win here too.
-The young and educated will leave; his supporters will stay, along with people whose property can't be stuffed into a plane.
-A terrifying photo!
-Langmajer covered in blood over a bet for a beer?
-While autumn was in full swing in the Czech Republic, the crew of the film Ostrov, led by Jiří Langmajer (55), was enjoying tropical weather in Thailand!
-Meanwhile, the actor posted a bloodied photo of his face on social media.
-Is it a real injury, or makeup for the shoot?
-London police are still searching for the missing Czech woman.
-She was last seen on her way home.
-"Petra's disappearance is completely out of character and we are becoming very concerned for her," Lucy O'Connor of the police unit in the borough of Lambeth, where Srncová worked, said in a Saturday video.
-"Her family in the Czech Republic are also very worried about her and simply want to know where she is," she continued.
-According to her, the missing Czech woman left work on Sunday, November 28, around 19:45 and headed home to Camberwell.
-She was reportedly last seen on a bus about half an hour later.
-One of her co-workers reported her disappearance on December 3.
-According to British media, Srncová worked as a "nursing assistant" at Evelina London Children's Hospital, part of the Guy's and St Thomas' hospital trust.
-"We are extremely worried about our dear colleague Petra, who is missing," the hospital group said in a Twitter post.
-"We would urge anyone who may have any information that could help find her to contact the police," the statement continued.
-MP Harriet Harman is also calling on the public to help; she highlighted Srncová's case at a press conference on Saturday.
-"She has been missing for several days, she is only 32, she is from the Czech Republic, and her parents are understandably worried to death," the Labour politician said, holding the photograph of the woman distributed by London police.
-"I feel we all have a particularly great responsibility to try to find her, because she was away from her home country, away from her family, and working here for our health service," Harman said.
-Police had earlier detained a man in connection with the case; he remains in custody.
-According to the BBC news site, however, police have provided no information about his identity or what he is suspected of.
-Russia is not capable of occupying Ukraine, and certainly not with 30 BTGs (i.e., about 5 divisions).
-Even I don't underestimate Ukraine that much.
-Those aren't "enormous quantities" but about 8 percent of the Russian army.
-Note that Ukraine keeps repeating that we're exaggerating the threat of invasion, and it's getting fed up with our meddling.
-I quoted it above.
-I don't know what made you conclude that Russia wants war.
-War is damn expensive fun, and Russia has the GDP of Italy.
-The comparison with the situation in '38 is off on so many points I don't even know where to start.
-I might as well compare it to the First Punic War and the "annexation" of Sicily :D
-I can imagine that after Ukraine announced it doesn't intend to honor the Minsk agreements, Russia will annex those laughable republics.
-That's about all, and the "concentration" at the border would be consistent with that.
-Well, pacta sunt servanda...
-New timetables take effect in Prague from Sunday, mainly affecting suburban services
-From Sunday, passengers in the Prague Integrated Transport system (PID) face several changes, mainly concerning suburban services.
-New lines have been created, some have changed route, and others have been discontinued.
-The Mladá Boleslav area is newly joining the integrated system.
-In the capital, from Sunday, express trains from České Budějovice stop at the Zahradní Město station.
-In suburban rail, S7 trains are launching, running from Beroun to Český Brod via Prague's main station.
-The R17 express from České Budějovice and Benešov will newly stop at the recently opened Praha-Zahradní Město station.
-PID is expanding into further areas.
-Among other things, buses will run as far as Světlá nad Sázavou, Blatno u Jesenice, Staré Splavy, and Turnov.
-Buses in the Mladá Boleslav area will be incorporated, including lines extending into the Liberec and Hradec Králové regions.
-As part of the integration, 77 lines will be abolished, 37 new ones introduced, and service on 12 existing lines adjusted.
-A new bus line, 405, will depart from Prague's Zličín, running all the way to Žatec.
-A new direct connection Prague – Kralovice u Rakovníka has also been created, replacing the discontinued S53 train line.
-Services from Prague to Rakovník will be strengthened in the morning peak and at weekends, with the new express line 404 starting up.
-Lines 400 and 410, running to the Liberec region, are newly included in the PID system.
-They depart from the Střížkov metro station, not from Nádraží Holešovice.
-The backbone line 400 runs via Mělník, Dubá, and Česká Lípa to Nový Bor, with selected services continuing to Rumburk or Cvikov.
-The supplementary line 410 runs via Mělník and Dubá to Doksy, Mimoň, and Jablonné v Podještědí.
-Conversely, service on ten Central Bohemian local railway lines is newly cancelled or reduced, including to Mochov, Dobříš, and Rožmitál pod Třemšínem.
-All trains departing Prague at 02:30 are cancelled.
-Due to railway modernization, long-term restrictions continue on the Praha – Beroun and Praha – Lysá nad Labem lines and around Kolín.
-Changes await passengers in other places as well.
-Buses are replacing some cancelled railway lines, and the section of line 420 from Dobříš, with connections from Prague, is being extended, so PID tickets can be used as far as Milevsko.
-The routes of lines 540 to 543 in the Nymburk area have changed, and the routes of some buses on the border of Central Bohemia and the Hořovice area in the Plzeň region have been adjusted.
-A healthy snack/lunch for the office from the supermarket
-Hi, I work a classic 9-5 with a 30-minute break, and my only option for getting food is to pop next door to Billa or a bit further to Lidl.
-Given that I get no exercise and have no energy to work out after work, I need to eat as healthily and lightly as possible.
-Unfortunately I never know what to buy, and in a hurry I end up with pizza rolls at best, plus a yogurt and an apple for a snack.
-Question: what healthy items that need no cooking would you recommend buying at the supermarket?
-Not everyone waits for a meter of snow like you; that's just how it is, unfortunately.
-And it's not about trees you'd necessarily see.
-Just the tip of a young tree may be hiding under the snow.
-If it gets damaged, the young tree can become more susceptible to fungal disease.
-I'm not claiming that's the only reason they forbid us to ride off-piste, but it's one of them.
-Dara's confession about her relationship with Nedvěd: I wasn't looking forward to this at all
-Since Friday, the little pond of Czech show business has lived for nothing but the revelation of the relationship between Dara Rolins and Pavel Nedvěd.
-They've been together since summer; the famous footballer even got divorced over the singer.
-Dara has now sent fans a lengthy message explaining why she kept the romance secret from them for a long six months.
-"I dare say there is currently no one in the Czech Republic or Slovakia who has missed the news that Dara has bagged a bear ('medvěd'), pardon, a Nedvěd," jokes Dara Rolins, who is head over heels in love with the most successful Czech footballer.
-Though reportedly he picked her up, not the other way around.
-For three days they've been the center of attention, and although they're used to public interest, they're not happy about it.
-"And here it is."
-"The thing neither of us was looking forward to, but we knew it would happen one day," the singer continues.
-"I just don't know who's worse off."
-"Those who aren't interested at all and have it popping out at them from every tin can, or us, whose lives are being dissected in detail."
-"As if any of you wanted to hear opinions on whether you and your husband or girlfriend are well matched, or insisted that really everyone know in detail the list of your exes and be acquainted with the inventory of your errors and mistakes."
-"That's just peachy, right, you want that," Dara objects.
-The couple got together in Italy, where Rolins traveled to prepare her new fashion collection.
-Nedvěd, for his part, has long worked there as vice-president of the Juventus football club.
-They only came out with the truth now because they were waiting for Nedvěd's divorce to be finalized.
-He and his wife Ivana had been separated for three years, but on paper they have been divorced for only three weeks.
-"In any case, to those who rejoice with us and wish us well, thank you."
-"We're only human too; we have families, children, pasts, and dreams."
-"We're not perfect, but I think we both have our hearts in the right place."
-"That's partly why I love my new man, and just as he stands by me, I stand by him. For better or worse," Rolins concluded.
-Hi, the other commenters have probably said everything important already; I'll just confirm that the dorms are great to start with. My classmates mostly met and made friends during the first semester or two and then found shared rentals together, which seems like the best option, because you know who you'll be living with.
-Flats usually aren't advertised very far in advance, so you probably won't find much now, but it certainly doesn't hurt to look at the listings.
-Otherwise, definitely avoid not just Cejl but also the surrounding area (streets like Vranovská, Francouzská, etc.; that's a pretty bad address); some parts of Židenice are a bit of a ghetto too.
-By contrast, the Veveří district is very studenty; Královo Pole and that direction are fine, and it's also close to most of the VUT faculties (I don't know exactly where you're starting).
-I've never actually hunted for a rental myself, but I'm Brno born and raised, so I can advise on Brno as such if you're still missing any information :)
-Now there's an example of completely "normal" reasoning.
-Because of what a few doctors somewhere in Poland decided to do, it's apparently perfectly fine that the state underfunds certain schools.
-Either let them teach what they want and pay for it themselves, or let them follow the state curriculum and have the state pay.
-Surely we can't let a private actor take over a chunk of education just because they toss a few crowns on top of the full state contribution and thereby get to teach whatever they want in the schools.
-Such a claim loses some weight when it's written by someone who two days earlier commented as follows on a petition drive to boycott a totalitarian state:
-So to you, a vote against whether someone gets an abortion is the same as a vote against some statue standing in a square?
-If I were you, I'd consult directly with the person who assigned you the work.
-Otherwise, in cataloguing/digitization I've seen people (even long-time professionals) either eyeball it somehow, or write something like xxx *** or ... (per convention) and note that it's illegible.
-The truth is, in this case it's quite legible, so I probably wouldn't exactly recommend that.
-Personally I'd probably deal with it somehow in a note, depending on what program you're using.
-If you wanted to be a proper, diligent student, you could look into some character databases and find the closest match.
-But since it looks like you're drawing on some book, my guess would be that the author or printer simply created his own character matching what is physically on the coin.
-PS: Shouldn't it be the Odrysian ("Odryská") empire (kingdom) rather than "Odrynská"?
-PPS: someone here has already cracked it for you.
-Check out the comment with ΦΙΛH.
-The politicians have no idea what the "theme" of our presidency will even be.
-That's a much worse mess than the fact that they'll have interpreters with them.
-The idea that they would approve something because nobody understood some text is laughable.
-All important documents up for approval are examined word by word; basic English isn't enough for that anyway, that's a job for lawyers.
-Hundreds of translators and interpreters work in the various EU institutions; English is useful to politicians more for informal contacts and building closer relationships.
-Besides, the English question is rather interesting after GB's exit from the EU.
-I don't get the hate for Cejl.
-I've been working there for three years now and it's totally fine.
-I often ride home from work even at 10 p.m. and never any problem.
-Only someone who has hardly ever set foot there could call it a ghetto.
-Yeah, probably most of Brno's Roma population lives there, but all they do is get in the way on the sidewalk and park where they shouldn't :D It's definitely not like I'd be afraid to go out on the street there in the evening.
-So if you're looking for relatively cheap housing with good access to the center, I'd happily go for it.
-Plenty of flats there have been newly renovated or newly built.
-Felix Slováček (78), without Dáda or mistress Gelemová, left all alone!
-Who will he spend Christmas with?
-On Sunday most people lit the first candle on their Advent wreath; not Felix Slováček.
-"I don't have an Advent wreath, so there was nothing to light."
-"I saw Dáda's wreath, and Lucie surely has one too," the saxophonist told Blesk, confirming Patrasová's words that he visits her often.
-He visits, but doesn't live in their house in Vinohrady, where Dáda has remained alone since his departure.
-Slováček still doesn't know where he'll be on Christmas Eve.
-"Recently we got together with Anička, Felix, and both grandsons."
-VIDEO: Felix Slováček and Lucie Gelemová: TOGETHER AGAIN!
-Felix Slováček and Lucie Gelemová: TOGETHER AGAIN!
-"But we kept chatting about other things, so Christmas never came up."
-"I really don't know where I'll be."
-"I buy presents as I go, and I'll definitely buy something for both Dáda and Lucie, probably perfume."
-"I'm a gentleman," added Felix, who came alone to a music video launch at the Richman club.
-"I'm here alone, but I don't feel alone."
-"I always find someone I enjoy talking to," says Slováček, who was glad to run into Luděk Sobota's wife Adriana and singer Kamila Nývltová.
-And he made it plainly clear.
-And are we Iceland, that we could afford to have no soldiers or weapons?
-I doubt anyone will defend us on our behalf, and our location is strategic enough that an aggressor would have to be a complete idiot not to occupy this territory.
-And I don't see why that should be a bad argument; explain that to me, please.
-I don't know of any other service that could be deployed in hospitals in a crisis.
-There are too few police and they can't afford it, same with firefighters, and nowhere else is there such a high percentage of people with that level of medical training.
-And the fact that our army can defend Ostrava at best falls on the heads of previous governments, not the army; it's been begging for new toys for quite a while.
-Record drought in the Czech Republic.
-Agricultural subsidies must change; the landscape shouldn't be just a food factory, says a journalist.
-The Czech Republic is going through its worst spell of drought in recent years.
-According to scientists, water has diminished in mountain and foothill areas, and lower rainfall totals are being recorded even in places that never lacked moisture before.
-The cause of the drought gripping much of Central Europe is climate change.
-But the impacts are amplified by the way we manage the soil.
-What should we prepare for in connection with the drought?
-And how can we help nature in hard times?
-Yeah, that'll be it. I can barely stay upright on skates on the ice, I know squat about playing hockey, and the tactics (what I tried in Franchise Hockey Manager) are generally a crapshoot too.
-And whether you're hockey team "bear Russia" or "llama China," being two goals down, even when you don't absolutely have to win the game, is different from when it's level.
-But either way, listening to Czech commentators point out what the referees do and don't notice, in men's and women's hockey alike, the whole thing is weird. But that's probably how it is with all sport: UEFA and the "Italian actors," or motorsport, F1, WRC, etc.
-Controversy is everywhere.
-And imho if it were the other way around, it would be like every other time, and given that Czechia has lost to Russia more often than it has won, it would probably be the classic "THEY lost" vs. "WE won."
-Coronavirus: Infections in Russia top 10 million
-On Saturday, December 11, the Czech Republic recorded 9,080 daily infections.
-5,766 people are hospitalized.
-In total, 34,451 people have died in the Czech Republic, with another 74 added.
-Confirmed cases over the past 14 days stand at 1,967 per 100,000 in the Czech Republic, and 871 per 100,000 over the past week.
-The number of infections in Russia passed 10 million on Sunday.
-29,929 new infections were registered in the past 24 hours.
-That is the lowest daily count since October 13.
-The total number of registered infections in Russia is 10,016,896.
-Daily deaths number 1,132, the lowest daily death toll since late October.
-Britain faces an "inevitable" large wave of infections caused by Omicron, Dr. Susan Hopkins, chief medical adviser to the UK Health Security Agency, said on television on Sunday.
-New quarantine measures will be needed.
-People infected with Omicron are already hospitalized in Britain, and Hopkins expects their numbers to grow.
-So far no one has died of Omicron, but hospitalizations occur about fourteen days after infection and deaths about three weeks after infection.
-British Labour leader Keir Starmer stated on Sunday that Boris Johnson apparently broke the law by holding a Christmas quiz in Downing Street last December, when a lockdown was in place and Christmas parties were banned.
-One minister in Johnson's government argued in his defense that the quiz took place "virtually," over a computer.
-However, groups of staff took part in it at Downing Street, gathered around computers.
-Pressure is mounting in Britain to remove Johnson from the premiership.
-Last December, when a strict lockdown was in force in London and Christmas parties were banned, Johnson's ministers threw numerous parties in spite of the lockdown.
-The British public and media are furious that Johnson and his government made fools of them:
-Paul Brand, UK editor of the commercial broadcaster ITV: Two years ago today, Boris Johnson won a landslide general election victory.
-This morning, the Conservative Party is talking about the need to remove him as prime minister.
-Remarkable how quickly events have developed.
-Do you want to turn around and save your skin?
-Hungary faces elections in the spring that could end Viktor Orbán's twelve-year rule.
-They will be elections of Europe-wide significance.
-How fair can they be expected to be?
-They won't be fair.
-Most likely they won't even be free, because the last two elections under Orbán weren't either.
-His party Fidesz controls the media, redraws electoral district boundaries to profit from it, and pulls other tricks, big and small.
-The latest so far is that, in practice, anyone can vote wherever they like.
-This will allow Fidesz to bus voters from decided districts into those where the outcome is uncertain and the opposition could succeed.
-So I just repeat: they won't be fair at all.
-Do you think it will be as unfair as in 2014 and 2018?
-After all, the situation is markedly different.
-It used to be a question not of whether Fidesz would win, but of by how much and whether it would also hold a constitutional majority.
-Now there is a real chance that the united opposition will win more votes and seats.
-That is quite unfamiliar territory for Viktor Orbán and his party.
-Won't they play even harder in their effort to hold on to power?
-Yes, we have some indications that they are prepared to go beyond what they have done so far.
-A recording of the speaker of parliament and one of Fidesz's leaders, László Kövér, recently leaked to independent media; in it he tells the heads of the secret services that the opposition is a threat to national security.
-Are those the signs of a new approach you are talking about?
-Yes, that is one of the new developments.
-It all starts with language.
-I was beaten quite often, the last time at 14; as for my parents, mum doesn't have much patience, neither do I, dad keeps his for a long time but then blows up extremely (only where I'm concerned).
-At the same time I'm very hot-tempered, and as a small child I threw terrible tantrums, to the point of lying on the floor in convulsions and turning completely blue; about twice they put me under the shower to calm me down.
-Sometimes they gave me an educational slap; sometimes it was more that they simply didn't know what else to do.
-I definitely have a tendency now to solve things with violence too; as a kid I got into fights quite a bit, now to let off steam I at least punch something, and when I was younger I would, say, slap my parents on the hand (so I wouldn't get smacked in the face too hard), so never anything extreme, but I always have the urge.
-I can't tell to what extent it's down to my explosive nature, but the upbringing certainly played its part too.
-I'm afraid I'll also lose patience with my own kids and deal with it the same way.
-I think hitting children is simply wrong and my parents shouldn't have done it, especially not once it was no longer "educational" but done out of frustration; on the other hand, probably every parent screws something up, it's probably impossible not to mess your kids up at least a little, so I don't hold it against them.
-It doesn't offend me in any way; I don't understand why OP should be ashamed of anything.
-Laws should be clear and unambiguous.
-So the real cunts, I would say, are the people who produce laws of this quality.
-Otherwise I'd be curious whether you're not afraid of losing your income?
-Can you really count on people wanting and buying the product, so that you'll have enough to pay the mortgage?
-Kočner's monstrous world.
-Where will the trial of journalist Kuciak's murderers take Slovakia?
-The main hearing is beginning in Slovakia in the trial of the four people charged with the murder of Ján Kuciak and Martina Kušnírová.
-The death of the investigative reporter and his partner changed Slovakia.
-It set civil society in motion, but it also exposed the practices of the accused businessman Marián Kočner and his ties to the top ranks of Slovak politics and the judiciary.
-How fundamental a turning point will the trial be for Slovakia?
-Journalists bear enormous blame for this.
-How is it possible that this petition got an order of magnitude more media attention than the counter-petition by the deans of all the medical faculties, which came out a day or two later?
-No, they let themselves be fooled, and society paid the price.
-On Czech television, people were still dying "with covid" as recently as six months ago.
-Another government is ending and the law on involving municipalities in repository siting is nowhere in sight
-Minister Karel Havlíček's draft needs a fundamental rework
-Prime Minister Andrej Babiš's government is at its end, and the law that was supposed to ensure that the interests of municipalities and their citizens are respected in the selection and licensing of a deep geological repository for high-level radioactive waste still does not exist.
-The government's Legislative Council suspended its deliberation of the draft, which the Ministry of Industry and Trade submitted to the government after years of delays.
-Its content, however, is in serious conflict not only with the affected municipalities associated in the Platform Against the Deep Repository, but also with the Union of Towns and Municipalities of the Czech Republic.
-Local governments expect the law to significantly strengthen their say in decisions about the repository, as has been promised to mayors since 2011, when the first work on this legislation began, and as is required by the Czech Atomic Act and the European directive.
-We expect the new government, in line with its coalition agreement, to rework the draft in cooperation with the municipalities.
-The municipalities' main objections to Minister Karel Havlíček's draft law, which the Platform has at its disposal, are: The proposed degree of involvement of municipalities and the public in the decision-making process on selecting the repository site is insufficient and cannot ensure that the interests of municipalities and their citizens are respected.
-It can be truly effective only if municipalities or the public can influence whether the process continues at a given site at all.
-This can be ensured by obliging the Repository Authority to obtain the consent of the affected municipalities before launching a specific procedure.
-The submitted outline proposal almost entirely neglects public involvement and reduces the municipalities' citizens to mere extras in the licensing proceedings.
-The draft lacks a systemic framework of compensation for municipalities covering the entire process of searching for and selecting the repository site, its licensing and its operation.
-Under current legal norms, municipal representatives realistically have few ways to defend their citizens' interests in the search for a repository site.
-Only in some licensing proceedings can they submit comments or appeal, but the decision rests with an authority or a minister whose interest lies in issuing the permit.
-A possible lawsuit has no suspensive effect on exploratory or construction work.
-The co-decision of local governments that the Platform demands is a principle commonly used in many mature democracies, and certainly in those that have already advanced in licensing a repository, such as Sweden or Finland.
-The preparation of the law is, moreover, another failure of the state administration, which hires external law firms to draft legislation.
-In this case it concerns a contract with the law firm HAVEL & PARTNERS s.r.o., concluded by SÚRAO and following on from contracts with the lawyer Jan Zemánek.
-According to the contract register, the total sum for this work is to amount to almost 4 million crowns.
-Antonín Seknička, deputy mayor of the village of Cejle in the Hrádek site area and spokesman for the Platform Against the Deep Repository, said: After industry ministers who merely deferred to their successors the task of levelling the position of local governments vis-à-vis state authorities in the search for a deep repository for high-level radioactive waste, we expect a more pronounced turnaround from the new government.
-We are also offering a helping hand.
-We also thank the Union of Towns and Municipalities for its support; it views the issue of municipalities' insufficient rights in such a crucial construction project much as the directly affected municipalities in the selected sites do.
-The Platform Against the Deep Repository brings together 51 members (35 municipalities and towns and 16 associations) with the aim of pushing through a change in the state's approach to handling spent nuclear fuel and other radioactive waste, one that will not be limited to a deep repository.
-The Platform further advocates that any decision on selecting a site for possible disposal be conditional on the prior consent of the affected municipalities.
-Fear drove actor John Goodman (69) to lose weight: he shed 90 kg.
-Although for many years he felt no need to change anything about his lifestyle, in the end his doctors scared him.
-They told him that if he didn't lose weight, he would die.
-And that worked.
-Goodman gradually shed 90 kilograms, half of his original weight of 180 kg, The Sun reports.
-He showed off his new figure in Los Angeles at the premiere of the animated series The Freak Brothers.
-The fat guy from the sitcom Roseanne is a completely different person!
-John liked to joke that friends and family begged him to lose weight because his large body made the furniture crack.
-"I was putting everything in my mouth," the actor said in a 2018 interview with AARP.
-This time I wanted to do it slowly.
-Move, exercise.
-I'm getting to an age where I can no longer afford to sit still, Goodman, whose transformation is remarkable, told ABC.
-It also depends what kind of boss it is and where you want to use the word "boss".
-If it's a text on a platform that expects readers from a gaming background, I wouldn't translate it at all.
-If it were, say, a formal or academic text, I would probably look for a way to describe or explain a boss instead.
-On top of that, there are several kinds of bosses.
-A game like Dark Souls and the like has several bosses, right, so a "boss" is something like the lord / ruler of a given level, and then there's the final boss...
-Many games have hidden bosses (super boss, hidden boss) that don't need to be defeated at all for the game or level to be completed, but they are often even stronger than a standard boss.
-Then there are games like Half-Life, where bosses exist but the player never fights them directly (Tentacle, Gargantua), so can they even be called that?
-And then there are mini-bosses.
-A blanket one-word translation of "boss" probably just isn't possible; Czech and other languages don't really deal with it (the only interesting case is perhaps Catalan, which renders boss as the final adversary).
-In short, it is a computer-controlled opponent significant to the story or the game in general, stronger than all previous ones and guarding the completion of a level or quest.
-The whole world is searching for missing Petra from London.
-The Czech police have joined in as well.
-British police have been searching for missing Petra Srncová since 3 December.
-The Czech police have joined the search too.
-They have been looking for the 32-year-old woman from the Uherské Hradiště region since 7 December.
-At the same time, they are assisting the British police through Interpol.
-Czech national Petra Srncová was last seen by her colleagues on 28 November.
-British police have been searching for her since 3 December.
-Interpol has issued a so-called yellow notice for her.
-So the whole world is looking for Petra.
-"The Czech police are cooperating closely with the British police," confirmed police spokeswoman Kateřina Rendlová.
-"We are sharing information on the case," she added.
-The announced search for Petra has already appeared on the police website.
-According to it, she is 168 centimetres tall and thin, with brown eyes and long straight hair of the same colour.
-She is said to come from the Uherské Hradiště region.
-Petra worked as a nurse in one of London's hospitals.
-Friends and colleagues fear for her; such a disappearance is very unusual for her.
-The local MP Harriet Harman has also joined the search for Petra.
-She took part in putting up flyers with Petra's face.
-"We are enormously worried about her," she said at a press conference on Saturday.
-In connection with the disappearance, British police have already detained one suspect.
-It is not clear, however, who he is or what connection he had to Petra.
-Agent Tesla is terrorizing pre-Christmas Czechia.
-While October data pointed to a slight decline in attack campaigns, last month, with the end of the year approaching, the attacks intensified markedly.
-We recorded a large Agent Tesla campaign on 18 November.
-The attacks are deliberately targeted at the Czech Republic.
-The attackers' strategy remains the same for now.
-The infected e-mail attachment is meant to catch the user's attention with a name referring to payments and financial transactions.
-While last month the dangerous attachment had the word "faktura" (invoice) in its name, this time it was labelled "Kopie oprav účtenky za 11,2021...exe" (Copy of receipt corrections for 11,2021...exe)," said Martin Jirkal of Eset.
-The spyware contains functions that scan web browsers and other programs, for example the e-mail clients Microsoft Outlook, Mozilla Thunderbird or Yandex.
-The malicious code actively searches for stored login credentials, which it then sends to the attackers.
-The last strong campaign in Czechia ran at the turn of August and September, and with the holidays and the end of the year approaching, attacker activity is rising again.
-The Formbook spyware also remained active in November.
-Unlike Agent Tesla, the attackers in this case are not targeting the Czech Republic specifically, and in November security specialists mostly detected campaigns with a global reach.
-Compared with the October data, Formbook weakened slightly in November, but it still accounts for nearly a fifth of all detections.
-The attacks ran continuously throughout the month, with elevated activity on 3, 10 and 15 November.
-Formbook most often came with an .exe attachment named REQUEST FOR SPECIFICATION.
-The name "účtenka" (receipt) keeps appearing as well.
-An attachment in Czech can then be far more dangerous for a Czech user.
-Security analysts record a significant decline and dampening of activity for the Fareit program, which was behind 1.6 percent of attacks and has not had any larger attack campaign in Czechia for the past several months.
-Today's little demo by the smart alecks who don't need oxygen, because oxygen is for vaccinated idiots.
-The march through Prague was bigger than the media report.
-Judging by the footage of the march along the embankment and by my experience as a demonstrator, I'm not afraid to estimate around 10 thousand people.
-The people in the march filled the embankment, the bridge and the embankment road on the opposite side.
-That means there really were a lot of people.
-An unprecedented number of passers-by are spontaneously joining the march of roughly four thousand demonstrators.
-I maintain that something new is being born here, writes Radek Mokrý.
-That the persistent dissatisfaction of several large groups or strata of the population has led them to find a common language.
-Anti-vaxxers, anti-maskers and co. alone could neither fill nor pay for a march this big.
-Events held by the Chcípl pes association are growing in popularity; it reminds me of Milión chvilek pro demokracii turned inside out.
-Sometimes I get the impression they even rent the same stage and equipment.
-Hard to say what kind of movement could be moulded out of this dissatisfaction; it depends not only on the inflow of money but also on whether Pfizer's three-dose vaccine turns into a four-or-more-dose one.
-It certainly won't be a left-wing or centrist movement, you can bet on that.
-The three-dose vaccine will almost certainly become a multi-dose one, because it is obvious that we will have to get boosters every six months.
-I am very glad the vaccines saved us.
-A brilliant feat by scientists, of which humanity is rightly proud.
-The stalemate is over; Bulgaria has a new prime minister preaching change
-Bulgarian President Rumen Radev tasked Kiril Petkov of the anti-corruption movement We Continue the Change, which won the November elections, with forming a new government.
-He has already managed to put together a broad-coalition government, which should take office within a few days.
-The country's political crisis has lasted since April, when the previous government lost the elections under the weight of anti-corruption protests.
-The winning parties, proclaiming a fight against bribery and abuse of power, were unable to reach an agreement, however, so two more early elections followed.
-What do cats do when no one is watching?
-The "secretly" captured footage has become a global hit.
-Concern grew in Britain over the weekend about the fate of a 32-year-old Czech woman for whom London police have been searching for several days.
-Petra Srncová was last seen two weeks ago as she travelled home from work to the south of the British capital.
-Besides the police, her employer and the MP representing the part of London where she lived are also appealing for information about the children's hospital worker.
-"Petra's disappearance is completely out of character and we are becoming very worried about her," Lucy O'Connor of the police unit in the borough of Lambeth, where Srncová worked, said in a video on Saturday.
-"Her family in the Czech Republic are also very worried about her and simply want to know where she is," she continued.
-According to her, the missing Czech woman left work on Sunday 28 November around 19:45 and headed home to Camberwell.
-She was reportedly last seen on a bus about half an hour later.
-Her disappearance was reported on 3 December by one of her co-workers.
-According to British media, Srncová worked as a "nursing assistant" at Evelina London Children's Hospital, which belongs to the Guy's and St Thomas' hospital trust.
-"We are extremely concerned about our dear colleague Petra, who is missing," the group of healthcare facilities said in a Twitter post.
-"We would like to urge anyone who may have any information that could help find her to contact the police," the statement continued.
-Zdeňka Dvořáková Kocourková, an amateur painter of children's rooms from Šumperk (and also a regional Pirate Party councillor), was reported by an anonymous informer for infringing copyright with her murals.
-The court, however, found that the paintings of Krteček in Šumperk children's rooms do not break the law.
-In Ústí nad Labem, a hole in the road in the form of a manhole without a cover gaped open for a month.
-Lives were at risk.
-The city hall referred complaints to the ŘSD, which owns the road, and since it reportedly did not respond, the hole went on gaping.
-In the end the authorities clarified responsibilities and ownership, and after a month the ŘSD began to "address the situation intensively".
diff --git a/spaces/zestyoreo/vtryon/util/util.py b/spaces/zestyoreo/vtryon/util/util.py
deleted file mode 100644
index df1edbe65f37403dfb15625cbafa54fd41f1cd6d..0000000000000000000000000000000000000000
--- a/spaces/zestyoreo/vtryon/util/util.py
+++ /dev/null
@@ -1,94 +0,0 @@
-from __future__ import print_function
-
-import torch
-from PIL import Image
-import numpy as np
-import os
-
-def tensor2im(image_tensor, imtype=np.uint8, normalize=True):
-    if isinstance(image_tensor, list):
-        image_numpy = []
-        for i in range(len(image_tensor)):
-            image_numpy.append(tensor2im(image_tensor[i], imtype, normalize))
-        return image_numpy
-    image_numpy = image_tensor.cpu().float().numpy()
-
-    image_numpy = (image_numpy + 1) / 2.0
-    image_numpy = np.clip(image_numpy, 0, 1)
-    if image_numpy.shape[2] == 1 or image_numpy.shape[2] > 3:
-        image_numpy = image_numpy[:,:,0]
-
-    return image_numpy
-
-def tensor2label(label_tensor, n_label, imtype=np.uint8):
-    if n_label == 0:
-        return tensor2im(label_tensor, imtype)
-    label_tensor = label_tensor.cpu().float()
-    if label_tensor.size()[0] > 1:
-        label_tensor = label_tensor.max(0, keepdim=True)[1]
-    label_tensor = Colorize(n_label)(label_tensor)
-    label_numpy = label_tensor.numpy()
-    label_numpy = label_numpy / 255.0
-
-    return label_numpy
-
-def save_image(image_numpy, image_path):
-    image_pil = Image.fromarray(image_numpy)
-    image_pil.save(image_path)
-
-def mkdirs(paths):
-    if isinstance(paths, list) and not isinstance(paths, str):
-        for path in paths:
-            mkdir(path)
-    else:
-        mkdir(paths)
-
-def mkdir(path):
-    if not os.path.exists(path):
-        os.makedirs(path)
-
-
-def uint82bin(n, count=8):
-    """returns the binary of integer n, count refers to amount of bits"""
-    return ''.join([str((n >> y) & 1) for y in range(count-1, -1, -1)])
-
-def labelcolormap(N):
-    if N == 35: # cityscape
-        cmap = np.array([( 0, 0, 0), ( 0, 0, 0), ( 0, 0, 0), ( 0, 0, 0), ( 0, 0, 0), (111, 74, 0), ( 81, 0, 81),
-                (128, 64,128), (244, 35,232), (250,170,160), (230,150,140), ( 70, 70, 70), (102,102,156), (190,153,153),
-                (180,165,180), (150,100,100), (150,120, 90), (153,153,153), (153,153,153), (250,170, 30), (220,220, 0),
-                (107,142, 35), (152,251,152), ( 70,130,180), (220, 20, 60), (255, 0, 0), ( 0, 0,142), ( 0, 0, 70),
-                ( 0, 60,100), ( 0, 0, 90), ( 0, 0,110), ( 0, 80,100), ( 0, 0,230), (119, 11, 32), ( 0, 0,142)],
-                dtype=np.uint8)
-    else:
-        cmap = np.zeros((N, 3), dtype=np.uint8)
-        for i in range(N):
-            r, g, b = 0, 0, 0
-            id = i
-            # fold the bits of the label id into the R, G and B channels so that
-            # every label receives a distinct, reproducible color
-            for j in range(7):
-                str_id = uint82bin(id)
-                r = r ^ (np.uint8(str_id[-1]) << (7-j))
-                g = g ^ (np.uint8(str_id[-2]) << (7-j))
-                b = b ^ (np.uint8(str_id[-3]) << (7-j))
-                id = id >> 3
-            cmap[i, 0] = r
-            cmap[i, 1] = g
-            cmap[i, 2] = b
-    return cmap
-
-class Colorize(object):
-    def __init__(self, n=35):
-        self.cmap = labelcolormap(n)
-        self.cmap = torch.from_numpy(self.cmap[:n])
-
-    def __call__(self, gray_image):
-        size = gray_image.size()
-        color_image = torch.ByteTensor(3, size[1], size[2]).fill_(0)
-
-        # paint every pixel belonging to a label with that label's palette color
-        for label in range(0, len(self.cmap)):
-            mask = (label == gray_image[0]).cpu()
-            color_image[0][mask] = self.cmap[label][0]
-            color_image[1][mask] = self.cmap[label][1]
-            color_image[2][mask] = self.cmap[label][2]
-
-        return color_image
diff --git a/spaces/zhtet/RegBotBeta/utils/util.py b/spaces/zhtet/RegBotBeta/utils/util.py
deleted file mode 100644
index 131ecf39e8bb03886bd443f785e405166992a341..0000000000000000000000000000000000000000
--- a/spaces/zhtet/RegBotBeta/utils/util.py
+++ /dev/null
@@ -1,15 +0,0 @@
-import requests
-
-
-def validate(token: str):
-    api_endpoint = "https://api.openai.com/v1/chat/completions"
-    api_key = token
-
-    headers = {"Content-Type": "application/json", "Authorization": f"Bearer {api_key}"}
-
-    messages = [{"role": "user", "content": "Say this is a test!"}]
-
-    data = {"model": "gpt-3.5-turbo", "messages": messages}
-
-    response = requests.post(api_endpoint, json=data, headers=headers)
-    return response
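A minimal usage sketch for the validate() helper deleted above. It assumes validate() is in scope; the environment variable name and the printed messages are illustrative assumptions, not part of the original file. validate() returns the raw requests.Response, so the HTTP status code is what signals whether the key works (200 means accepted, 401 means invalid or revoked).

import os

# Hypothetical driver for validate(); assumes the key is exported as OPENAI_API_KEY.
token = os.environ.get("OPENAI_API_KEY", "")
response = validate(token)  # sends one short test completion to the OpenAI API
if response.status_code == 200:
    # key accepted; the reply body carries the model's answer to the test prompt
    print("Key accepted:", response.json()["choices"][0]["message"]["content"])
else:
    # 401 indicates an invalid or revoked key; other codes point at quota or request issues
    print("Key rejected: HTTP", response.status_code)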