diff --git a/spaces/1gistliPinn/ChatGPT4/Examples/Adobe Photoshop Cs 9 Free Free Download.md b/spaces/1gistliPinn/ChatGPT4/Examples/Adobe Photoshop Cs 9 Free Free Download.md
deleted file mode 100644
index 95a05c3f95af80d010f13e994a71ce0fabf5d238..0000000000000000000000000000000000000000
--- a/spaces/1gistliPinn/ChatGPT4/Examples/Adobe Photoshop Cs 9 Free Free Download.md
+++ /dev/null
@@ -1,9 +0,0 @@
-
-
Photoshop is considered one of the most powerful image-editing packages available. It is used by digital photographers, graphic designers, and just about anyone with an interest in editing images, and it can change almost any aspect of a photo.
-Download ►►► https://imgfil.com/2uxX4P
Adobe Photoshop is one of the most popular software programs available. It is used for a variety of tasks, including retouching images, creating graphics, building websites, and much more. Photoshop is easy to use, offers features for editing almost any type of file or project, and can handle enormous files.
-As you know, Photoshop is among the top photo-editing software packages, and it is available for the Mac: a demo version of Photoshop CS6 can be downloaded from the website for Mac OS X. The latest versions of Photoshop Elements and Photoshop CS6 are on sale for $39.99 and $179.99 respectively. If you are not the most technology-savvy person, fear not: the software is fairly simple to use, and it can be installed on Mac OS X 10.6 and up or Windows XP and up.
-The latest versions of Photoshop CS6 (Adobe Photoshop CS6, Adobe Photoshop CS6 Extended, and Adobe Photoshop CS6 Essentials) are on sale for $119.99, $399.99, and $219.99 respectively. They can be downloaded directly from the Adobe website, or you can pay through the Adobe Acrobat Connect site. If you have Adobe Acrobat, you can get the software to run on your computer, and you also get access to Acrobat Reader, which lets you read, save, print, and annotate PDF files. Adobe Photoshop CS6 lets you work and create images on different layers or files, so you can combine, edit, or reposition any number of images and text on layers; you can also crop, adjust, or rotate layer contents and share your finished image or component with others. You can read the help files or simply use the extensive menus to access the software's features. The whole package is free of adware, malware, and spyware.
- 899543212b
-If you are looking for a way to hack into any Facebook account, you have come to the right place. In this article, we will show you how to use 007 Facebook Hack v1.0 with Full Cracked, a powerful and easy-to-use tool that can help you spy on anyone's Facebook activities.
-007 Facebook Hack v1.0 with Full Cracked is a software that allows you to hack any Facebook account by simply entering the email address or username of the target. You don't need to know the password or any other details of the account. The software will automatically retrieve the login information and display it on your screen.
-Download Zip ⚙⚙⚙ https://imgfil.com/2uy0zG
With 007 Facebook Hack v1.0 with Full Cracked, you can access the private messages, photos, videos, friends list, wall posts, comments, likes, groups, events, and more of any Facebook user. You can also change the password, profile picture, status, and other settings of the hacked account.
-There are many reasons why you might want to use 007 Facebook Hack v1.0 with Full Cracked. For example, you might want to:
-Using 007 Facebook Hack v1.0 with Full Cracked is very simple and straightforward. Just follow these steps:
-You can download 007 Facebook Hack v1.0 with Full Cracked for free from the link below. The software is safe and virus-free. It works on Windows XP, Vista, 7, 8, 10 and Mac OS X. It is compatible with all browsers and devices that support Facebook.
-Download 007 Facebook Hack v1.0 with Full Cracked Here
-007 Facebook Hack v1.0 with Full Cracked is a powerful and easy-to-use tool that can help you hack any Facebook account in minutes. You can use it for various purposes such as monitoring, recovering, protecting, or having fun with your Facebook accounts. You can download it for free from the link above and enjoy hacking!
-Before you download and use 007 Facebook Hack v1.0 with Full Cracked, you might be wondering if it is legal or not. The answer is: it depends. Hacking someone's Facebook account without their consent is illegal and unethical in most countries. You could face legal consequences if you are caught or reported by the victim or Facebook. Therefore, we do not recommend or endorse using this tool for malicious purposes.
-However, there are some situations where using 007 Facebook Hack v1.0 with Full Cracked might be legal or acceptable. For example, if you are hacking your own account that you lost access to, or if you have the permission of the account owner to hack their account for educational or testing purposes. In these cases, you are not violating anyone's privacy or rights, and you are using the tool responsibly and ethically.
-007 Facebook Hack v1.0 with Full Cracked has many advantages over other hacking tools available on the internet. Some of them are:
-Despite its many advantages, 007 Facebook Hack v1.0 with Full Cracked also has some disadvantages that you should be aware of before using it. Some of them are:
-Downloading 007 Facebook Hack v1.0 with Full Cracked is very easy and fast. You don't need to register or fill out any surveys to get this tool. You just need to follow these simple steps:
-Note: Some antivirus programs might detect 007 Facebook Hack v1.0 with Full Cracked as a virus or malware. This is a false positive and you can safely ignore it. The software is clean and harmless.
-007 Facebook Hack v1.0 with Full Cracked is constantly updated by its developers to ensure its functionality and compatibility with the latest Facebook updates and security measures. You don't need to manually update this tool as it will automatically check for updates and download them whenever they are available.
-However, if you want to manually check for updates or download the latest version of 007 Facebook Hack v1.0 with Full Cracked, you can do so by following these steps:
-If you want to uninstall 007 Facebook Hack v1.0 with Full Cracked from your computer or device, you can do so by following these steps:
-If you are looking for other ways to hack Facebook accounts, you might want to consider some of the alternatives to 007 Facebook Hack v1.0 with Full Cracked. Some of them are:
-However, these methods are not as easy or reliable as 007 Facebook Hack v1.0 with Full Cracked. They require more time, effort, skill, and resources to execute. They also have more risks and limitations than 007 Facebook Hack v1.0 with Full Cracked.
-Many users have tried and tested 007 Facebook Hack v1.0 with Full Cracked and have shared their positive feedback and reviews about this tool. Here are some of the testimonials from real users who have used 007 Facebook Hack v1.0 with Full Cracked:
---"I was able to hack my girlfriend's Facebook account and found out that she was cheating on me with my best friend. Thanks to 007 Facebook Hack v1.0 with Full Cracked, I was able to confront them and end the relationship." - John, USA
-
--"I forgot my Facebook password and I couldn't access my email or phone number to reset it. I was desperate to get back into my account because I had important messages and photos there. Luckily, I found 007 Facebook Hack v1.0 with Full Cracked and it helped me recover my account in minutes." - Lisa, UK
-
--"I wanted to prank my friend by changing his profile picture and status to something funny. I used 007 Facebook Hack v1.0 with Full Cracked to hack his account and it worked like a charm. He was so confused and angry when he saw his account. It was hilarious." - Kevin, Canada
-
007 Facebook Hack v1.0 with Full Cracked is a powerful and easy-to-use tool that can help you hack any Facebook account in minutes. You can use it for various purposes such as monitoring, recovering, protecting, or having fun with your Facebook accounts. You can download it for free from the link below and enjoy hacking!
-However, you should also be aware of the legal and ethical implications of using this tool. Hacking someone's Facebook account without their consent is illegal and unethical in most cases. You could face legal consequences if you are caught or reported by the victim or Facebook. Therefore, we do not recommend or endorse using this tool for malicious purposes.
-You should also be careful of the risks and dangers of using this tool. Even if you are hacking your own account or someone else's account with their permission, you could still expose yourself or them to potential threats from hackers, scammers, stalkers, or other malicious actors who might try to access or misuse the hacked account.
-Finally, you should also know that this tool is not guaranteed or foolproof. The tool might not work on some accounts that have strong security measures or verification methods in place. The tool might also fail to hack the account if the target changes their password or email address during the hacking process.
-Therefore, you should use this tool responsibly and ethically, and at your own risk. We hope you found this article helpful and informative. If you have any questions or comments, please feel free to leave them below. Happy hacking!
3cee63e6c2
Do you love playing multiplayer games with your friends? Do you want to experience an exciting and addictive game that features four modes, each with unique objectives and strategies? If you answered yes, then you should try Bed Wars. Bed Wars is a mobile game that lets you team up with other players and protect your bed from being destroyed by your enemies. You can also collect resources, build bridges, upgrade weapons, and attack other beds. Sounds fun, right?
-But what if we tell you that you can make this game even more fun by downloading Bed Wars Mod APK? This is a modified version of the game that gives you unlimited money and gcubes, which are the in-game currencies. With these resources, you can buy anything you want in the game without worrying about running out. You can also unlock all the skins, items, maps, modes, and more. This way, you can enjoy Bed Wars to the fullest.
-Download Zip ——— https://urlin.us/2uSU8h
So how do you download Bed Wars Mod APK on your Android device? Don't worry, we got you covered. In this article, we will show you how to download and install this amazing modded game in just a few simple steps. We will also give you some tips on how to play Bed Wars Mod APK and have a blast with your friends. Let's get started!
-Bed Wars is a popular mobile game developed by Blockman GO Studio. It is inspired by the Minecraft mini-game of the same name. The game has four modes: Solo, Duo, Trio, and Squad.
In each mode, you will be assigned to a team with a color. Your team will have a bed that you need to protect from being destroyed by other teams. If your bed is destroyed, you will not be able to respawn and you will be eliminated from the game. The last team standing wins the game.
-To protect your bed, you need to collect resources from the islands. There are three types of resources: iron, gold, and diamonds. Iron and gold can be used to buy items from the shop, such as blocks, weapons, armor, tools, and potions. Diamonds can be used to upgrade your team's abilities, such as sharpness, protection, haste, and heal pool.
-You can also build bridges to connect your island to other islands. This way, you can access more resources, attack other beds, or defend your own bed. But be careful, as other teams can also use your bridges to invade your island. You need to be strategic and cooperative with your teammates to win the game.
-Bed Wars is a fun and addictive game that you can play for hours with your friends. However, it can also be frustrating and challenging if you don't have enough money and gcubes to buy the items and upgrades you need. You might also get bored of playing the same maps and modes over and over again.
-That's why downloading Bed Wars Mod APK is a great idea. This is a modified version of the game that gives you unlimited money and gcubes, which are the in-game currencies. With these resources, you can buy anything you want in the game without worrying about running out. You can also unlock all the skins, items, maps, modes, and more. This way, you can enjoy Bed Wars to the fullest.
-Some of the features of Bed Wars Mod APK are:
-As you can see, Bed Wars Mod APK is a must-have for any fan of the game. It will make your gaming experience more fun and exciting. You will be able to customize your character, equip yourself with the best weapons and armor, explore different maps and modes, and dominate the game with your friends.
-How to install bed wars mod apk on android
-Bed wars mod apk unlimited money and gcubes download
-Bed wars mod apk latest version free download
-How to play bed wars mod apk online with friends
-Bed wars mod apk solo, duo, trio and squad modes
-How to get bed wars mod apk for pc
-Bed wars mod apk hack and cheats
-Bed wars mod apk no root required
-Bed wars mod apk features and gameplay
-How to update bed wars mod apk to the newest version
-Bed wars mod apk review and rating
-Bed wars mod apk download link and instructions
-How to uninstall bed wars mod apk from your device
-Bed wars mod apk tips and tricks
-Bed wars mod apk vs original bed wars game
-How to fix bed wars mod apk not working or crashing
-Bed wars mod apk best strategies and tactics
-Bed wars mod apk compatible devices and requirements
-Bed wars mod apk alternatives and similar games
-How to contact bed wars mod apk developer and support
-How to join bed wars mod apk community and forums
-Bed wars mod apk pros and cons
-Bed wars mod apk bugs and glitches
-Bed wars mod apk custom maps and skins
-How to create your own bed wars mod apk server
-How to backup and restore bed wars mod apk data
-Bed wars mod apk frequently asked questions and answers
-Bed wars mod apk gameplay videos and screenshots
-How to earn free gcubes in bed wars mod apk
-Bed wars mod apk changelog and updates history
-How to report bed wars mod apk issues and feedback
-Bed wars mod apk privacy policy and terms of service
-How to enable bed wars mod apk notifications and permissions
-Bed wars mod apk achievements and leaderboards
-How to share bed wars mod apk with your friends
-Bed wars mod apk size and download time
-How to optimize bed wars mod apk performance and battery usage
-Bed wars mod apk sound effects and music settings
-How to customize bed wars mod apk controls and interface
-Bed wars mod apk languages and translations
Now that you know why you should download Bed Wars Mod APK, you might be wondering how to do it. Don't worry, it's very easy and simple. All you need is an Android device with at least 4 GB of RAM and 100 MB of free storage space. Then, follow these steps:
-The first thing you need to do is to allow unknown apps on your Android device. This means that you will be able to install apps that are not from the Google Play Store. To do this, go to your device settings and look for the security or privacy option. Then, find the unknown sources or install unknown apps option and enable it.
-This will allow you to install Bed Wars Mod APK on your device without any problems. However, make sure that you only download apps from trusted sources and websites. Otherwise, you might end up installing malware or viruses on your device.
-The next thing you need to do is to install an Android file manager app on your device. This is an app that will help you find and manage the files on your device. You will need this app to locate and install the Bed Wars Mod APK file that you will download later.
-There are many file manager apps that you can choose from, such as ES File Explorer, Astro File Manager, or Solid Explorer. You can download any of them from the Google Play Store for free. Once you have installed a file manager app on your device, open it and grant it the necessary permissions.
-The next step is to download the Bed Wars Mod APK file on your Android device. To do this, open your web browser and go to this link: . This is a reputable website where you can download the latest version of Bed Wars Mod APK for free.
-Once you are on the website, scroll down until you see the download button. Tap on it and wait for the download to start. The file size is about 98 MB, so it might take a few minutes depending on your internet speed.
-If you prefer, you can also download the Bed Wars Mod APK file from your computer and transfer it to your Android device via USB cable. This might be faster and more convenient for some users. To do this, follow these steps:
-The final step is to install the Bed Wars Mod APK file on your device. To do this, follow these steps:
-Now that you have installed Bed Wars Mod APK on your device, you might be wondering how to play it. Don't worry, it's very easy and simple. All you need to do is follow these steps:
-Bed Wars is a fun and addictive game that you can play with your friends. However, it can also be frustrating and challenging if you don't have enough money and gcubes to buy the items and upgrades you need. That's why downloading Bed Wars Mod APK is a great idea. This is a modified version of the game that gives you unlimited money and gcubes, which are the in-game currencies. With these resources, you can buy anything you want in the game without worrying about running out. You can also unlock all the skins, items, maps, modes, and more. This way, you can enjoy Bed Wars to the fullest.
-In this article, we showed you how to download and install Bed Wars Mod APK on your Android device in just a few simple steps. We also gave you some tips on how to play Bed Wars Mod APK and have a blast with your friends. We hope that you found this article helpful and informative. If you did, please share it with your friends who might also be interested in playing Bed Wars Mod APK. Thank you for reading!
-A: Yes, Bed Wars Mod APK is safe to download as long as you download it from a trusted source and website. However, make sure that you scan the APK file with an antivirus app before installing it on your device. This way, you can avoid any potential malware or viruses that might harm your device.
-A: No, you don't need to root your device to use Bed Wars Mod APK. This modded game works fine on any Android device without requiring any root access or permissions. Just follow the steps above and enjoy Bed Wars Mod APK without any hassle.
-A: Yes, you can play Bed Wars Mod APK online with other players who are also using the modded version of the game. However, you might not be able to play with players who are using the original version of the game from the Google Play Store. This is because the modded game has different features and settings that might not be compatible with the original game. Therefore, we recommend that you play Bed Wars Mod APK with your friends who are also using the same modded game.
-A: To update Bed Wars Mod APK, you need to download the latest version of the APK file from the same website where you downloaded it before. Then, you need to uninstall the previous version of the game from your device and install the new version. This way, you can enjoy the latest features and improvements of Bed Wars Mod APK.
-A: Here are some tips and tricks for playing Bed Wars Mod APK:
-That's it! You have successfully downloaded and installed Bed Wars Mod APK on your Android device. You have also learned how to play Bed Wars Mod APK and some tips and tricks for playing it. We hope that you enjoyed this article and found it helpful and informative. If you did, please share it with your friends who might also be interested in playing Bed Wars Mod APK. Thank you for reading!
197e85843d
If you are looking for a fun and engaging card game that can challenge your mind and improve your skills, you should try cribbage. Cribbage is a classic card game that has been played for centuries by people of all ages and backgrounds. It is easy to learn, but hard to master, and it offers endless possibilities for strategy and variation. In this article, we will show you how to download and play cribbage games on your iPhone, as well as how to improve your skills and strategy in this fascinating game.
-Cribbage is a card game that originated in England in the 17th century. It was invented by Sir John Suckling, a poet and gambler who modified an older game called Noddy. The game is played with a standard 52-card deck and a special board with holes and pegs that are used to keep score. The objective of the game is to be the first player to reach 121 points by making combinations of cards that add up to 15, pairs, runs, flushes, or nobs (the jack of the same suit as the starter card).
-Download File > https://urlin.us/2uSRUl
The game is played by two or three players, or by four players in two teams. Each player is dealt six cards (five cards in a three-player game) and must discard two cards face down to form the crib, which belongs to the dealer. The non-dealer cuts the deck and reveals the top card, which is called the starter or the cut. The players then take turns playing one card each, starting with the non-dealer, and announcing the running total of the cards' values. The cards are worth their face value, except for face cards which are worth 10, and aces which are worth 1. The player who plays a card that makes the total exactly 15 scores two points, called "fifteen two". The player who plays a card that makes the total 31 scores two points, called "thirty-one for two". If a player cannot play a card without going over 31, they say "go" and the other player continues until they reach 31 or cannot play either. The player who played the last card before a go or 31 scores one point, called "one for last".
-After all the cards have been played, the players count their hands in turn, starting with the non-dealer. The hand consists of four cards plus the starter card. The players score points for any combinations of cards that add up to 15 (two points each), pairs (two points), three of a kind (six points), four of a kind (twelve points), runs (one point per card), flushes (four points if all four cards in your hand are the same suit, or five points if the starter matches as well), and nobs (one point for holding the jack of the same suit as the starter). The dealer then counts their hand, followed by the crib. The crib scores points for 15s, pairs, runs, and nobs, but a crib flush requires all five cards to be of the same suit.
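-
-To make the counting rules concrete, here is a rough Python sketch (illustrative only, not code from any of the apps below) that scores a hand the way described above:
-
-```python
-from itertools import combinations
-
-def value(rank):
-    # counting value: ace is 1, face cards (ranks 11-13) count as 10
-    return min(rank, 10)
-
-def score_hand(hand, starter, is_crib=False):
-    """Score a cribbage hand; cards are (rank, suit) tuples with rank 1..13."""
-    cards = hand + [starter]
-    pts = 0
-    # fifteens: every combination of cards summing to 15 scores 2
-    for n in range(2, 6):
-        for combo in combinations(cards, n):
-            if sum(value(r) for r, _ in combo) == 15:
-                pts += 2
-    # pairs: every two cards of equal rank score 2
-    # (three of a kind = 3 pairs = 6, four of a kind = 6 pairs = 12)
-    pts += 2 * sum(1 for a, b in combinations(cards, 2) if a[0] == b[0])
-    # runs: only the longest runs count, once per distinct way of forming them
-    ranks = [r for r, _ in cards]
-    for n in (5, 4, 3):
-        runs = [c for c in combinations(ranks, n)
-                if sorted(c) == list(range(min(c), min(c) + n))]
-        if runs:
-            pts += n * len(runs)
-            break
-    # flush: four hand cards of one suit score 4, five with the starter score 5;
-    # the crib only scores a five-card flush
-    if len({s for _, s in hand}) == 1:
-        if starter[1] == hand[0][1]:
-            pts += 5
-        elif not is_crib:
-            pts += 4
-    # nobs: holding the jack (rank 11) of the starter's suit
-    if (11, starter[1]) in hand:
-        pts += 1
-    return pts
-
-# the perfect hand: three fives and the jack, starter is the matching five
-print(score_hand([(11, "S"), (5, "H"), (5, "D"), (5, "C")], (5, "S")))  # 29
-```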
The game continues until one player reaches 121 points or more; the first player to reach 121 points wins. Beyond the classic rules, cribbage apps often add variants such as lowball cribbage and back up 10 cribbage, and each mode has its own rules and challenges that will test your skills and strategy.
Ultimate Cribbage is a paid app that you can download from the App Store for $2.99. It is compatible with iPhone, iPad, and iPod touch devices running iOS 9.0 or later.
-Cribbage Classic app for iPad and iPhone
-Ultimate Cribbage: Classic card game with different modes
-Cribbage: The Best Card Game by FIOGONIA LIMITED
-How to play cribbage on your iPhone with friends
-Cribbage tips and tricks to improve your skills
-Best cribbage apps for iPhone in 2023
-Cribbage Club subscription for ad-free gameplay
-Cribbage Pegboard app to track your score
-Cribbage variants: Classic, Muggins, and Shotgun
-Cribbage rules and scoring explained
-Cribbage Classic settings and features
-Ultimate Cribbage: Classic reviews and ratings
-Cribbage: The Best Card Game privacy policy
-Cribbage online tournaments and leaderboards
-Cribbage strategy and tactics guide
-Cribbage Classic discard analyzer bonus feature
-Ultimate Cribbage: Classic daily challenge rewards
-Cribbage: The Best Card Game support and feedback
-Cribbage history and origin
-Cribbage board designs and customizations
-Cribbage Classic update and bug fixes
-Ultimate Cribbage: Classic in-app purchases and prices
-Cribbage: The Best Card Game download size and compatibility
-Cribbage offline mode and solo play
-Cribbage glossary and terminology
-Cribbage Classic statistics and performance tracking
-Ultimate Cribbage: Classic app for Mac with Apple M1 chip or later
-Cribbage: The Best Card Game screenshots and videos
-Cribbage etiquette and manners
-Cribbage fun facts and trivia.
To download and play Ultimate Cribbage on your iPhone, follow these simple steps:
-One of the most important aspects of cribbage is discarding and pegging. Discarding is the process of choosing which two cards to put in the crib at the beginning of each deal. Pegging is the process of playing cards during the play phase of each deal. Here are some tips and tricks for discarding and pegging:
-If you want to improve your discarding skills in cribbage, you can use the discard analyzer feature found in some cribbage apps. This tool helps you decide which cards to discard to the crib based on the expected value of each possible combination: the average number of points you can expect to score from your hand and the crib once the starter card is revealed. The higher the expected value, the better the combination. To use the discard analyzer feature, follow these simple steps:
The discard analyzer feature is a useful tool that can help you improve your discarding skills in cribbage, but it is not a substitute for your own judgment and intuition. You should also consider other factors, such as your opponent's skill level, your position on the board, and your personal preference, when deciding which cards to discard to the crib.
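-
-Under the hood, an analyzer like this can be approximated by brute force: for each of the 15 ways to split the six dealt cards into a kept hand and a discard, average the hand score over every unseen starter. A minimal sketch, reusing the score_hand function from the earlier example (it ignores the crib's expected value, which a real analyzer also estimates):
-
-```python
-def best_keep(six_cards):
-    """Return (expected points, kept cards) for the best 4-card keep,
-    averaging score_hand over all 46 unseen starter cards."""
-    deck = [(r, s) for r in range(1, 14) for s in "SHDC"]
-    unseen = [c for c in deck if c not in six_cards]
-    splits = []
-    for keep in combinations(six_cards, 4):
-        avg = sum(score_hand(list(keep), st) for st in unseen) / len(unseen)
-        splits.append((avg, keep))
-    return max(splits)
-```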
-Another way to improve your skills and strategy in cribbage is to practice and learn from other players online. Playing online can expose you to different styles and strategies of cribbage, as well as give you feedback and tips on how to play better. Here are some ways to practice and learn from other players online:
-Practicing and learning from other players online can help you improve your skills and strategy in cribbage, but it is not a substitute for your own experience and practice. You should also play offline with real cards and boards, as well as read books and articles about cribbage.
-Cribbage is a classic card game that can provide you with hours of fun and entertainment, as well as improve your mental skills and abilities. It is a game that you can play anytime, anywhere, and with anyone. In this article, we have shown you how to download and play cribbage games on your iPhone, as well as how to improve your skills and strategy in this fascinating game. We hope that you have enjoyed reading this article and that you have learned something new about cribbage. Now go ahead and download one of the cribbage apps we have recommended and start playing this amazing game!
-Here are some frequently asked questions about cribbage:
-The best hand in cribbage is worth 29 points: three fives and a jack in your hand, with the fourth five, of the same suit as the jack, as the starter card. This hand scores 16 points for eight 15s (four combinations of three fives, plus the jack paired with each of the four fives), 12 points for six pairs of fives, and one point for nobs (the jack of the same suit as the starter), for a total of 16 + 12 + 1 = 29. It is extremely rare.
-A muggins in cribbage is a rule that allows a player to claim any points that their opponent has missed or miscalculated during the scoring phase of each deal. For example, if a player counts their hand as 10 points, but their opponent notices that they actually have 12 points, the opponent can say "muggins" and take the extra two points for themselves. The muggins rule is optional and can be agreed upon or declined by the players before the game starts.
-A skunk in cribbage is when a player wins the game by 31 or more points over their opponent. A double skunk in cribbage is when a player wins the game by 61 or more points over their opponent. A skunk and a double skunk are considered to be humiliating defeats for the losing player, and they usually result in extra penalties or rewards for the winning player. For example, some players may agree to double or quadruple the stakes of the game if a skunk or a double skunk occurs.
-A cribbage deck is a standard 52-card deck: 13 ranks (ace, 2, 3, 4, 5, 6, 7, 8, 9, 10, jack, queen, and king) in each of the four suits (clubs, diamonds, hearts, and spades), i.e., four cards of each rank. However, some variations of cribbage may use different decks, such as a 48-card deck (without the 10s) or a 32-card deck (without the 2s, 3s, 4s, and 5s).
-To shuffle and cut the cards in cribbage, follow these simple steps:
-Some common terms and phrases used in cribbage are:
-If you are a fan of chess and want to enjoy a more immersive and realistic experience, you might want to try playing 3D chess on your PC. 3D chess is a variation of the classic board game that uses three-dimensional graphics and animations to simulate a real chessboard. You can play against the computer, online opponents, or even friends in local multiplayer mode. In this article, we will show you how to download and install 3D chess game for PC, and review some of the best options available on the market.
-3D chess is a type of chess game that uses three-dimensional models and effects to create a more realistic and engaging gameplay. Unlike traditional chess games that use flat images or icons, 3D chess games allow you to see the pieces from different angles, zoom in and out, rotate the board, and enjoy various visual effects. Some 3D chess games also have different themes, backgrounds, sounds, and music to enhance the atmosphere.
-Download ->>->>->> https://urlin.us/2uSZuw
Playing 3D chess on PC has several advantages over playing it on other devices. First of all, you can enjoy better graphics quality and performance on a larger screen. Second, you can use your mouse and keyboard to control the game more easily and precisely. Third, you can access a wider range of options and features, such as online multiplayer, puzzles, rankings, achievements, etc. Fourth, you can save money by downloading free or cheap games from online platforms.
-There are many ways to download and install 3D chess game for PC, but we will focus on three of the most popular and reliable ones: Chess! on Steam, 3D Chess Game on Microsoft Store, and 3D Chess on Steam. We will compare their features, pros and cons, and how to get them in the following sections.
-Chess! is an upcoming 3D chess game that is expected to be released in Q2 2023. It is developed by Exeter Game Studios and published by familyplay. It is built with Unreal Engine 5 and integrated with Lichess, one of the largest online chess platforms in the world. Here are some of its features:
-Relaxing scenes and sounds: Choose from a variety of relaxing scenes and sounds to create the perfect ambiance for your chess game. Whether you prefer a cozy fireplace, a tranquil garden, or a futuristic cityscape, you can find the ideal setting for your mood and style.
The pros of Chess! are:
-The cons of Chess! are:
-To get Chess!, you need to have a Steam account and a PC that meets the minimum system requirements. You can pre-order the game on Steam for $9.99 and get access to it as soon as it is released. You can also follow the game's development updates on its official website or social media accounts.
-How to install 3D Chess Game on Windows PC or Mac[^1^]
-Chess! an immersive 3D chess game with Lichess integration[^2^]
-Get 3D Chess Game from Microsoft Store for free[^3^]
-3D Chess a unique chess trip with instant duels on Steam[^4^]
-Best 3D chess games for PC in 2023
-Download 3D Chess Game APK for Android devices
-Play 3D Chess online with friends or strangers
-Learn chess with 3D Chess Game puzzles and challenges
-Compare 3D Chess Game with other chess apps and software
-3D Chess Game reviews and ratings from users and experts
-How to uninstall 3D Chess Game from your PC or Mac
-3D Chess Game tips and tricks to improve your skills
-How to customize your board and pieces in 3D Chess Game
-How to play 3D Chess Game offline or without internet connection
-How to solve common issues and errors in 3D Chess Game
-How to update 3D Chess Game to the latest version
-How to use the free flying camera in 3D Chess Game
-How to play against advanced AI in 3D Chess Game
-How to participate in ranked matchmaking in 3D Chess Game
-How to track your online ELO in 3D Chess Game
-How to sign up and login to Lichess account in 3D Chess Game
-How to donate to Lichess charity organization in 3D Chess Game
-How to enjoy lifelike textures and realistic lighting in 3D Chess Game
-How to switch between different scenes and locations in 3D Chess Game
-How to play 3D Chess Game on HoloLens or other VR devices
-How to download and play 3D Chess Game on Linux or Ubuntu
-How to stream or record your gameplay of 3D Chess Game
-How to join or create a chess club in 3D Chess Game
-How to chat or communicate with other players in 3D Chess Game
-How to report or block abusive or cheating players in 3D Chess Game
-How to access the vast library of offline puzzles in 3D Chess Game
-How to share your achievements and scores of 3D Chess Game on social media
-How to find and play with your friends in 3D Chess Game
-How to change the language or sound settings in 3D Chess Game
-How to enable or disable the relaxing music in 3D Chess Game
-How to use wildcard characters or anagrams in 3D Chess Game
-How to checkmate your opponent with only a few moves in 3D Chess Game
-How to learn from the best with expertly curated chess challenges in 3D Chess Game
-How to play different variants of chess such as blitz, bullet, rapid, etc. in 3D Chess Game
-How to watch live games or tournaments of professional chess players in 3D Chess Game
-How to use the dictionary feature in 3D Chess Game for definitions and synonyms of chess terms
-How to play the piano or other musical instruments in 3D Chess Game for fun or relaxation
-How to use the Phoenix Force feature in 3D Chess Game for a fiery and explosive gameplay
-How to climb up and overcome increasing challenges in Upward mode of 3D Chess Game
-How to use the speech function in 3D Chess Game for correct pronunciation of chess moves and names
-How to see your word history or make your own list of favorite words in Dictionary mode of 3D Chess Game
-How to get the word of the day with interesting and entertaining words in Dictionary mode of 3D Chess Game
-How to use the solar physics feature in 3D Chess Game for learning about the Sun and its layers
-How to witness the power of Unreal Engine 5 in transforming the classic game of chess into a breathtaking visual spectacle
3D Chess Game is a free 3D chess game that is available on Microsoft Store. It is developed by A Trillion Games Ltd and has over 10 million downloads. It is designed for Windows 10 devices, including PCs, tablets, and phones. Here are some of its features:
-The pros of 3D Chess Game are:
-The cons of 3D Chess Game are:
-To get 3D Chess Game, you need to have a Microsoft account and a Windows 10 device that meets the minimum system requirements. You can download the game from Microsoft Store for free and start playing it right away. You can also rate and review the game on the store page or contact the developer for feedback or support.
-3D Chess is another 3D chess game that is available on Steam. It is developed by Bumblebee Games Studio Ltd. It was released in 2016 and has over 1000 reviews. It is designed for Windows PCs only. Here are some of its features:
-Cinematic camera: Enjoy a cinematic camera that follows the action on the board. You can also adjust the camera angle, zoom, and rotation to get the best view of the game.
The pros of 3D Chess are:
-The cons of 3D Chess are:
-To get 3D Chess, you need to have a Steam account and a Windows PC that meets the minimum system requirements. You can buy the game on Steam for $4.99 and download it to your PC. You can also check out the game's trailer, screenshots, and reviews on its Steam page or official website.
-To help you decide which 3D chess game for PC is best for you, we have created a comparison table that summarizes the main features, pros and cons, and prices of the three options we have reviewed. You can see the table below:
-| Feature | Chess! | 3D Chess Game | 3D Chess |
-| --- | --- | --- | --- |
-| Graphics quality | Excellent | Mediocre | Good |
-| Online platform | Lichess | None | Steam |
-| Game modes | AI, online, puzzles | AI, online, local | AI, online, local |
-| Customization options | Scenes, sounds | Board, pieces, background | Board, pieces |
-| Statistics and achievements | Yes | Yes | Yes |
-| Price | $9.99 (pre-order) | Free | $4.99 |
-| Pros | Stunning graphics; Lichess integration; wide range of difficulty levels and puzzles; relaxing scenes and sounds | Free; compatible with Windows 10 devices; simple and intuitive interface; multiple game modes and customizable options | Detailed graphics and realistic shadows; cinematic camera; single-player and multiplayer modes; Steam features |
-| Cons | Not yet released; might require a high-end PC; might not be compatible with older systems or devices | Mediocre graphics; limited online features; occasional bugs or errors | Not free; only compatible with Windows PCs; some negative reviews |
-
-In this article, we have shown you how to download and install 3D chess game for PC, and reviewed some of the best options available on the market. We have compared their features, pros and cons, and prices in a comparison table. We have also explained what 3D chess is and why playing it on PC has several advantages over playing it on other devices.
-Based on our analysis, we recommend Chess! as the best option for downloading 3D chess game for PC. It offers stunning graphics, Lichess integration, wide range of difficulty levels and puzzles, relaxing scenes and sounds, and more. It is also reasonably priced at $9.99 for pre-ordering. However, if you are looking for a free or simpler option, you can also try 3D Chess Game or 3D Chess on Microsoft Store or Steam respectively.
-If you are interested in playing 3D chess on your PC, you can follow the links below to get your preferred option: - [Chess! on Steam] - [3D Chess Game on Microsoft Store] - [3D Chess on Steam] We hope you enjoyed this article and found it helpful. If you have any questions or feedback, please feel free to leave a comment below. Thank you for reading!
Here are some frequently asked questions about downloading 3D chess game for PC:
-Playing 3D chess on PC has several benefits, such as better graphics quality and performance, easier and more precise control, wider range of options and features, and saving money by downloading free or cheap games.
-3D chess is a variation of the classic board game that uses three-dimensional graphics and animations to simulate a real chessboard. It allows you to see the pieces from different angles, zoom in and out, rotate the board, and enjoy various visual effects. Some 3D chess games also have different themes, backgrounds, sounds, and music to enhance the atmosphere.
-The minimum system requirements for playing 3D chess on PC vary depending on the game you choose. However, a general guideline is that you need a Windows PC with at least 4 GB of RAM, 2 GB of disk space, a dual-core processor, and a graphics card that supports DirectX 11 or higher.
-You can improve your skills in 3D chess by practicing regularly, playing against different opponents, solving puzzles, learning from tutorials or guides, watching videos or streams of other players, and joining online communities or forums.
-You can find more information or support about 3D chess games by visiting their official websites or social media accounts, reading their reviews or ratings on online platforms, contacting their developers or publishers, or asking other players or experts.
-
-
-
-
- How to Download Music from YouTube with 9xbuddy
-Do you love listening to music on YouTube but wish you could save it offline? Do you want to enjoy your favorite songs without ads or interruptions? Do you want to convert YouTube videos into MP3 files easily and quickly?
-If you answered yes to any of these questions, then you need to try 9xbuddy. It is a powerful online tool that lets you download music from YouTube in a matter of seconds. In this article, we will show you how to use 9xbuddy to download music from YouTube, as well as some tips and tricks for making the most of it. We will also compare it with some alternatives that you can try if you want more options.
-9xbuddy music download
DOWNLOAD ↔ https://jinyurl.com/2uNNUt
What is 9xbuddy?
-9xbuddy is a free online service that allows you to download any video or audio from any website, including YouTube, Facebook, Instagram, Twitter, Vimeo, Dailymotion, SoundCloud, and more. You can use it to download music from YouTube in MP3 format, as well as other formats like MP4, WEBM, M4A, and more. You can also choose the quality of the download, from low to high. 9xbuddy is fast, easy, and reliable. You don't need to install any software or register an account. You just need to copy and paste the URL of the video or audio you want to download and click on the download button. 9xbuddy will do the rest for you.
- Why Use 9xbuddy to Download Music from YouTube?
-There are many reasons why you might want to use 9xbuddy to download music from YouTube. Here are some of them:
-
-- You can save your favorite songs offline and listen to them anytime, anywhere, without internet connection or data charges.
-- You can avoid annoying ads or interruptions that might ruin your listening experience.
-- You can create your own playlists and mixtapes with the songs you download.
-- You can transfer the songs to other devices or platforms, such as your phone, tablet, computer, MP3 player, car stereo, etc.
-- You can edit or remix the songs with other tools or software.
-- You can share the songs with your friends or family via email, social media, Bluetooth, etc.
-
-As you can see, using 9xbuddy to download music from YouTube can give you a lot of benefits and convenience. It can also save you time and money. So why not give it a try?
- How to Use 9xbuddy to Download Music from YouTube?
-Using 9xbuddy to download music from YouTube is very simple and straightforward. You just need to follow these four steps:
-9xbuddy online video downloader
-9xbuddy mp3 converter
-9xbuddy youtube to mp3
-9xbuddy soundcloud downloader
-9xbuddy alternative sites
-9xbuddy facebook video download
-9xbuddy twitter video download
-9xbuddy dailymotion video download
-9xbuddy instagram video download
-9xbuddy tiktok video download
-9xbuddy vimeo video download
-9xbuddy spotify music download
-9xbuddy apple music download
-9xbuddy amazon music download
-9xbuddy deezer music download
-9xbuddy tidal music download
-9xbuddy pandora music download
-9xbuddy audiomack music download
-9xbuddy bandcamp music download
-9xbuddy soundclick music download
-9xbuddy mixcloud music download
-9xbuddy reverbnation music download
-9xbuddy datpiff music download
-9xbuddy jamendo music download
-9xbuddy beatport music download
-9xbuddy jiosaavn music download
-9xbuddy gaana music download
-9xbuddy hungama music download
-9xbuddy wynk music download
-9xbuddy shazam music download
-9xbuddy musixmatch music download
-9xbuddy tunein music download
-9xbuddy iheartradio music download
-9xbuddy last.fm music download
-9xbuddy napster music download
-9xbuddy yandex.music download
-9xbuddy qqmusic download
-9xbuddy netease cloud music download
-9xbuddy xiami music download
-9xbuddy kuwo music download
-9xbuddy kugou music download
-9xbuddy migu music download
-9xbuddy melon music download
-9xbuddy bugs music download
-9xbuddy genie music download
-9xbuddy flo music download
-9xbuddy vibe music download
- Step 1: Copy the YouTube Video URL
-The first thing you need to do is to find the YouTube video that contains the music you want to download. You can use the YouTube app or website to search for it. Once you find it, you need to copy its URL. The URL is the web address that appears in the address bar of your browser or app. For example, the URL of this video is https://www.youtube.com/watch?v=kJQP7kiw5Fk. To copy it, you can either right-click on it and select "Copy" or highlight it and press Ctrl+C on your keyboard (or Command+C on Mac).
- Step 2: Paste the URL into 9xbuddy
-The next thing you need to do is to go to the 9xbuddy website: https://9xbuddy.org/. You will see a search box where you can paste the URL of the YouTube video. To paste it, you can either right-click on the box and select "Paste" or click on the box and press Ctrl+V on your keyboard (or Command+V on Mac). Then, click on the "Download" button next to the box.
- Step 3: Choose the MP3 Format and Quality
-After clicking on the "Download" button, 9xbuddy will analyze the URL and show you a list of available formats and qualities for downloading. You will see options like MP4 (video), WEBM (video), M4A (audio), MP3 (audio), etc. To download music from YouTube, choose the MP3 format. You can also choose the quality of the MP3 file, from low (64 kbps) to high (320 kbps): the higher the quality, the larger the file size and the better the sound. To choose the format and quality, just click on them.
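-
-As a rough rule of thumb, file size is bitrate × duration ÷ 8 (converting bits to bytes): a four-minute song at 320 kbps comes to about 320,000 × 240 / 8 ≈ 9.6 MB, while the same song at 64 kbps is only about 1.9 MB.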
- Step 4: Download the MP3 File
-The final step is to download the MP3 file to your device or cloud storage. You will see a green "Download Now" button next to the format and quality you chose. Click on it and a new tab will open with a countdown timer. Wait for a few seconds until the timer reaches zero and then click on the "Download" button that appears. The MP3 file will start downloading automatically. You can check the progress of the download in your browser or app. Once the download is complete, you can find the MP3 file in your default download folder or location. You can also rename or move it as you wish.
- Tips and Tricks for Using 9xbuddy
-
To make your experience with 9xbuddy even better, here are some tips and tricks that you can use:
- Tip 1: Use the Bookmarklet or Extension
-If you want to download music from YouTube faster and easier, you can use the bookmarklet or extension that 9xbuddy offers. The bookmarklet is a small piece of code that you can drag and drop to your browser's bookmarks bar. The extension is a small program that you can install to your browser. Both of them allow you to download music from YouTube with just one click, without having to copy and paste the URL or go to the 9xbuddy website. To use the bookmarklet or extension, you need to go to this page: https://9xbuddy.org/tools and follow the instructions there.
- Tip 2: Use the Batch Download Feature
-If you want to download multiple music files at once, you can use the batch download feature that 9xbuddy offers. This feature allows you to enter multiple URLs in one search box and download them all in one go. To use the batch download feature, you need to go to this page: https://9xbuddy.org/batch and follow the instructions there.
- Tip 3: Use the Playlist Download Feature
-If you want to download an entire playlist from YouTube, you can use the playlist download feature that 9xbuddy offers. This feature allows you to enter the URL of a YouTube playlist and download all the videos or audios in it in one go. To use the playlist download feature, you need to go to this page: https://9xbuddy.org/playlist and follow the instructions there.
- Alternatives to 9xbuddy
-Although 9xbuddy is a great tool for downloading music from YouTube, it is not the only one. There are some other websites or tools that can also do the same job. Here are some of them:
- Alternative 1: YTMP3
-YTMP3 is a simple and fast online service that allows you to convert and download YouTube videos into MP3 or MP4 files. You can use it to download music from YouTube in high quality (up to 320 kbps) and without any limitations. You don't need to install any software or register an account. You just need to copy and paste the URL of the YouTube video and click on the convert button. YTMP3 will do the rest for you. You can access YTMP3 here: https://ytmp3.cc/.
- Alternative 2: Snappea
-Snappea is a versatile and powerful online tool that allows you to download videos and audios from various websites, including YouTube, Facebook, Instagram, TikTok, Dailymotion, etc. You can use it to download music from YouTube in various formats (MP3, MP4, M4A, etc.) and qualities (from 144p to 1080p). You don't need to install any software or register an account. You just need to copy and paste the URL of the video or audio and click on the download button. Snappea will do the rest for you. You can access Snappea here: https://www.snappea.com/.
- Alternative 3: MP3FY
-MP3FY is a fast and easy online service that allows you to convert and download any video or audio from any website into MP3 files. You can use it to download music from YouTube in high quality (up to 320 kbps) and without any restrictions. You don't need to install any software or register an account. You just need to copy and paste the URL of the video or audio and click on the convert button. MP3FY will do the rest for you. You can access MP3FY here: https://mp3fy.com/.
- Conclusion
-In conclusion, downloading music from YouTube with 9xbuddy is a simple and convenient way to enjoy your favorite songs offline. You just need to follow four easy steps: copy the URL of the YouTube video, paste it into 9xbuddy, choose the MP3 format and quality, and download the file. You can also use some tips and tricks to enhance your experience with 9xbuddy, such as using the bookmarklet or extension, using the batch download feature, or using the playlist download feature. If you want more options, you can also try some alternatives to 9xbuddy, such as YTMP3, Snappea, or MP3FY.
We hope this article has helped you learn how to download music from YouTube with 9xbuddy. Now you can enjoy your favorite songs anytime, anywhere, without any hassle. If you have any questions or feedback, please feel free to leave a comment below. We would love to hear from you.
- FAQs
-Here are some frequently asked questions and answers about 9xbuddy and downloading music from YouTube:
- Q: Is 9xbuddy safe and legal?
-A: 9xbuddy is safe and legal to use, as long as you use it for personal and non-commercial purposes. 9xbuddy does not host or store any content on its servers. It only acts as a mediator between the user and the source website. However, you should always respect the intellectual property rights of the original creators and owners of the content. You should not download or distribute any content that is protected by copyright or other laws.
- Q: How long does it take to download music from YouTube with 9xbuddy?
-A: The time it takes to download music from YouTube with 9xbuddy depends on several factors, such as the length and quality of the video, the speed of your internet connection, and the traffic on the website. Generally, it takes a few seconds to a few minutes to download a music file from YouTube with 9xbuddy.
- Q: How many music files can I download from YouTube with 9xbuddy?
-A: There is no limit to how many music files you can download from YouTube with 9xbuddy. You can download as many as you want, as long as you have enough space on your device or cloud storage. However, you should be mindful of the bandwidth and data usage that downloading music files can consume.
- Q: Can I download music from other websites besides YouTube with 9xbuddy?
-A: Yes, you can download music from other websites besides YouTube with 9xbuddy. 9xbuddy supports over 1000 websites, including Facebook, Instagram, Twitter, Vimeo, Dailymotion, SoundCloud, and more. You can use the same steps as downloading music from YouTube with 9xbuddy.
- Q: Can I download music from YouTube with 9xbuddy on my mobile device?
-A: Yes, you can download music from YouTube with 9xbuddy on your mobile device. 9xbuddy is compatible with all devices and platforms, including Android, iOS, Windows, Mac, Linux, etc. You can use any browser or app that supports web browsing to access 9xbuddy and download music from YouTube.
401be4b1e0
-
-
\ No newline at end of file
diff --git a/spaces/232labs/VToonify/vtoonify/model/dualstylegan.py b/spaces/232labs/VToonify/vtoonify/model/dualstylegan.py
deleted file mode 100644
index 60d9850ad049a2751781871d6ae0c2779ecc863f..0000000000000000000000000000000000000000
--- a/spaces/232labs/VToonify/vtoonify/model/dualstylegan.py
+++ /dev/null
@@ -1,203 +0,0 @@
-import random
-import torch
-from torch import nn
-from model.stylegan.model import ConvLayer, PixelNorm, EqualLinear, Generator
-
-class AdaptiveInstanceNorm(nn.Module):
- def __init__(self, fin, style_dim=512):
- super().__init__()
-
- self.norm = nn.InstanceNorm2d(fin, affine=False)
- self.style = nn.Linear(style_dim, fin * 2)
-
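-        # bias initialization: gamma starts at 1 and beta at 0, so the layer is initially an identity transform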
- self.style.bias.data[:fin] = 1
- self.style.bias.data[fin:] = 0
-
- def forward(self, input, style):
- style = self.style(style).unsqueeze(2).unsqueeze(3)
- gamma, beta = style.chunk(2, 1)
- out = self.norm(input)
- out = gamma * out + beta
- return out
-
-# modulative residual blocks (ModRes)
-class AdaResBlock(nn.Module):
- def __init__(self, fin, style_dim=512, dilation=1): # modified
- super().__init__()
-
- self.conv = ConvLayer(fin, fin, 3, dilation=dilation) # modified
- self.conv2 = ConvLayer(fin, fin, 3, dilation=dilation) # modified
- self.norm = AdaptiveInstanceNorm(fin, style_dim)
- self.norm2 = AdaptiveInstanceNorm(fin, style_dim)
-
- # model initialization
- # the convolution filters are set to values close to 0 to produce negligible residual features
- self.conv[0].weight.data *= 0.01
- self.conv2[0].weight.data *= 0.01
-
- def forward(self, x, s, w=1):
- skip = x
- if w == 0:
- return skip
- out = self.conv(self.norm(x, s))
- out = self.conv2(self.norm2(out, s))
- out = out * w + skip
- return out
-
-class DualStyleGAN(nn.Module):
- def __init__(self, size, style_dim, n_mlp, channel_multiplier=2, twoRes=True, res_index=6):
- super().__init__()
-
- layers = [PixelNorm()]
- for i in range(n_mlp-6):
- layers.append(EqualLinear(512, 512, lr_mul=0.01, activation="fused_lrelu"))
- # color transform blocks T_c
- self.style = nn.Sequential(*layers)
- # StyleGAN2
- self.generator = Generator(size, style_dim, n_mlp, channel_multiplier)
- # The extrinsic style path
- self.res = nn.ModuleList()
- self.res_index = res_index//2 * 2
- self.res.append(AdaResBlock(self.generator.channels[2 ** 2])) # for conv1
- for i in range(3, self.generator.log_size + 1):
- out_channel = self.generator.channels[2 ** i]
- if i < 3 + self.res_index//2:
- # ModRes
- self.res.append(AdaResBlock(out_channel))
- self.res.append(AdaResBlock(out_channel))
- else:
- # structure transform block T_s
- self.res.append(EqualLinear(512, 512))
- # FC layer is initialized with identity matrices, meaning no changes to the input latent code
- self.res[-1].weight.data = torch.eye(512) * 512.0**0.5 + torch.randn(512, 512) * 0.01
- self.res.append(EqualLinear(512, 512))
- self.res[-1].weight.data = torch.eye(512) * 512.0**0.5 + torch.randn(512, 512) * 0.01
- self.res.append(EqualLinear(512, 512)) # for to_rgb7
- self.res[-1].weight.data = torch.eye(512) * 512.0**0.5 + torch.randn(512, 512) * 0.01
- self.size = self.generator.size
- self.style_dim = self.generator.style_dim
- self.log_size = self.generator.log_size
- self.num_layers = self.generator.num_layers
- self.n_latent = self.generator.n_latent
- self.channels = self.generator.channels
-
- def forward(
- self,
- styles, # intrinsic style code
- exstyles, # extrinsic style code
- return_latents=False,
- return_feat=False,
- inject_index=None,
- truncation=1,
- truncation_latent=None,
- input_is_latent=False,
- noise=None,
- randomize_noise=True,
- z_plus_latent=False, # intrinsic style code is z+ or z
- use_res=True, # whether to use the extrinsic style path
- fuse_index=18, # layers > fuse_index do not use the extrinsic style path
- interp_weights=[1]*18, # weight vector for style combination of two paths
- ):
-
- if not input_is_latent:
- if not z_plus_latent:
- styles = [self.generator.style(s) for s in styles]
- else:
- styles = [self.generator.style(s.reshape(s.shape[0]*s.shape[1], s.shape[2])).reshape(s.shape) for s in styles]
-
- if noise is None:
- if randomize_noise:
- noise = [None] * self.generator.num_layers
- else:
- noise = [
- getattr(self.generator.noises, f"noise_{i}") for i in range(self.generator.num_layers)
- ]
-
- if truncation < 1:
- style_t = []
-
- for style in styles:
- style_t.append(
- truncation_latent + truncation * (style - truncation_latent)
- )
-
- styles = style_t
-
- if len(styles) < 2:
- inject_index = self.generator.n_latent
-
- if styles[0].ndim < 3:
- latent = styles[0].unsqueeze(1).repeat(1, inject_index, 1)
-
- else:
- latent = styles[0]
-
- else:
- if inject_index is None:
- inject_index = random.randint(1, self.generator.n_latent - 1)
-
- if styles[0].ndim < 3:
- latent = styles[0].unsqueeze(1).repeat(1, inject_index, 1)
- latent2 = styles[1].unsqueeze(1).repeat(1, self.generator.n_latent - inject_index, 1)
-
- latent = torch.cat([latent, latent2], 1)
- else:
- latent = torch.cat([styles[0][:,0:inject_index], styles[1][:,inject_index:]], 1)
-
- if use_res:
- if exstyles.ndim < 3:
- resstyles = self.style(exstyles).unsqueeze(1).repeat(1, self.generator.n_latent, 1)
- adastyles = exstyles.unsqueeze(1).repeat(1, self.generator.n_latent, 1)
- else:
- nB, nL, nD = exstyles.shape
- resstyles = self.style(exstyles.reshape(nB*nL, nD)).reshape(nB, nL, nD)
- adastyles = exstyles
-
- out = self.generator.input(latent)
- out = self.generator.conv1(out, latent[:, 0], noise=noise[0])
- if use_res and fuse_index > 0:
- out = self.res[0](out, resstyles[:, 0], interp_weights[0])
-
- skip = self.generator.to_rgb1(out, latent[:, 1])
- i = 1
- for conv1, conv2, noise1, noise2, to_rgb in zip(
- self.generator.convs[::2], self.generator.convs[1::2], noise[1::2], noise[2::2], self.generator.to_rgbs):
- if use_res and fuse_index >= i and i > self.res_index:
- out = conv1(out, interp_weights[i] * self.res[i](adastyles[:, i]) +
- (1-interp_weights[i]) * latent[:, i], noise=noise1)
- else:
- out = conv1(out, latent[:, i], noise=noise1)
- if use_res and fuse_index >= i and i <= self.res_index:
- out = self.res[i](out, resstyles[:, i], interp_weights[i])
- if use_res and fuse_index >= (i+1) and i > self.res_index:
- out = conv2(out, interp_weights[i+1] * self.res[i+1](adastyles[:, i+1]) +
- (1-interp_weights[i+1]) * latent[:, i+1], noise=noise2)
- else:
- out = conv2(out, latent[:, i + 1], noise=noise2)
- if use_res and fuse_index >= (i+1) and i <= self.res_index:
- out = self.res[i+1](out, resstyles[:, i+1], interp_weights[i+1])
- if use_res and fuse_index >= (i+2) and i >= self.res_index-1:
- skip = to_rgb(out, interp_weights[i+2] * self.res[i+2](adastyles[:, i+2]) +
- (1-interp_weights[i+2]) * latent[:, i + 2], skip)
- else:
- skip = to_rgb(out, latent[:, i + 2], skip)
- i += 2
- if i > self.res_index and return_feat:
- return out, skip
-
- image = skip
-
- if return_latents:
- return image, latent
-
- else:
- return image, None
-
- def make_noise(self):
- return self.generator.make_noise()
-
- def mean_latent(self, n_latent):
- return self.generator.mean_latent(n_latent)
-
- def get_latent(self, input):
- return self.generator.style(input)
\ No newline at end of file
diff --git a/spaces/232labs/VToonify/vtoonify/model/raft/core/__init__.py b/spaces/232labs/VToonify/vtoonify/model/raft/core/__init__.py
deleted file mode 100644
index e69de29bb2d1d6434b8b29ae775ad8c2e48c5391..0000000000000000000000000000000000000000
diff --git a/spaces/ANDRYHA/FakeNewsClassifier/app.py b/spaces/ANDRYHA/FakeNewsClassifier/app.py
deleted file mode 100644
index 9d993f78b38fba1fa0c1c44aae78746972aa4e65..0000000000000000000000000000000000000000
--- a/spaces/ANDRYHA/FakeNewsClassifier/app.py
+++ /dev/null
@@ -1,71 +0,0 @@
-from transformers import FSMTForConditionalGeneration, FSMTTokenizer
-from transformers import AutoModelForSequenceClassification
-from transformers import AutoTokenizer
-from langdetect import detect
-from newspaper import Article
-from PIL import Image
-import streamlit as st
-import requests
-import torch
-
-st.markdown("## Prediction of Fakeness by Given URL")
-background = Image.open('logo.jpg')
-st.image(background)
-
-st.markdown(f"### Article URL")
-text = st.text_area("Insert an article URL here",
- value="https://en.globes.co.il/en/article-yandex-looks-to-expand-activities-in-israel-1001406519")
-
-@st.cache(allow_output_mutation=True)
-def get_models_and_tokenizers():
- model_name = 'distilbert-base-uncased-finetuned-sst-2-english'
- model = AutoModelForSequenceClassification.from_pretrained(model_name, num_labels=2)
- model.eval()
- tokenizer = AutoTokenizer.from_pretrained(model_name)
- model.load_state_dict(torch.load('./model.pth', map_location='cpu'))
-
- model_name_translator = "facebook/wmt19-ru-en"
- tokenizer_translator = FSMTTokenizer.from_pretrained(model_name_translator)
- model_translator = FSMTForConditionalGeneration.from_pretrained(model_name_translator)
- model_translator.eval()
- return model, tokenizer, model_translator, tokenizer_translator
-
-model, tokenizer, model_translator, tokenizer_translator = get_models_and_tokenizers()
-
-article = Article(text)
-article.download()
-article.parse()
-concated_text = article.title + '. ' + article.text
-lang = detect(concated_text)
-
-st.markdown(f"### Language detection")
-
-if lang == 'ru':
-    st.markdown(f"The language of this article is {lang.upper()}, so we translated it!")
- with st.spinner('Waiting for translation'):
- input_ids = tokenizer_translator.encode(concated_text,
- return_tensors="pt", max_length=512, truncation=True)
- outputs = model_translator.generate(input_ids)
- decoded = tokenizer_translator.decode(outputs[0], skip_special_tokens=True)
- st.markdown("### Translated Text")
- st.markdown(f"{decoded[:777]}")
- concated_text = decoded
-else:
-    st.markdown(f"The detected language of this article is {lang.upper()}!")
-
- st.markdown("### Extracted Text")
- st.markdown(f"{concated_text[:777]}")
-
-tokens_info = tokenizer(concated_text, truncation=True, return_tensors="pt")
-with torch.no_grad():
- raw_predictions = model(**tokens_info)
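-# Probability, as an integer percentage, that the article is fake (positive-class softmax)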
-softmaxed = int(torch.nn.functional.softmax(raw_predictions.logits[0], dim=0)[1] * 100)
-st.markdown("### Fakeness Prediction")
-st.progress(softmaxed)
-st.markdown(f"This is fake by **{softmaxed}%**!")
-if (softmaxed > 70):
- st.error('We would not trust this text!')
-elif (softmaxed > 40):
- st.warning('We are not sure about this text!')
-else:
- st.success('We would trust this text!')
\ No newline at end of file
diff --git a/spaces/Abhilashvj/planogram-compliance/models/common.py b/spaces/Abhilashvj/planogram-compliance/models/common.py
deleted file mode 100644
index 5b9ca2d051f8f2c9317dcfda3f989d52d232719a..0000000000000000000000000000000000000000
--- a/spaces/Abhilashvj/planogram-compliance/models/common.py
+++ /dev/null
@@ -1,1268 +0,0 @@
-# YOLOv5 🚀 by Ultralytics, GPL-3.0 license
-"""
-Common modules
-"""
-
-import ast
-import contextlib
-import json
-import math
-import platform
-import warnings
-import zipfile
-from collections import OrderedDict, namedtuple
-from copy import copy
-from pathlib import Path
-from urllib.parse import urlparse
-
-import cv2
-import numpy as np
-import pandas as pd
-import requests
-import torch
-import torch.nn as nn
-from IPython.display import display
-from PIL import Image
-from torch.cuda import amp
-
-from utils import TryExcept
-from utils.dataloaders import exif_transpose, letterbox
-from utils.general import (
- LOGGER,
- ROOT,
- Profile,
- check_requirements,
- check_suffix,
- check_version,
- colorstr,
- increment_path,
- is_notebook,
- make_divisible,
- non_max_suppression,
- scale_boxes,
- xywh2xyxy,
- xyxy2xywh,
- yaml_load,
-)
-from utils.plots import Annotator, colors, save_one_box
-from utils.torch_utils import copy_attr, smart_inference_mode
-
-
-def autopad(k, p=None, d=1): # kernel, padding, dilation
- # Pad to 'same' shape outputs
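-    # e.g. autopad(3) -> 1 and autopad((3, 5)) -> [1, 2] ('same' output for stride 1)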
- if d > 1:
- k = (
- d * (k - 1) + 1
- if isinstance(k, int)
- else [d * (x - 1) + 1 for x in k]
- ) # actual kernel-size
- if p is None:
- p = k // 2 if isinstance(k, int) else [x // 2 for x in k] # auto-pad
- return p
-
-
-class Conv(nn.Module):
- # Standard convolution with args(ch_in, ch_out, kernel, stride, padding, groups, dilation, activation)
- default_act = nn.SiLU() # default activation
-
- def __init__(self, c1, c2, k=1, s=1, p=None, g=1, d=1, act=True):
- super().__init__()
- self.conv = nn.Conv2d(
- c1, c2, k, s, autopad(k, p, d), groups=g, dilation=d, bias=False
- )
- self.bn = nn.BatchNorm2d(c2)
- self.act = (
- self.default_act
- if act is True
- else act
- if isinstance(act, nn.Module)
- else nn.Identity()
- )
-
- def forward(self, x):
- return self.act(self.bn(self.conv(x)))
-
- def forward_fuse(self, x):
- return self.act(self.conv(x))
-
-
-class DWConv(Conv):
- # Depth-wise convolution
- def __init__(
- self, c1, c2, k=1, s=1, d=1, act=True
- ): # ch_in, ch_out, kernel, stride, dilation, activation
- super().__init__(c1, c2, k, s, g=math.gcd(c1, c2), d=d, act=act)
-
-
-class DWConvTranspose2d(nn.ConvTranspose2d):
- # Depth-wise transpose convolution
- def __init__(
- self, c1, c2, k=1, s=1, p1=0, p2=0
- ): # ch_in, ch_out, kernel, stride, padding, padding_out
- super().__init__(c1, c2, k, s, p1, p2, groups=math.gcd(c1, c2))
-
-
-class TransformerLayer(nn.Module):
- # Transformer layer https://arxiv.org/abs/2010.11929 (LayerNorm layers removed for better performance)
- def __init__(self, c, num_heads):
- super().__init__()
- self.q = nn.Linear(c, c, bias=False)
- self.k = nn.Linear(c, c, bias=False)
- self.v = nn.Linear(c, c, bias=False)
- self.ma = nn.MultiheadAttention(embed_dim=c, num_heads=num_heads)
- self.fc1 = nn.Linear(c, c, bias=False)
- self.fc2 = nn.Linear(c, c, bias=False)
-
- def forward(self, x):
- x = self.ma(self.q(x), self.k(x), self.v(x))[0] + x
- x = self.fc2(self.fc1(x)) + x
- return x
-
-
-class TransformerBlock(nn.Module):
- # Vision Transformer https://arxiv.org/abs/2010.11929
- def __init__(self, c1, c2, num_heads, num_layers):
- super().__init__()
- self.conv = None
- if c1 != c2:
- self.conv = Conv(c1, c2)
- self.linear = nn.Linear(c2, c2) # learnable position embedding
- self.tr = nn.Sequential(
- *(TransformerLayer(c2, num_heads) for _ in range(num_layers))
- )
- self.c2 = c2
-
- def forward(self, x):
- if self.conv is not None:
- x = self.conv(x)
- b, _, w, h = x.shape
- p = x.flatten(2).permute(2, 0, 1)
- return (
- self.tr(p + self.linear(p))
- .permute(1, 2, 0)
- .reshape(b, self.c2, w, h)
- )
-
-
-class Bottleneck(nn.Module):
- # Standard bottleneck
- def __init__(
- self, c1, c2, shortcut=True, g=1, e=0.5
- ): # ch_in, ch_out, shortcut, groups, expansion
- super().__init__()
- c_ = int(c2 * e) # hidden channels
- self.cv1 = Conv(c1, c_, 1, 1)
- self.cv2 = Conv(c_, c2, 3, 1, g=g)
- self.add = shortcut and c1 == c2
-
- def forward(self, x):
- return x + self.cv2(self.cv1(x)) if self.add else self.cv2(self.cv1(x))
-
-
-class BottleneckCSP(nn.Module):
- # CSP Bottleneck https://github.com/WongKinYiu/CrossStagePartialNetworks
- def __init__(
- self, c1, c2, n=1, shortcut=True, g=1, e=0.5
- ): # ch_in, ch_out, number, shortcut, groups, expansion
- super().__init__()
- c_ = int(c2 * e) # hidden channels
- self.cv1 = Conv(c1, c_, 1, 1)
- self.cv2 = nn.Conv2d(c1, c_, 1, 1, bias=False)
- self.cv3 = nn.Conv2d(c_, c_, 1, 1, bias=False)
- self.cv4 = Conv(2 * c_, c2, 1, 1)
- self.bn = nn.BatchNorm2d(2 * c_) # applied to cat(cv2, cv3)
- self.act = nn.SiLU()
- self.m = nn.Sequential(
- *(Bottleneck(c_, c_, shortcut, g, e=1.0) for _ in range(n))
- )
-
- def forward(self, x):
- y1 = self.cv3(self.m(self.cv1(x)))
- y2 = self.cv2(x)
- return self.cv4(self.act(self.bn(torch.cat((y1, y2), 1))))
-
-
-class CrossConv(nn.Module):
- # Cross Convolution Downsample
- def __init__(self, c1, c2, k=3, s=1, g=1, e=1.0, shortcut=False):
- # ch_in, ch_out, kernel, stride, groups, expansion, shortcut
- super().__init__()
- c_ = int(c2 * e) # hidden channels
- self.cv1 = Conv(c1, c_, (1, k), (1, s))
- self.cv2 = Conv(c_, c2, (k, 1), (s, 1), g=g)
- self.add = shortcut and c1 == c2
-
- def forward(self, x):
- return x + self.cv2(self.cv1(x)) if self.add else self.cv2(self.cv1(x))
-
-
-class C3(nn.Module):
- # CSP Bottleneck with 3 convolutions
- def __init__(
- self, c1, c2, n=1, shortcut=True, g=1, e=0.5
- ): # ch_in, ch_out, number, shortcut, groups, expansion
- super().__init__()
- c_ = int(c2 * e) # hidden channels
- self.cv1 = Conv(c1, c_, 1, 1)
- self.cv2 = Conv(c1, c_, 1, 1)
- self.cv3 = Conv(2 * c_, c2, 1) # optional act=FReLU(c2)
- self.m = nn.Sequential(
- *(Bottleneck(c_, c_, shortcut, g, e=1.0) for _ in range(n))
- )
-
- def forward(self, x):
- return self.cv3(torch.cat((self.m(self.cv1(x)), self.cv2(x)), 1))
-
-
-class C3x(C3):
- # C3 module with cross-convolutions
- def __init__(self, c1, c2, n=1, shortcut=True, g=1, e=0.5):
- super().__init__(c1, c2, n, shortcut, g, e)
- c_ = int(c2 * e)
- self.m = nn.Sequential(
- *(CrossConv(c_, c_, 3, 1, g, 1.0, shortcut) for _ in range(n))
- )
-
-
-class C3TR(C3):
- # C3 module with TransformerBlock()
- def __init__(self, c1, c2, n=1, shortcut=True, g=1, e=0.5):
- super().__init__(c1, c2, n, shortcut, g, e)
- c_ = int(c2 * e)
- self.m = TransformerBlock(c_, c_, 4, n)
-
-
-class C3SPP(C3):
- # C3 module with SPP()
- def __init__(self, c1, c2, k=(5, 9, 13), n=1, shortcut=True, g=1, e=0.5):
- super().__init__(c1, c2, n, shortcut, g, e)
- c_ = int(c2 * e)
- self.m = SPP(c_, c_, k)
-
-
-class C3Ghost(C3):
- # C3 module with GhostBottleneck()
- def __init__(self, c1, c2, n=1, shortcut=True, g=1, e=0.5):
- super().__init__(c1, c2, n, shortcut, g, e)
- c_ = int(c2 * e) # hidden channels
- self.m = nn.Sequential(*(GhostBottleneck(c_, c_) for _ in range(n)))
-
-
-class SPP(nn.Module):
- # Spatial Pyramid Pooling (SPP) layer https://arxiv.org/abs/1406.4729
- def __init__(self, c1, c2, k=(5, 9, 13)):
- super().__init__()
- c_ = c1 // 2 # hidden channels
- self.cv1 = Conv(c1, c_, 1, 1)
- self.cv2 = Conv(c_ * (len(k) + 1), c2, 1, 1)
- self.m = nn.ModuleList(
- [nn.MaxPool2d(kernel_size=x, stride=1, padding=x // 2) for x in k]
- )
-
- def forward(self, x):
- x = self.cv1(x)
- with warnings.catch_warnings():
- warnings.simplefilter(
- "ignore"
- ) # suppress torch 1.9.0 max_pool2d() warning
- return self.cv2(torch.cat([x] + [m(x) for m in self.m], 1))
-
-
-class SPPF(nn.Module):
- # Spatial Pyramid Pooling - Fast (SPPF) layer for YOLOv5 by Glenn Jocher
- def __init__(self, c1, c2, k=5): # equivalent to SPP(k=(5, 9, 13))
- super().__init__()
- c_ = c1 // 2 # hidden channels
- self.cv1 = Conv(c1, c_, 1, 1)
- self.cv2 = Conv(c_ * 4, c2, 1, 1)
- self.m = nn.MaxPool2d(kernel_size=k, stride=1, padding=k // 2)
-
- def forward(self, x):
- x = self.cv1(x)
- with warnings.catch_warnings():
- warnings.simplefilter(
- "ignore"
- ) # suppress torch 1.9.0 max_pool2d() warning
- y1 = self.m(x)
- y2 = self.m(y1)
- return self.cv2(torch.cat((x, y1, y2, self.m(y2)), 1))
-
-
-class Focus(nn.Module):
- # Focus wh information into c-space
- def __init__(
- self, c1, c2, k=1, s=1, p=None, g=1, act=True
- ): # ch_in, ch_out, kernel, stride, padding, groups
- super().__init__()
- self.conv = Conv(c1 * 4, c2, k, s, p, g, act=act)
- # self.contract = Contract(gain=2)
-
- def forward(self, x): # x(b,c,w,h) -> y(b,4c,w/2,h/2)
- return self.conv(
- torch.cat(
- (
- x[..., ::2, ::2],
- x[..., 1::2, ::2],
- x[..., ::2, 1::2],
- x[..., 1::2, 1::2],
- ),
- 1,
- )
- )
- # return self.conv(self.contract(x))
-
-
-class GhostConv(nn.Module):
- # Ghost Convolution https://github.com/huawei-noah/ghostnet
- def __init__(
- self, c1, c2, k=1, s=1, g=1, act=True
- ): # ch_in, ch_out, kernel, stride, groups
- super().__init__()
- c_ = c2 // 2 # hidden channels
- self.cv1 = Conv(c1, c_, k, s, None, g, act=act)
- self.cv2 = Conv(c_, c_, 5, 1, None, c_, act=act)
-
- def forward(self, x):
- y = self.cv1(x)
- return torch.cat((y, self.cv2(y)), 1)
-
-
-class GhostBottleneck(nn.Module):
- # Ghost Bottleneck https://github.com/huawei-noah/ghostnet
- def __init__(self, c1, c2, k=3, s=1): # ch_in, ch_out, kernel, stride
- super().__init__()
- c_ = c2 // 2
- self.conv = nn.Sequential(
- GhostConv(c1, c_, 1, 1), # pw
- DWConv(c_, c_, k, s, act=False) if s == 2 else nn.Identity(), # dw
- GhostConv(c_, c2, 1, 1, act=False),
- ) # pw-linear
- self.shortcut = (
- nn.Sequential(
- DWConv(c1, c1, k, s, act=False), Conv(c1, c2, 1, 1, act=False)
- )
- if s == 2
- else nn.Identity()
- )
-
- def forward(self, x):
- return self.conv(x) + self.shortcut(x)
-
-
-class Contract(nn.Module):
- # Contract width-height into channels, i.e. x(1,64,80,80) to x(1,256,40,40)
- def __init__(self, gain=2):
- super().__init__()
- self.gain = gain
-
- def forward(self, x):
-        b, c, h, w = x.size()  # assert h % s == 0 and w % s == 0, 'Indivisible gain'
- s = self.gain
- x = x.view(b, c, h // s, s, w // s, s) # x(1,64,40,2,40,2)
- x = x.permute(0, 3, 5, 1, 2, 4).contiguous() # x(1,2,2,64,40,40)
- return x.view(b, c * s * s, h // s, w // s) # x(1,256,40,40)
-
-
-class Expand(nn.Module):
- # Expand channels into width-height, i.e. x(1,64,80,80) to x(1,16,160,160)
- def __init__(self, gain=2):
- super().__init__()
- self.gain = gain
-
- def forward(self, x):
-        b, c, h, w = x.size()  # assert c % s ** 2 == 0, 'Indivisible gain'
- s = self.gain
- x = x.view(b, s, s, c // s**2, h, w) # x(1,2,2,16,80,80)
- x = x.permute(0, 3, 4, 1, 5, 2).contiguous() # x(1,16,80,2,80,2)
- return x.view(b, c // s**2, h * s, w * s) # x(1,16,160,160)
-
-
-class Concat(nn.Module):
- # Concatenate a list of tensors along dimension
- def __init__(self, dimension=1):
- super().__init__()
- self.d = dimension
-
- def forward(self, x):
- return torch.cat(x, self.d)
-
-
-class DetectMultiBackend(nn.Module):
- # YOLOv5 MultiBackend class for python inference on various backends
- def __init__(
- self,
- weights="yolov5s.pt",
- device=torch.device("cpu"),
- dnn=False,
- data=None,
- fp16=False,
- fuse=True,
- ):
- # Usage:
- # PyTorch: weights = *.pt
- # TorchScript: *.torchscript
- # ONNX Runtime: *.onnx
- # ONNX OpenCV DNN: *.onnx --dnn
- # OpenVINO: *_openvino_model
- # CoreML: *.mlmodel
- # TensorRT: *.engine
- # TensorFlow SavedModel: *_saved_model
- # TensorFlow GraphDef: *.pb
- # TensorFlow Lite: *.tflite
- # TensorFlow Edge TPU: *_edgetpu.tflite
- # PaddlePaddle: *_paddle_model
- from models.experimental import ( # scoped to avoid circular import
- attempt_download,
- attempt_load,
- )
-
- super().__init__()
- w = str(weights[0] if isinstance(weights, list) else weights)
- (
- pt,
- jit,
- onnx,
- xml,
- engine,
- coreml,
- saved_model,
- pb,
- tflite,
- edgetpu,
- tfjs,
- paddle,
- triton,
- ) = self._model_type(w)
- fp16 &= pt or jit or onnx or engine # FP16
- nhwc = (
- coreml or saved_model or pb or tflite or edgetpu
- ) # BHWC formats (vs torch BCWH)
- stride = 32 # default stride
- cuda = torch.cuda.is_available() and device.type != "cpu" # use CUDA
- if not (pt or triton):
- w = attempt_download(w) # download if not local
-
- if pt: # PyTorch
- model = attempt_load(
- weights if isinstance(weights, list) else w,
- device=device,
- inplace=True,
- fuse=fuse,
- )
- stride = max(int(model.stride.max()), 32) # model stride
- names = (
- model.module.names if hasattr(model, "module") else model.names
- ) # get class names
- model.half() if fp16 else model.float()
- self.model = (
- model # explicitly assign for to(), cpu(), cuda(), half()
- )
- elif jit: # TorchScript
- LOGGER.info(f"Loading {w} for TorchScript inference...")
- extra_files = {"config.txt": ""} # model metadata
- model = torch.jit.load(
- w, _extra_files=extra_files, map_location=device
- )
- model.half() if fp16 else model.float()
- if extra_files["config.txt"]: # load metadata dict
- d = json.loads(
- extra_files["config.txt"],
- object_hook=lambda d: {
- int(k) if k.isdigit() else k: v for k, v in d.items()
- },
- )
- stride, names = int(d["stride"]), d["names"]
- elif dnn: # ONNX OpenCV DNN
- LOGGER.info(f"Loading {w} for ONNX OpenCV DNN inference...")
- check_requirements("opencv-python>=4.5.4")
- net = cv2.dnn.readNetFromONNX(w)
- elif onnx: # ONNX Runtime
- LOGGER.info(f"Loading {w} for ONNX Runtime inference...")
- check_requirements(
- ("onnx", "onnxruntime-gpu" if cuda else "onnxruntime")
- )
- import onnxruntime
-
- providers = (
- ["CUDAExecutionProvider", "CPUExecutionProvider"]
- if cuda
- else ["CPUExecutionProvider"]
- )
- session = onnxruntime.InferenceSession(w, providers=providers)
- output_names = [x.name for x in session.get_outputs()]
- meta = session.get_modelmeta().custom_metadata_map # metadata
- if "stride" in meta:
- stride, names = int(meta["stride"]), eval(meta["names"])
- elif xml: # OpenVINO
- LOGGER.info(f"Loading {w} for OpenVINO inference...")
- check_requirements(
- "openvino"
- ) # requires openvino-dev: https://pypi.org/project/openvino-dev/
- from openvino.runtime import Core, Layout, get_batch
-
- ie = Core()
- if not Path(w).is_file(): # if not *.xml
- w = next(
- Path(w).glob("*.xml")
- ) # get *.xml file from *_openvino_model dir
- network = ie.read_model(
- model=w, weights=Path(w).with_suffix(".bin")
- )
- if network.get_parameters()[0].get_layout().empty:
- network.get_parameters()[0].set_layout(Layout("NCHW"))
- batch_dim = get_batch(network)
- if batch_dim.is_static:
- batch_size = batch_dim.get_length()
- executable_network = ie.compile_model(
- network, device_name="CPU"
- ) # device_name="MYRIAD" for Intel NCS2
- stride, names = self._load_metadata(
- Path(w).with_suffix(".yaml")
- ) # load metadata
- elif engine: # TensorRT
- LOGGER.info(f"Loading {w} for TensorRT inference...")
- import tensorrt as trt # https://developer.nvidia.com/nvidia-tensorrt-download
-
- check_version(
- trt.__version__, "7.0.0", hard=True
- ) # require tensorrt>=7.0.0
- if device.type == "cpu":
- device = torch.device("cuda:0")
- Binding = namedtuple(
- "Binding", ("name", "dtype", "shape", "data", "ptr")
- )
- logger = trt.Logger(trt.Logger.INFO)
- with open(w, "rb") as f, trt.Runtime(logger) as runtime:
- model = runtime.deserialize_cuda_engine(f.read())
- context = model.create_execution_context()
- bindings = OrderedDict()
- output_names = []
- fp16 = False # default updated below
- dynamic = False
- for i in range(model.num_bindings):
- name = model.get_binding_name(i)
- dtype = trt.nptype(model.get_binding_dtype(i))
- if model.binding_is_input(i):
- if -1 in tuple(model.get_binding_shape(i)): # dynamic
- dynamic = True
- context.set_binding_shape(
- i, tuple(model.get_profile_shape(0, i)[2])
- )
- if dtype == np.float16:
- fp16 = True
- else: # output
- output_names.append(name)
- shape = tuple(context.get_binding_shape(i))
- im = torch.from_numpy(np.empty(shape, dtype=dtype)).to(device)
- bindings[name] = Binding(
- name, dtype, shape, im, int(im.data_ptr())
- )
- binding_addrs = OrderedDict(
- (n, d.ptr) for n, d in bindings.items()
- )
- batch_size = bindings["images"].shape[
- 0
- ] # if dynamic, this is instead max batch size
- elif coreml: # CoreML
- LOGGER.info(f"Loading {w} for CoreML inference...")
- import coremltools as ct
-
- model = ct.models.MLModel(w)
- elif saved_model: # TF SavedModel
- LOGGER.info(f"Loading {w} for TensorFlow SavedModel inference...")
- import tensorflow as tf
-
- keras = False # assume TF1 saved_model
- model = (
- tf.keras.models.load_model(w)
- if keras
- else tf.saved_model.load(w)
- )
- elif (
- pb
- ): # GraphDef https://www.tensorflow.org/guide/migrate#a_graphpb_or_graphpbtxt
- LOGGER.info(f"Loading {w} for TensorFlow GraphDef inference...")
- import tensorflow as tf
-
- def wrap_frozen_graph(gd, inputs, outputs):
- x = tf.compat.v1.wrap_function(
- lambda: tf.compat.v1.import_graph_def(gd, name=""), []
- ) # wrapped
- ge = x.graph.as_graph_element
- return x.prune(
- tf.nest.map_structure(ge, inputs),
- tf.nest.map_structure(ge, outputs),
- )
-
- def gd_outputs(gd):
- name_list, input_list = [], []
- for (
- node
- ) in gd.node: # tensorflow.core.framework.node_def_pb2.NodeDef
- name_list.append(node.name)
- input_list.extend(node.input)
- return sorted(
- f"{x}:0"
- for x in list(set(name_list) - set(input_list))
- if not x.startswith("NoOp")
- )
-
- gd = tf.Graph().as_graph_def() # TF GraphDef
- with open(w, "rb") as f:
- gd.ParseFromString(f.read())
- frozen_func = wrap_frozen_graph(
- gd, inputs="x:0", outputs=gd_outputs(gd)
- )
- elif (
- tflite or edgetpu
- ): # https://www.tensorflow.org/lite/guide/python#install_tensorflow_lite_for_python
- try: # https://coral.ai/docs/edgetpu/tflite-python/#update-existing-tf-lite-code-for-the-edge-tpu
- from tflite_runtime.interpreter import Interpreter, load_delegate
- except ImportError:
- import tensorflow as tf
-
- Interpreter, load_delegate = (
- tf.lite.Interpreter,
- tf.lite.experimental.load_delegate,
- )
- if (
- edgetpu
- ): # TF Edge TPU https://coral.ai/software/#edgetpu-runtime
- LOGGER.info(
- f"Loading {w} for TensorFlow Lite Edge TPU inference..."
- )
- delegate = {
- "Linux": "libedgetpu.so.1",
- "Darwin": "libedgetpu.1.dylib",
- "Windows": "edgetpu.dll",
- }[platform.system()]
- interpreter = Interpreter(
- model_path=w,
- experimental_delegates=[load_delegate(delegate)],
- )
- else: # TFLite
- LOGGER.info(f"Loading {w} for TensorFlow Lite inference...")
- interpreter = Interpreter(model_path=w) # load TFLite model
- interpreter.allocate_tensors() # allocate
- input_details = interpreter.get_input_details() # inputs
- output_details = interpreter.get_output_details() # outputs
- # load metadata
- with contextlib.suppress(zipfile.BadZipFile):
- with zipfile.ZipFile(w, "r") as model:
- meta_file = model.namelist()[0]
- meta = ast.literal_eval(
- model.read(meta_file).decode("utf-8")
- )
- stride, names = int(meta["stride"]), meta["names"]
- elif tfjs: # TF.js
- raise NotImplementedError(
- "ERROR: YOLOv5 TF.js inference is not supported"
- )
- elif paddle: # PaddlePaddle
- LOGGER.info(f"Loading {w} for PaddlePaddle inference...")
- check_requirements("paddlepaddle-gpu" if cuda else "paddlepaddle")
- import paddle.inference as pdi
-
- if not Path(w).is_file(): # if not *.pdmodel
- w = next(
- Path(w).rglob("*.pdmodel")
- ) # get *.pdmodel file from *_paddle_model dir
- weights = Path(w).with_suffix(".pdiparams")
- config = pdi.Config(str(w), str(weights))
- if cuda:
- config.enable_use_gpu(
- memory_pool_init_size_mb=2048, device_id=0
- )
- predictor = pdi.create_predictor(config)
- input_handle = predictor.get_input_handle(
- predictor.get_input_names()[0]
- )
- output_names = predictor.get_output_names()
- elif triton: # NVIDIA Triton Inference Server
- LOGGER.info(f"Using {w} as Triton Inference Server...")
- check_requirements("tritonclient[all]")
- from utils.triton import TritonRemoteModel
-
- model = TritonRemoteModel(url=w)
- nhwc = model.runtime.startswith("tensorflow")
- else:
- raise NotImplementedError(f"ERROR: {w} is not a supported format")
-
- # class names
- if "names" not in locals():
- names = (
- yaml_load(data)["names"]
- if data
- else {i: f"class{i}" for i in range(999)}
- )
- if names[0] == "n01440764" and len(names) == 1000: # ImageNet
- names = yaml_load(ROOT / "data/ImageNet.yaml")[
- "names"
- ] # human-readable names
-
- self.__dict__.update(locals()) # assign all variables to self
-
- def forward(self, im, augment=False, visualize=False):
- # YOLOv5 MultiBackend inference
- b, ch, h, w = im.shape # batch, channel, height, width
- if self.fp16 and im.dtype != torch.float16:
- im = im.half() # to FP16
- if self.nhwc:
- im = im.permute(
- 0, 2, 3, 1
- ) # torch BCHW to numpy BHWC shape(1,320,192,3)
-
- if self.pt: # PyTorch
- y = (
- self.model(im, augment=augment, visualize=visualize)
- if augment or visualize
- else self.model(im)
- )
- elif self.jit: # TorchScript
- y = self.model(im)
- elif self.dnn: # ONNX OpenCV DNN
- im = im.cpu().numpy() # torch to numpy
- self.net.setInput(im)
- y = self.net.forward()
- elif self.onnx: # ONNX Runtime
- im = im.cpu().numpy() # torch to numpy
- y = self.session.run(
- self.output_names, {self.session.get_inputs()[0].name: im}
- )
- elif self.xml: # OpenVINO
- im = im.cpu().numpy() # FP32
- y = list(self.executable_network([im]).values())
- elif self.engine: # TensorRT
- if self.dynamic and im.shape != self.bindings["images"].shape:
- i = self.model.get_binding_index("images")
- self.context.set_binding_shape(
- i, im.shape
- ) # reshape if dynamic
- self.bindings["images"] = self.bindings["images"]._replace(
- shape=im.shape
- )
- for name in self.output_names:
- i = self.model.get_binding_index(name)
- self.bindings[name].data.resize_(
- tuple(self.context.get_binding_shape(i))
- )
- s = self.bindings["images"].shape
- assert (
- im.shape == s
- ), f"input size {im.shape} {'>' if self.dynamic else 'not equal to'} max model size {s}"
- self.binding_addrs["images"] = int(im.data_ptr())
- self.context.execute_v2(list(self.binding_addrs.values()))
- y = [self.bindings[x].data for x in sorted(self.output_names)]
- elif self.coreml: # CoreML
- im = im.cpu().numpy()
- im = Image.fromarray((im[0] * 255).astype("uint8"))
- # im = im.resize((192, 320), Image.ANTIALIAS)
- y = self.model.predict(
- {"image": im}
- ) # coordinates are xywh normalized
- if "confidence" in y:
- box = xywh2xyxy(
- y["coordinates"] * [[w, h, w, h]]
- ) # xyxy pixels
- conf, cls = y["confidence"].max(1), y["confidence"].argmax(
- 1
-                ).astype(np.float32)  # the np.float alias was removed in NumPy 1.24
- y = np.concatenate(
- (box, conf.reshape(-1, 1), cls.reshape(-1, 1)), 1
- )
- else:
- y = list(
- reversed(y.values())
- ) # reversed for segmentation models (pred, proto)
- elif self.paddle: # PaddlePaddle
- im = im.cpu().numpy().astype(np.float32)
- self.input_handle.copy_from_cpu(im)
- self.predictor.run()
- y = [
- self.predictor.get_output_handle(x).copy_to_cpu()
- for x in self.output_names
- ]
- elif self.triton: # NVIDIA Triton Inference Server
- y = self.model(im)
- else: # TensorFlow (SavedModel, GraphDef, Lite, Edge TPU)
- im = im.cpu().numpy()
- if self.saved_model: # SavedModel
- y = (
- self.model(im, training=False)
- if self.keras
- else self.model(im)
- )
- elif self.pb: # GraphDef
- y = self.frozen_func(x=self.tf.constant(im))
- else: # Lite or Edge TPU
- input = self.input_details[0]
- int8 = (
- input["dtype"] == np.uint8
- ) # is TFLite quantized uint8 model
- if int8:
- scale, zero_point = input["quantization"]
- im = (im / scale + zero_point).astype(np.uint8) # de-scale
- self.interpreter.set_tensor(input["index"], im)
- self.interpreter.invoke()
- y = []
- for output in self.output_details:
- x = self.interpreter.get_tensor(output["index"])
- if int8:
- scale, zero_point = output["quantization"]
- x = (
- x.astype(np.float32) - zero_point
- ) * scale # re-scale
- y.append(x)
- y = [x if isinstance(x, np.ndarray) else x.numpy() for x in y]
- y[0][..., :4] *= [w, h, w, h] # xywh normalized to pixels
-
- if isinstance(y, (list, tuple)):
- return (
- self.from_numpy(y[0])
- if len(y) == 1
- else [self.from_numpy(x) for x in y]
- )
- else:
- return self.from_numpy(y)
-
- def from_numpy(self, x):
- return (
- torch.from_numpy(x).to(self.device)
- if isinstance(x, np.ndarray)
- else x
- )
-
- def warmup(self, imgsz=(1, 3, 640, 640)):
- # Warmup model by running inference once
- warmup_types = (
- self.pt,
- self.jit,
- self.onnx,
- self.engine,
- self.saved_model,
- self.pb,
- self.triton,
- )
- if any(warmup_types) and (self.device.type != "cpu" or self.triton):
- im = torch.empty(
- *imgsz,
- dtype=torch.half if self.fp16 else torch.float,
- device=self.device,
- ) # input
-            for _ in range(2 if self.jit else 1):
- self.forward(im) # warmup
-
- @staticmethod
- def _model_type(p="path/to/model.pt"):
- # Return model type from model path, i.e. path='path/to/model.onnx' -> type=onnx
- # types = [pt, jit, onnx, xml, engine, coreml, saved_model, pb, tflite, edgetpu, tfjs, paddle]
- from export import export_formats
- from utils.downloads import is_url
-
- sf = list(export_formats().Suffix) # export suffixes
- if not is_url(p, check=False):
- check_suffix(p, sf) # checks
- url = urlparse(p) # if url may be Triton inference server
- types = [s in Path(p).name for s in sf]
- types[8] &= not types[9] # tflite &= not edgetpu
- triton = not any(types) and all(
- [any(s in url.scheme for s in ["http", "grpc"]), url.netloc]
- )
- return types + [triton]
-
- @staticmethod
- def _load_metadata(f=Path("path/to/meta.yaml")):
- # Load metadata from meta.yaml if it exists
- if f.exists():
- d = yaml_load(f)
- return d["stride"], d["names"] # assign stride, names
- return None, None
-
-
-class AutoShape(nn.Module):
- # YOLOv5 input-robust model wrapper for passing cv2/np/PIL/torch inputs. Includes preprocessing, inference and NMS
- conf = 0.25 # NMS confidence threshold
- iou = 0.45 # NMS IoU threshold
- agnostic = False # NMS class-agnostic
- multi_label = False # NMS multiple labels per box
- classes = None # (optional list) filter by class, i.e. = [0, 15, 16] for COCO persons, cats and dogs
- max_det = 1000 # maximum number of detections per image
- amp = False # Automatic Mixed Precision (AMP) inference
-
- def __init__(self, model, verbose=True):
- super().__init__()
- if verbose:
- LOGGER.info("Adding AutoShape... ")
- copy_attr(
- self,
- model,
- include=("yaml", "nc", "hyp", "names", "stride", "abc"),
- exclude=(),
- ) # copy attributes
- self.dmb = isinstance(
- model, DetectMultiBackend
- ) # DetectMultiBackend() instance
- self.pt = not self.dmb or model.pt # PyTorch model
- self.model = model.eval()
- if self.pt:
- m = (
- self.model.model.model[-1]
- if self.dmb
- else self.model.model[-1]
- ) # Detect()
- m.inplace = (
- False # Detect.inplace=False for safe multithread inference
- )
- m.export = True # do not output loss values
-
- def _apply(self, fn):
- # Apply to(), cpu(), cuda(), half() to model tensors that are not parameters or registered buffers
- self = super()._apply(fn)
- if self.pt:
- m = (
- self.model.model.model[-1]
- if self.dmb
- else self.model.model[-1]
- ) # Detect()
- m.stride = fn(m.stride)
- m.grid = list(map(fn, m.grid))
- if isinstance(m.anchor_grid, list):
- m.anchor_grid = list(map(fn, m.anchor_grid))
- return self
-
- @smart_inference_mode()
- def forward(self, ims, size=640, augment=False, profile=False):
- # Inference from various sources. For size(height=640, width=1280), RGB images example inputs are:
- # file: ims = 'data/images/zidane.jpg' # str or PosixPath
- # URI: = 'https://ultralytics.com/images/zidane.jpg'
- # OpenCV: = cv2.imread('image.jpg')[:,:,::-1] # HWC BGR to RGB x(640,1280,3)
- # PIL: = Image.open('image.jpg') or ImageGrab.grab() # HWC x(640,1280,3)
- # numpy: = np.zeros((640,1280,3)) # HWC
- # torch: = torch.zeros(16,3,320,640) # BCHW (scaled to size=640, 0-1 values)
- # multiple: = [Image.open('image1.jpg'), Image.open('image2.jpg'), ...] # list of images
-
- dt = (Profile(), Profile(), Profile())
- with dt[0]:
- if isinstance(size, int): # expand
- size = (size, size)
- p = (
- next(self.model.parameters())
- if self.pt
- else torch.empty(1, device=self.model.device)
- ) # param
- autocast = self.amp and (
- p.device.type != "cpu"
- ) # Automatic Mixed Precision (AMP) inference
- if isinstance(ims, torch.Tensor): # torch
- with amp.autocast(autocast):
- return self.model(
- ims.to(p.device).type_as(p), augment=augment
- ) # inference
-
- # Pre-process
- n, ims = (
- (len(ims), list(ims))
- if isinstance(ims, (list, tuple))
- else (1, [ims])
- ) # number, list of images
- shape0, shape1, files = (
- [],
- [],
- [],
- ) # image and inference shapes, filenames
- for i, im in enumerate(ims):
- f = f"image{i}" # filename
- if isinstance(im, (str, Path)): # filename or uri
- im, f = (
- Image.open(
- requests.get(im, stream=True).raw
- if str(im).startswith("http")
- else im
- ),
- im,
- )
- im = np.asarray(exif_transpose(im))
- elif isinstance(im, Image.Image): # PIL Image
- im, f = (
- np.asarray(exif_transpose(im)),
- getattr(im, "filename", f) or f,
- )
- files.append(Path(f).with_suffix(".jpg").name)
- if im.shape[0] < 5: # image in CHW
- im = im.transpose(
- (1, 2, 0)
- ) # reverse dataloader .transpose(2, 0, 1)
- im = (
- im[..., :3]
- if im.ndim == 3
- else cv2.cvtColor(im, cv2.COLOR_GRAY2BGR)
- ) # enforce 3ch input
- s = im.shape[:2] # HWC
- shape0.append(s) # image shape
- g = max(size) / max(s) # gain
- shape1.append([int(y * g) for y in s])
- ims[i] = (
- im if im.data.contiguous else np.ascontiguousarray(im)
- ) # update
- shape1 = [
- make_divisible(x, self.stride) for x in np.array(shape1).max(0)
- ] # inf shape
- x = [letterbox(im, shape1, auto=False)[0] for im in ims] # pad
- x = np.ascontiguousarray(
- np.array(x).transpose((0, 3, 1, 2))
- ) # stack and BHWC to BCHW
- x = (
- torch.from_numpy(x).to(p.device).type_as(p) / 255
- ) # uint8 to fp16/32
-
- with amp.autocast(autocast):
- # Inference
- with dt[1]:
- y = self.model(x, augment=augment) # forward
-
- # Post-process
- with dt[2]:
- y = non_max_suppression(
- y if self.dmb else y[0],
- self.conf,
- self.iou,
- self.classes,
- self.agnostic,
- self.multi_label,
- max_det=self.max_det,
- ) # NMS
- for i in range(n):
- scale_boxes(shape1, y[i][:, :4], shape0[i])
-
- return Detections(ims, y, files, dt, self.names, x.shape)
-
-
-class Detections:
- # YOLOv5 detections class for inference results
- def __init__(
- self, ims, pred, files, times=(0, 0, 0), names=None, shape=None
- ):
- super().__init__()
- d = pred[0].device # device
- gn = [
- torch.tensor(
- [*(im.shape[i] for i in [1, 0, 1, 0]), 1, 1], device=d
- )
- for im in ims
- ] # normalizations
- self.ims = ims # list of images as numpy arrays
- self.pred = pred # list of tensors pred[0] = (xyxy, conf, cls)
- self.names = names # class names
- self.files = files # image filenames
- self.times = times # profiling times
- self.xyxy = pred # xyxy pixels
- self.xywh = [xyxy2xywh(x) for x in pred] # xywh pixels
- self.xyxyn = [x / g for x, g in zip(self.xyxy, gn)] # xyxy normalized
- self.xywhn = [x / g for x, g in zip(self.xywh, gn)] # xywh normalized
- self.n = len(self.pred) # number of images (batch size)
- self.t = tuple(x.t / self.n * 1e3 for x in times) # timestamps (ms)
- self.s = tuple(shape) # inference BCHW shape
-
- def _run(
- self,
- pprint=False,
- show=False,
- save=False,
- crop=False,
- render=False,
- labels=True,
- save_dir=Path(""),
- ):
- s, crops = "", []
- for i, (im, pred) in enumerate(zip(self.ims, self.pred)):
- s += f"\nimage {i + 1}/{len(self.pred)}: {im.shape[0]}x{im.shape[1]} " # string
- if pred.shape[0]:
- for c in pred[:, -1].unique():
- n = (pred[:, -1] == c).sum() # detections per class
- s += f"{n} {self.names[int(c)]}{'s' * (n > 1)}, " # add to string
- s = s.rstrip(", ")
- if show or save or render or crop:
- annotator = Annotator(im, example=str(self.names))
- for *box, conf, cls in reversed(
- pred
- ): # xyxy, confidence, class
- label = f"{self.names[int(cls)]} {conf:.2f}"
- if crop:
- file = (
- save_dir
- / "crops"
- / self.names[int(cls)]
- / self.files[i]
- if save
- else None
- )
- crops.append(
- {
- "box": box,
- "conf": conf,
- "cls": cls,
- "label": label,
- "im": save_one_box(
- box, im, file=file, save=save
- ),
- }
- )
- else: # all others
- annotator.box_label(
- box, label if labels else "", color=colors(cls)
- )
- im = annotator.im
- else:
- s += "(no detections)"
-
- im = (
- Image.fromarray(im.astype(np.uint8))
- if isinstance(im, np.ndarray)
- else im
- ) # from np
- if show:
- display(im) if is_notebook() else im.show(self.files[i])
- if save:
- f = self.files[i]
- im.save(save_dir / f) # save
- if i == self.n - 1:
- LOGGER.info(
- f"Saved {self.n} image{'s' * (self.n > 1)} to {colorstr('bold', save_dir)}"
- )
- if render:
- self.ims[i] = np.asarray(im)
- if pprint:
- s = s.lstrip("\n")
- return (
- f"{s}\nSpeed: %.1fms pre-process, %.1fms inference, %.1fms NMS per image at shape {self.s}"
- % self.t
- )
- if crop:
- if save:
- LOGGER.info(f"Saved results to {save_dir}\n")
- return crops
-
- @TryExcept("Showing images is not supported in this environment")
- def show(self, labels=True):
- self._run(show=True, labels=labels) # show results
-
- def save(self, labels=True, save_dir="runs/detect/exp", exist_ok=False):
- save_dir = increment_path(
- save_dir, exist_ok, mkdir=True
- ) # increment save_dir
- self._run(save=True, labels=labels, save_dir=save_dir) # save results
-
- def crop(self, save=True, save_dir="runs/detect/exp", exist_ok=False):
- save_dir = (
- increment_path(save_dir, exist_ok, mkdir=True) if save else None
- )
- return self._run(
- crop=True, save=save, save_dir=save_dir
- ) # crop results
-
- def render(self, labels=True):
- self._run(render=True, labels=labels) # render results
- return self.ims
-
- def pandas(self):
- # return detections as pandas DataFrames, i.e. print(results.pandas().xyxy[0])
- new = copy(self) # return copy
- ca = (
- "xmin",
- "ymin",
- "xmax",
- "ymax",
- "confidence",
- "class",
- "name",
- ) # xyxy columns
- cb = (
- "xcenter",
- "ycenter",
- "width",
- "height",
- "confidence",
- "class",
- "name",
- ) # xywh columns
- for k, c in zip(["xyxy", "xyxyn", "xywh", "xywhn"], [ca, ca, cb, cb]):
- a = [
- [
- x[:5] + [int(x[5]), self.names[int(x[5])]]
- for x in x.tolist()
- ]
- for x in getattr(self, k)
- ] # update
- setattr(new, k, [pd.DataFrame(x, columns=c) for x in a])
- return new
-
- def tolist(self):
- # return a list of Detections objects, i.e. 'for result in results.tolist():'
- r = range(self.n) # iterable
- x = [
- Detections(
- [self.ims[i]],
- [self.pred[i]],
- [self.files[i]],
- self.times,
- self.names,
- self.s,
- )
- for i in r
- ]
- # for d in x:
- # for k in ['ims', 'pred', 'xyxy', 'xyxyn', 'xywh', 'xywhn']:
- # setattr(d, k, getattr(d, k)[0]) # pop out of list
- return x
-
- def print(self):
- LOGGER.info(self.__str__())
-
- def __len__(self): # override len(results)
- return self.n
-
- def __str__(self): # override print(results)
- return self._run(pprint=True) # print results
-
- def __repr__(self):
- return f"YOLOv5 {self.__class__} instance\n" + self.__str__()
-
-
-class Proto(nn.Module):
- # YOLOv5 mask Proto module for segmentation models
- def __init__(
- self, c1, c_=256, c2=32
- ): # ch_in, number of protos, number of masks
- super().__init__()
- self.cv1 = Conv(c1, c_, k=3)
- self.upsample = nn.Upsample(scale_factor=2, mode="nearest")
- self.cv2 = Conv(c_, c_, k=3)
- self.cv3 = Conv(c_, c2)
-
- def forward(self, x):
- return self.cv3(self.cv2(self.upsample(self.cv1(x))))
-
-
-class Classify(nn.Module):
- # YOLOv5 classification head, i.e. x(b,c1,20,20) to x(b,c2)
- def __init__(
- self, c1, c2, k=1, s=1, p=None, g=1
- ): # ch_in, ch_out, kernel, stride, padding, groups
- super().__init__()
- c_ = 1280 # efficientnet_b0 size
- self.conv = Conv(c1, c_, k, s, autopad(k, p), g)
- self.pool = nn.AdaptiveAvgPool2d(1) # to x(b,c_,1,1)
- self.drop = nn.Dropout(p=0.0, inplace=True)
- self.linear = nn.Linear(c_, c2) # to x(b,c2)
-
- def forward(self, x):
- if isinstance(x, list):
- x = torch.cat(x, 1)
- return self.linear(self.drop(self.pool(self.conv(x)).flatten(1)))
diff --git a/spaces/Abhilashvj/planogram-compliance/utils/flask_rest_api/restapi.py b/spaces/Abhilashvj/planogram-compliance/utils/flask_rest_api/restapi.py
deleted file mode 100644
index 1674bda0d96db810736e3ded29c867a94d6db9e9..0000000000000000000000000000000000000000
--- a/spaces/Abhilashvj/planogram-compliance/utils/flask_rest_api/restapi.py
+++ /dev/null
@@ -1,61 +0,0 @@
-# YOLOv5 🚀 by Ultralytics, GPL-3.0 license
-"""
-Run a Flask REST API exposing one or more YOLOv5s models
-"""
-
-import argparse
-import io
-
-import torch
-from flask import Flask, request
-from PIL import Image
-
-app = Flask(__name__)
-models = {}
-
-DETECTION_URL = "/v1/object-detection/<model>"  # <model> is captured and passed to predict()
-
-
-@app.route(DETECTION_URL, methods=["POST"])
-def predict(model):
- if request.method != "POST":
- return
-
- if request.files.get("image"):
- # Method 1
- # with request.files["image"] as f:
- # im = Image.open(io.BytesIO(f.read()))
-
- # Method 2
- im_file = request.files["image"]
- im_bytes = im_file.read()
- im = Image.open(io.BytesIO(im_bytes))
-
- if model in models:
- results = models[model](
- im, size=640
- ) # reduce size=320 for faster inference
- return results.pandas().xyxy[0].to_json(orient="records")
-
-
-if __name__ == "__main__":
- parser = argparse.ArgumentParser(
- description="Flask API exposing YOLOv5 model"
- )
- parser.add_argument("--port", default=5000, type=int, help="port number")
- parser.add_argument(
- "--model",
- nargs="+",
- default=["yolov5s"],
- help="model(s) to run, i.e. --model yolov5n yolov5s",
- )
- opt = parser.parse_args()
-
- for m in opt.model:
- models[m] = torch.hub.load(
- "ultralytics/yolov5", m, force_reload=True, skip_validation=True
- )
-
- app.run(
- host="0.0.0.0", port=opt.port
- ) # debug=True causes Restarting with stat
diff --git a/spaces/Androidonnxfork/CivitAi-to-Diffusers/diffusers/docs/source/en/api/schedulers/overview.md b/spaces/Androidonnxfork/CivitAi-to-Diffusers/diffusers/docs/source/en/api/schedulers/overview.md
deleted file mode 100644
index a8f4dcd4d0b06023ff3c4526416cc7947f271e15..0000000000000000000000000000000000000000
--- a/spaces/Androidonnxfork/CivitAi-to-Diffusers/diffusers/docs/source/en/api/schedulers/overview.md
+++ /dev/null
@@ -1,92 +0,0 @@
-
-
-# Schedulers
-
-Diffusers contains multiple pre-built schedule functions for the diffusion process.
-
-## What is a scheduler?
-
-The schedule functions, denoted *Schedulers* in the library, take in the output of a trained model, a sample which the diffusion process is iterating on, and a timestep, and return a denoised sample. That's why schedulers may also be called *Samplers* in other diffusion model implementations.
-
-- Schedulers define the methodology for iteratively adding noise to an image or for updating a sample based on model outputs.
-  - adding noise in different manners represents the algorithmic process of training a diffusion model by adding noise to images (a sketch follows this list).
-  - for inference, the scheduler defines how to update a sample based on the output of a pretrained model.
-- Schedulers are often defined by a *noise schedule* and an *update rule* for solving the underlying differential equation.
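-
-For instance, the training-side role is visible in a scheduler's `add_noise` method, which corrupts a clean sample according to the noise schedule; the inference-side update rule is sketched in the API section below. Tensor shapes here are illustrative placeholders:
-
-```py
-import torch
-from diffusers import DDPMScheduler
-
-scheduler = DDPMScheduler(num_train_timesteps=1000)
-
-clean = torch.randn(1, 3, 64, 64)  # stand-in for a clean training sample
-noise = torch.randn_like(clean)
-t = torch.randint(0, scheduler.config.num_train_timesteps, (1,))
-noisy = scheduler.add_noise(clean, noise, t)  # schedule-dependent corruption
-```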
-
-### Discrete versus continuous schedulers
-
-All schedulers take in a timestep to predict the updated version of the sample being diffused.
-The timesteps dictate where in the diffusion process the step is, where data is generated by iterating forward in time and inference is executed by propagating backwards through timesteps.
-Different algorithms use timesteps that can be discrete (accepting `int` inputs), such as the [`DDPMScheduler`] or [`PNDMScheduler`], or continuous (accepting `float` inputs), such as the score-based schedulers [`ScoreSdeVeScheduler`] or [`ScoreSdeVpScheduler`].
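-
-For example, a discrete scheduler such as [`DDPMScheduler`] exposes integer timesteps once `set_timesteps` has been called (an illustrative check):
-
-```py
-from diffusers import DDPMScheduler
-
-scheduler = DDPMScheduler()
-scheduler.set_timesteps(50)
-print(scheduler.timesteps[:3])  # integer-valued timesteps, counting down
-```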
-
-## Designing Re-usable schedulers
-
-The core design principle of the schedule functions is to be model, system, and framework independent.
-This allows for rapid experimentation and cleaner abstractions in the code, where the model prediction is separated from the sample update.
-To this end, the design of schedulers is such that:
-
-- Schedulers can be used interchangeably between diffusion models in inference to find the preferred trade-off between speed and generation quality.
-- Schedulers are currently by default in PyTorch, but are designed to be framework independent (partial Jax support currently exists).
-- Many diffusion pipelines, such as [`StableDiffusionPipeline`] and [`DiTPipeline`], can use any of the [`KarrasDiffusionSchedulers`]; swapping one in is sketched below.
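-
-A minimal sketch of swapping the scheduler on a loaded pipeline (the checkpoint name is only an example):
-
-```py
-from diffusers import StableDiffusionPipeline, EulerDiscreteScheduler
-
-pipe = StableDiffusionPipeline.from_pretrained("runwayml/stable-diffusion-v1-5")
-# Reuse the existing scheduler config so the noise schedule stays compatible
-pipe.scheduler = EulerDiscreteScheduler.from_config(pipe.scheduler.config)
-```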
-
-## Schedulers Summary
-
-The following table summarizes all officially supported schedulers and their corresponding papers:
-
-| Scheduler | Paper |
-|---|---|
-| [ddim](./ddim) | [**Denoising Diffusion Implicit Models**](https://arxiv.org/abs/2010.02502) |
-| [ddim_inverse](./ddim_inverse) | [**Denoising Diffusion Implicit Models**](https://arxiv.org/abs/2010.02502) |
-| [ddpm](./ddpm) | [**Denoising Diffusion Probabilistic Models**](https://arxiv.org/abs/2006.11239) |
-| [deis](./deis) | [**DEISMultistepScheduler**](https://arxiv.org/abs/2204.13902) |
-| [singlestep_dpm_solver](./singlestep_dpm_solver) | [**Singlestep DPM-Solver**](https://arxiv.org/abs/2206.00927) |
-| [multistep_dpm_solver](./multistep_dpm_solver) | [**Multistep DPM-Solver**](https://arxiv.org/abs/2206.00927) |
-| [heun](./heun) | [**Heun scheduler inspired by Karras et. al paper**](https://arxiv.org/abs/2206.00364) |
-| [dpm_discrete](./dpm_discrete) | [**DPM Discrete Scheduler inspired by Karras et. al paper**](https://arxiv.org/abs/2206.00364) |
-| [dpm_discrete_ancestral](./dpm_discrete_ancestral) | [**DPM Discrete Scheduler with ancestral sampling inspired by Karras et. al paper**](https://arxiv.org/abs/2206.00364) |
-| [stochastic_karras_ve](./stochastic_karras_ve) | [**Variance exploding, stochastic sampling from Karras et. al**](https://arxiv.org/abs/2206.00364) |
-| [lms_discrete](./lms_discrete) | [**Linear multistep scheduler for discrete beta schedules**](https://arxiv.org/abs/2206.00364) |
-| [pndm](./pndm) | [**Pseudo numerical methods for diffusion models (PNDM)**](https://github.com/crowsonkb/k-diffusion/blob/481677d114f6ea445aa009cf5bd7a9cdee909e47/k_diffusion/sampling.py#L181) |
-| [score_sde_ve](./score_sde_ve) | [**variance exploding stochastic differential equation (VE-SDE) scheduler**](https://arxiv.org/abs/2011.13456) |
-| [ipndm](./ipndm) | [**improved pseudo numerical methods for diffusion models (iPNDM)**](https://github.com/crowsonkb/v-diffusion-pytorch/blob/987f8985e38208345c1959b0ea767a625831cc9b/diffusion/sampling.py#L296) |
-| [score_sde_vp](./score_sde_vp) | [**Variance preserving stochastic differential equation (VP-SDE) scheduler**](https://arxiv.org/abs/2011.13456) |
-| [euler](./euler) | [**Euler scheduler**](https://arxiv.org/abs/2206.00364) |
-| [euler_ancestral](./euler_ancestral) | [**Euler Ancestral scheduler**](https://github.com/crowsonkb/k-diffusion/blob/481677d114f6ea445aa009cf5bd7a9cdee909e47/k_diffusion/sampling.py#L72) |
-| [vq_diffusion](./vq_diffusion) | [**VQDiffusionScheduler**](https://arxiv.org/abs/2111.14822) |
-| [unipc](./unipc) | [**UniPCMultistepScheduler**](https://arxiv.org/abs/2302.04867) |
-| [repaint](./repaint) | [**RePaint scheduler**](https://arxiv.org/abs/2201.09865) |
-
-## API
-
-The core API for any new scheduler must follow a limited structure.
-- Schedulers should provide one or more `def step(...)` functions that should be called to update the generated sample iteratively.
-- Schedulers should provide a `set_timesteps(...)` method that configures the parameters of a schedule function for a specific inference task.
-- Schedulers should be framework-specific.
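-
-Taken together, `set_timesteps(...)` and `step(...)` support the canonical denoising loop. A minimal illustrative sketch, using a freshly initialized `UNet2DModel` as a stand-in for a trained model:
-
-```py
-import torch
-from diffusers import DDPMScheduler, UNet2DModel
-
-model = UNet2DModel(sample_size=64, in_channels=3, out_channels=3)  # untrained stand-in
-scheduler = DDPMScheduler()
-scheduler.set_timesteps(50)
-
-sample = torch.randn(1, 3, 64, 64)
-for t in scheduler.timesteps:
-    with torch.no_grad():
-        noise_pred = model(sample, t).sample                    # model prediction
-    sample = scheduler.step(noise_pred, t, sample).prev_sample  # scheduler update
-```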
-
-The base class [`SchedulerMixin`] implements low level utilities used by multiple schedulers.
-
-### SchedulerMixin
-[[autodoc]] SchedulerMixin
-
-### SchedulerOutput
-The class [`SchedulerOutput`] contains the outputs from any scheduler's `step(...)` call.
-
-[[autodoc]] schedulers.scheduling_utils.SchedulerOutput
-
-### KarrasDiffusionSchedulers
-
-`KarrasDiffusionSchedulers` encompasses the main generalization of schedulers in Diffusers. The schedulers in this class are distinguished, at a high level, by their noise sampling strategy; the type of network and scaling; and finally the training strategy or how the loss is weighted.
-
-The different schedulers, depending on the type of ODE solver, fall into the above taxonomy and provide a good abstraction for the design of the main schedulers implemented in Diffusers. The schedulers in this class are given below:
-
-[[autodoc]] schedulers.scheduling_utils.KarrasDiffusionSchedulers
diff --git a/spaces/Andy1621/uniformer_image_detection/configs/_base_/models/faster_rcnn_r50_caffe_c4.py b/spaces/Andy1621/uniformer_image_detection/configs/_base_/models/faster_rcnn_r50_caffe_c4.py
deleted file mode 100644
index 6e18f71b31b9fb85a6ca7a6b05ff4d2313951750..0000000000000000000000000000000000000000
--- a/spaces/Andy1621/uniformer_image_detection/configs/_base_/models/faster_rcnn_r50_caffe_c4.py
+++ /dev/null
@@ -1,112 +0,0 @@
-# model settings
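-# Faster R-CNN with a caffe-style ResNet-50 "C4" backbone: stages 1-3 feed the RPN,
-# and the res5 stage is reused as the shared RoI head (see shared_head below).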
-norm_cfg = dict(type='BN', requires_grad=False)
-model = dict(
- type='FasterRCNN',
- pretrained='open-mmlab://detectron2/resnet50_caffe',
- backbone=dict(
- type='ResNet',
- depth=50,
- num_stages=3,
- strides=(1, 2, 2),
- dilations=(1, 1, 1),
- out_indices=(2, ),
- frozen_stages=1,
- norm_cfg=norm_cfg,
- norm_eval=True,
- style='caffe'),
- rpn_head=dict(
- type='RPNHead',
- in_channels=1024,
- feat_channels=1024,
- anchor_generator=dict(
- type='AnchorGenerator',
- scales=[2, 4, 8, 16, 32],
- ratios=[0.5, 1.0, 2.0],
- strides=[16]),
- bbox_coder=dict(
- type='DeltaXYWHBBoxCoder',
- target_means=[.0, .0, .0, .0],
- target_stds=[1.0, 1.0, 1.0, 1.0]),
- loss_cls=dict(
- type='CrossEntropyLoss', use_sigmoid=True, loss_weight=1.0),
- loss_bbox=dict(type='L1Loss', loss_weight=1.0)),
- roi_head=dict(
- type='StandardRoIHead',
- shared_head=dict(
- type='ResLayer',
- depth=50,
- stage=3,
- stride=2,
- dilation=1,
- style='caffe',
- norm_cfg=norm_cfg,
- norm_eval=True),
- bbox_roi_extractor=dict(
- type='SingleRoIExtractor',
- roi_layer=dict(type='RoIAlign', output_size=14, sampling_ratio=0),
- out_channels=1024,
- featmap_strides=[16]),
- bbox_head=dict(
- type='BBoxHead',
- with_avg_pool=True,
- roi_feat_size=7,
- in_channels=2048,
- num_classes=80,
- bbox_coder=dict(
- type='DeltaXYWHBBoxCoder',
- target_means=[0., 0., 0., 0.],
- target_stds=[0.1, 0.1, 0.2, 0.2]),
- reg_class_agnostic=False,
- loss_cls=dict(
- type='CrossEntropyLoss', use_sigmoid=False, loss_weight=1.0),
- loss_bbox=dict(type='L1Loss', loss_weight=1.0))),
- # model training and testing settings
- train_cfg=dict(
- rpn=dict(
- assigner=dict(
- type='MaxIoUAssigner',
- pos_iou_thr=0.7,
- neg_iou_thr=0.3,
- min_pos_iou=0.3,
- match_low_quality=True,
- ignore_iof_thr=-1),
- sampler=dict(
- type='RandomSampler',
- num=256,
- pos_fraction=0.5,
- neg_pos_ub=-1,
- add_gt_as_proposals=False),
- allowed_border=0,
- pos_weight=-1,
- debug=False),
- rpn_proposal=dict(
- nms_pre=12000,
- max_per_img=2000,
- nms=dict(type='nms', iou_threshold=0.7),
- min_bbox_size=0),
- rcnn=dict(
- assigner=dict(
- type='MaxIoUAssigner',
- pos_iou_thr=0.5,
- neg_iou_thr=0.5,
- min_pos_iou=0.5,
- match_low_quality=False,
- ignore_iof_thr=-1),
- sampler=dict(
- type='RandomSampler',
- num=512,
- pos_fraction=0.25,
- neg_pos_ub=-1,
- add_gt_as_proposals=True),
- pos_weight=-1,
- debug=False)),
- test_cfg=dict(
- rpn=dict(
- nms_pre=6000,
- max_per_img=1000,
- nms=dict(type='nms', iou_threshold=0.7),
- min_bbox_size=0),
- rcnn=dict(
- score_thr=0.05,
- nms=dict(type='nms', iou_threshold=0.5),
- max_per_img=100)))
diff --git a/spaces/AnishKumbhar/ChatBot/text-generation-webui-main/extensions/superboogav2/data_processor.py b/spaces/AnishKumbhar/ChatBot/text-generation-webui-main/extensions/superboogav2/data_processor.py
deleted file mode 100644
index f019f427fe43ae6169be835679a6d07e938a2753..0000000000000000000000000000000000000000
--- a/spaces/AnishKumbhar/ChatBot/text-generation-webui-main/extensions/superboogav2/data_processor.py
+++ /dev/null
@@ -1,209 +0,0 @@
-"""
-This module is responsible for processing the corpus and feeding it into chromaDB. It receives a corpus of text,
-splits it into chunks of a specified length, and appends surrounding context to each chunk.
-Only full words are included.
-"""
-
-import re
-import bisect
-
-import extensions.superboogav2.parameters as parameters
-
-from .data_preprocessor import TextPreprocessorBuilder, TextSummarizer
-from .chromadb import ChromaCollector
-
-def preprocess_text_no_summary(text) -> str:
- builder = TextPreprocessorBuilder(text)
- if parameters.should_to_lower():
- builder.to_lower()
-
- if parameters.should_remove_punctuation():
- builder.remove_punctuation()
-
- if parameters.should_remove_specific_pos():
- builder.remove_specific_pos()
-
- if parameters.should_remove_stopwords():
-        builder.remove_stopwords()
-
- if parameters.should_lemmatize():
- builder.lemmatize()
-
- if parameters.should_merge_spaces():
-        builder.merge_spaces()
-
- if parameters.should_strip():
- builder.strip()
-
- if parameters.get_num_conversion_strategy():
- if parameters.get_num_conversion_strategy() == parameters.NUM_TO_WORD_METHOD:
- builder.num_to_word(parameters.get_min_num_length())
- elif parameters.get_num_conversion_strategy() == parameters.NUM_TO_CHAR_METHOD:
- builder.num_to_char(parameters.get_min_num_length())
- elif parameters.get_num_conversion_strategy() == parameters.NUM_TO_CHAR_LONG_METHOD:
- builder.num_to_char_long(parameters.get_min_num_length())
-
- return builder.build()
-
-
-def preprocess_text(text) -> list[str]:
- important_sentences = TextSummarizer.process_long_text(text, parameters.get_min_num_sentences())
- return [preprocess_text_no_summary(sent) for sent in important_sentences]
-
-
-def _create_chunks_with_context(corpus, chunk_len, context_left, context_right):
- """
- This function takes a corpus of text and splits it into chunks of a specified length,
- then adds a specified amount of context to each chunk. The context is added by first
- going backwards from the start of the chunk and then going forwards from the end of the
- chunk, ensuring that the context includes only whole words and that the total context length
- does not exceed the specified limit. This function uses binary search for efficiency.
-
- Returns:
- chunks (list of str): The chunks of text.
- chunks_with_context (list of str): The chunks of text with added context.
- chunk_with_context_start_indices (list of int): The starting indices of each chunk with context in the corpus.
- """
- words = re.split('(\\s+)', corpus)
- word_start_indices = [0]
- current_index = 0
-
- for word in words:
- current_index += len(word)
- word_start_indices.append(current_index)
-
- chunks, chunk_lengths, chunk_start_indices, chunk_with_context_start_indices = [], [], [], []
- current_length = 0
- current_index = 0
- chunk = []
-
- for word in words:
- if current_length + len(word) > chunk_len:
- chunks.append(''.join(chunk))
- chunk_lengths.append(current_length)
- chunk_start_indices.append(current_index - current_length)
- chunk = [word]
- current_length = len(word)
- else:
- chunk.append(word)
- current_length += len(word)
- current_index += len(word)
-
- if chunk:
- chunks.append(''.join(chunk))
- chunk_lengths.append(current_length)
- chunk_start_indices.append(current_index - current_length)
-
- chunks_with_context = []
- for start_index, chunk_length in zip(chunk_start_indices, chunk_lengths):
- context_start_index = bisect.bisect_right(word_start_indices, start_index - context_left)
- context_end_index = bisect.bisect_left(word_start_indices, start_index + chunk_length + context_right)
-
- # Combine all the words in the context range (before, chunk, and after)
- chunk_with_context = ''.join(words[context_start_index:context_end_index])
- chunks_with_context.append(chunk_with_context)
-
- # Determine the start index of the chunk with context
- chunk_with_context_start_index = word_start_indices[context_start_index]
- chunk_with_context_start_indices.append(chunk_with_context_start_index)
-
- return chunks, chunks_with_context, chunk_with_context_start_indices
-
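-# Hedged, hand-checked illustration (hypothetical values): with
-# corpus="aa bb cc dd", chunk_len=5, context_left=3, context_right=3, the
-# chunks come out as ['aa bb', ' cc ', 'dd'], and the first chunk's context
-# expands on whole-word boundaries to 'aa bb cc'.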
-
-def _clear_chunks(data_chunks, data_chunks_with_context, data_chunk_starting_indices):
- distinct_data_chunks = []
- distinct_data_chunks_with_context = []
- distinct_data_chunk_starting_indices = []
-
- seen_chunks = dict()
-
- for chunk, context, index in zip(data_chunks, data_chunks_with_context, data_chunk_starting_indices):
- # Skip the chunk if it does not contain any alphanumeric characters
- if not any(char.isalnum() for char in chunk):
- continue
-
- seen_chunk_start = seen_chunks.get(chunk)
- if seen_chunk_start is not None:
- # If we've already seen this exact chunk and it starts very close to the seen occurrence, skip it.
- if abs(seen_chunk_start - index) < parameters.get_delta_start():
- continue
-
- distinct_data_chunks.append(chunk)
- distinct_data_chunks_with_context.append(context)
- distinct_data_chunk_starting_indices.append(index)
-
- seen_chunks[chunk] = index
-
- return distinct_data_chunks, distinct_data_chunks_with_context, distinct_data_chunk_starting_indices
-
-
-def process_and_add_to_collector(corpus: str, collector: ChromaCollector, clear_collector_before_adding: bool, metadata: dict):
- # Defining variables
- chunk_lens = [int(val.strip()) for val in parameters.get_chunk_len().split(',')]
- context_len = [int(val.strip()) for val in parameters.get_context_len().split(',')]
- if len(context_len) >= 3:
- raise ValueError(f"Context len has too many values: {len(context_len)}")
- if len(context_len) == 2:
- context_left = context_len[0]
- context_right = context_len[1]
- else:
- context_left = context_right = context_len[0]
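- # Hedged illustration: get_context_len() == "30,60" yields context_left=30
- # and context_right=60, while a single value such as "40" is used for both.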
-
- data_chunks = []
- data_chunks_with_context = []
- data_chunk_starting_indices = []
-
- # Handling chunk_regex
- if parameters.get_chunk_regex():
- if parameters.get_chunk_separator():
- cumulative_length = 0 # This variable will store the length of the processed corpus
- sections = corpus.split(parameters.get_chunk_separator())
- for section in sections:
- special_chunks = list(re.finditer(parameters.get_chunk_regex(), section))
- for match in special_chunks:
- chunk = match.group(0)
- start_index = match.start()
- end_index = start_index + len(chunk)
- context = section[max(0, start_index - context_left):min(len(section), end_index + context_right)]
- data_chunks.append(chunk)
- data_chunks_with_context.append(context)
- data_chunk_starting_indices.append(cumulative_length + max(0, start_index - context_left))
- cumulative_length += len(section) + len(parameters.get_chunk_separator()) # Update the length of the processed corpus
- else:
- special_chunks = list(re.finditer(parameters.get_chunk_regex(), corpus))
- for match in special_chunks:
- chunk = match.group(0)
- start_index = match.start()
- end_index = start_index + len(chunk)
- context = corpus[max(0, start_index - context_left):min(len(corpus), end_index + context_right)]
- data_chunks.append(chunk)
- data_chunks_with_context.append(context)
- data_chunk_starting_indices.append(max(0, start_index - context_left))
-
- for chunk_len in chunk_lens:
- # Breaking the data into chunks and adding those to the db
- if parameters.get_chunk_separator():
- cumulative_length = 0 # This variable will store the length of the processed corpus
- sections = corpus.split(parameters.get_chunk_separator())
- for section in sections:
- chunks, chunks_with_context, context_start_indices = _create_chunks_with_context(section, chunk_len, context_left, context_right)
- context_start_indices = [cumulative_length + i for i in context_start_indices] # Add the length of the processed corpus to each start index
- data_chunks.extend(chunks)
- data_chunks_with_context.extend(chunks_with_context)
- data_chunk_starting_indices.extend(context_start_indices)
- cumulative_length += len(section) + len(parameters.get_chunk_separator()) # Update the length of the processed corpus
- else:
- chunks, chunks_with_context, context_start_indices = _create_chunks_with_context(corpus, chunk_len, context_left, context_right)
- data_chunks.extend(chunks)
- data_chunks_with_context.extend(chunks_with_context)
- data_chunk_starting_indices.extend(context_start_indices)
-
- data_chunks = [preprocess_text_no_summary(chunk) for chunk in data_chunks]
-
- data_chunks, data_chunks_with_context, data_chunk_starting_indices = _clear_chunks(
- data_chunks, data_chunks_with_context, data_chunk_starting_indices
- )
-
- if clear_collector_before_adding:
- collector.clear()
- collector.add(data_chunks, data_chunks_with_context, data_chunk_starting_indices, [metadata]*len(data_chunks) if metadata is not None else None)
\ No newline at end of file
diff --git a/spaces/Archan/ArXivAudio/README.md b/spaces/Archan/ArXivAudio/README.md
deleted file mode 100644
index d2eacb6a8f9a3d48d7fe7f0e03def3833855704f..0000000000000000000000000000000000000000
--- a/spaces/Archan/ArXivAudio/README.md
+++ /dev/null
@@ -1,13 +0,0 @@
----
-title: ArXiv Audio
-emoji: 🖨️
-colorFrom: cyan
-colorTo: red
-sdk: streamlit
-sdk_version: 1.10.0
-app_file: app.py
-pinned: false
-license: apache-2.0
----
-
-Check out the configuration reference at https://huggingface.co/docs/hub/spaces-config-reference
diff --git a/spaces/Ataturk-Chatbot/HuggingFaceChat/venv/lib/python3.11/site-packages/pip/_internal/utils/filesystem.py b/spaces/Ataturk-Chatbot/HuggingFaceChat/venv/lib/python3.11/site-packages/pip/_internal/utils/filesystem.py
deleted file mode 100644
index 83c2df75b963e5866b63aaf0f4446a8ca61aebce..0000000000000000000000000000000000000000
--- a/spaces/Ataturk-Chatbot/HuggingFaceChat/venv/lib/python3.11/site-packages/pip/_internal/utils/filesystem.py
+++ /dev/null
@@ -1,153 +0,0 @@
-import fnmatch
-import os
-import os.path
-import random
-import sys
-from contextlib import contextmanager
-from tempfile import NamedTemporaryFile
-from typing import Any, BinaryIO, Generator, List, Union, cast
-
-from pip._vendor.tenacity import retry, stop_after_delay, wait_fixed
-
-from pip._internal.utils.compat import get_path_uid
-from pip._internal.utils.misc import format_size
-
-
-def check_path_owner(path: str) -> bool:
- # If we don't have a way to check the effective uid of this process, then
- # we'll just assume that we own the directory.
- if sys.platform == "win32" or not hasattr(os, "geteuid"):
- return True
-
- assert os.path.isabs(path)
-
- previous = None
- while path != previous:
- if os.path.lexists(path):
- # Check if path is writable by current user.
- if os.geteuid() == 0:
- # Special handling for root user in order to handle properly
- # cases where users use sudo without -H flag.
- try:
- path_uid = get_path_uid(path)
- except OSError:
- return False
- return path_uid == 0
- else:
- return os.access(path, os.W_OK)
- else:
- previous, path = path, os.path.dirname(path)
- return False # assume we don't own the path
-
-
-@contextmanager
-def adjacent_tmp_file(path: str, **kwargs: Any) -> Generator[BinaryIO, None, None]:
- """Return a file-like object pointing to a tmp file next to path.
-
- The file is created securely and is ensured to be written to disk
- after the context reaches its end.
-
- kwargs will be passed to tempfile.NamedTemporaryFile to control
- the way the temporary file will be opened.
- """
- with NamedTemporaryFile(
- delete=False,
- dir=os.path.dirname(path),
- prefix=os.path.basename(path),
- suffix=".tmp",
- **kwargs,
- ) as f:
- result = cast(BinaryIO, f)
- try:
- yield result
- finally:
- result.flush()
- os.fsync(result.fileno())
-
-
-# Tenacity raises RetryError by default, explicitly raise the original exception
-_replace_retry = retry(reraise=True, stop=stop_after_delay(1), wait=wait_fixed(0.25))
-
-replace = _replace_retry(os.replace)
-
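-# Hedged usage sketch (hypothetical path): stage the bytes in an adjacent temp
-# file, then atomically swap it into place with the retrying `replace` above.
-#
-# with adjacent_tmp_file("cache/index.json") as f:
-#     f.write(b"{}")
-# replace(f.name, "cache/index.json")
-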
-
-# test_writable_dir and _test_writable_dir_win are copied from Flit,
-# with the author's agreement to also place them under pip's license.
-def test_writable_dir(path: str) -> bool:
- """Check if a directory is writable.
-
- Uses os.access() on POSIX, tries creating files on Windows.
- """
- # If the directory doesn't exist, find the closest parent that does.
- while not os.path.isdir(path):
- parent = os.path.dirname(path)
- if parent == path:
- break # Should never get here, but infinite loops are bad
- path = parent
-
- if os.name == "posix":
- return os.access(path, os.W_OK)
-
- return _test_writable_dir_win(path)
-
-
-def _test_writable_dir_win(path: str) -> bool:
- # os.access doesn't work on Windows: http://bugs.python.org/issue2528
- # and we can't use tempfile: http://bugs.python.org/issue22107
- basename = "accesstest_deleteme_fishfingers_custard_"
- alphabet = "abcdefghijklmnopqrstuvwxyz0123456789"
- for _ in range(10):
- name = basename + "".join(random.choice(alphabet) for _ in range(6))
- file = os.path.join(path, name)
- try:
- fd = os.open(file, os.O_RDWR | os.O_CREAT | os.O_EXCL)
- except FileExistsError:
- pass
- except PermissionError:
- # This could be because there's a directory with the same name.
- # But it's highly unlikely there's a directory called that,
- # so we'll assume it's because the parent dir is not writable.
- # This could as well be because the parent dir is not readable,
- # due to non-privileged user access.
- return False
- else:
- os.close(fd)
- os.unlink(file)
- return True
-
- # This should never be reached
- raise OSError("Unexpected condition testing for writable directory")
-
-
-def find_files(path: str, pattern: str) -> List[str]:
- """Returns a list of absolute paths of files beneath path, recursively,
- with filenames which match the UNIX-style shell glob pattern."""
- result: List[str] = []
- for root, _, files in os.walk(path):
- matches = fnmatch.filter(files, pattern)
- result.extend(os.path.join(root, f) for f in matches)
- return result
-
-
-def file_size(path: str) -> Union[int, float]:
- # If it's a symlink, return 0.
- if os.path.islink(path):
- return 0
- return os.path.getsize(path)
-
-
-def format_file_size(path: str) -> str:
- return format_size(file_size(path))
-
-
-def directory_size(path: str) -> Union[int, float]:
- size = 0.0
- for root, _dirs, files in os.walk(path):
- for filename in files:
- file_path = os.path.join(root, filename)
- size += file_size(file_path)
- return size
-
-
-def format_directory_size(path: str) -> str:
- return format_size(directory_size(path))
diff --git a/spaces/Ataturk-Chatbot/HuggingFaceChat/venv/lib/python3.11/site-packages/pip/_vendor/pygments/formatters/terminal.py b/spaces/Ataturk-Chatbot/HuggingFaceChat/venv/lib/python3.11/site-packages/pip/_vendor/pygments/formatters/terminal.py
deleted file mode 100644
index e0bda16a236bfcf2c17068f2ff0cb8551830244a..0000000000000000000000000000000000000000
--- a/spaces/Ataturk-Chatbot/HuggingFaceChat/venv/lib/python3.11/site-packages/pip/_vendor/pygments/formatters/terminal.py
+++ /dev/null
@@ -1,127 +0,0 @@
-"""
- pygments.formatters.terminal
- ~~~~~~~~~~~~~~~~~~~~~~~~~~~~
-
- Formatter for terminal output with ANSI sequences.
-
- :copyright: Copyright 2006-2022 by the Pygments team, see AUTHORS.
- :license: BSD, see LICENSE for details.
-"""
-
-from pip._vendor.pygments.formatter import Formatter
-from pip._vendor.pygments.token import Keyword, Name, Comment, String, Error, \
- Number, Operator, Generic, Token, Whitespace
-from pip._vendor.pygments.console import ansiformat
-from pip._vendor.pygments.util import get_choice_opt
-
-
-__all__ = ['TerminalFormatter']
-
-
-#: Map token types to a tuple of color values for light and dark
-#: backgrounds.
-TERMINAL_COLORS = {
- Token: ('', ''),
-
- Whitespace: ('gray', 'brightblack'),
- Comment: ('gray', 'brightblack'),
- Comment.Preproc: ('cyan', 'brightcyan'),
- Keyword: ('blue', 'brightblue'),
- Keyword.Type: ('cyan', 'brightcyan'),
- Operator.Word: ('magenta', 'brightmagenta'),
- Name.Builtin: ('cyan', 'brightcyan'),
- Name.Function: ('green', 'brightgreen'),
- Name.Namespace: ('_cyan_', '_brightcyan_'),
- Name.Class: ('_green_', '_brightgreen_'),
- Name.Exception: ('cyan', 'brightcyan'),
- Name.Decorator: ('brightblack', 'gray'),
- Name.Variable: ('red', 'brightred'),
- Name.Constant: ('red', 'brightred'),
- Name.Attribute: ('cyan', 'brightcyan'),
- Name.Tag: ('brightblue', 'brightblue'),
- String: ('yellow', 'yellow'),
- Number: ('blue', 'brightblue'),
-
- Generic.Deleted: ('brightred', 'brightred'),
- Generic.Inserted: ('green', 'brightgreen'),
- Generic.Heading: ('**', '**'),
- Generic.Subheading: ('*magenta*', '*brightmagenta*'),
- Generic.Prompt: ('**', '**'),
- Generic.Error: ('brightred', 'brightred'),
-
- Error: ('_brightred_', '_brightred_'),
-}
-
-
-class TerminalFormatter(Formatter):
- r"""
- Format tokens with ANSI color sequences, for output in a text console.
- Color sequences are terminated at newlines, so that paging the output
- works correctly.
-
- The `get_style_defs()` method doesn't do anything special since there is
- no support for common styles.
-
- Options accepted:
-
- `bg`
- Set to ``"light"`` or ``"dark"`` depending on the terminal's background
- (default: ``"light"``).
-
- `colorscheme`
- A dictionary mapping token types to (lightbg, darkbg) color names or
- ``None`` (default: ``None`` = use builtin colorscheme).
-
- `linenos`
- Set to ``True`` to have line numbers on the terminal output as well
- (default: ``False`` = no line numbers).
- """
- name = 'Terminal'
- aliases = ['terminal', 'console']
- filenames = []
-
- def __init__(self, **options):
- Formatter.__init__(self, **options)
- self.darkbg = get_choice_opt(options, 'bg',
- ['light', 'dark'], 'light') == 'dark'
- self.colorscheme = options.get('colorscheme', None) or TERMINAL_COLORS
- self.linenos = options.get('linenos', False)
- self._lineno = 0
-
- def format(self, tokensource, outfile):
- return Formatter.format(self, tokensource, outfile)
-
- def _write_lineno(self, outfile):
- self._lineno += 1
- outfile.write("%s%04d: " % (self._lineno != 1 and '\n' or '', self._lineno))
-
- def _get_color(self, ttype):
- # self.colorscheme is a dict containing usually generic types, so we
- # have to walk the tree of dots. The base Token type must be a key,
- # even if it's empty string, as in the default above.
- colors = self.colorscheme.get(ttype)
- while colors is None:
- ttype = ttype.parent
- colors = self.colorscheme.get(ttype)
- return colors[self.darkbg]
-
- def format_unencoded(self, tokensource, outfile):
- if self.linenos:
- self._write_lineno(outfile)
-
- for ttype, value in tokensource:
- color = self._get_color(ttype)
-
- for line in value.splitlines(True):
- if color:
- outfile.write(ansiformat(color, line.rstrip('\n')))
- else:
- outfile.write(line.rstrip('\n'))
- if line.endswith('\n'):
- if self.linenos:
- self._write_lineno(outfile)
- else:
- outfile.write('\n')
-
- if self.linenos:
- outfile.write("\n")
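-
-# Hedged usage sketch (assumes the vendored pygments public API; the snippet
-# being highlighted is arbitrary):
-#
-# import sys
-# from pip._vendor.pygments import highlight
-# from pip._vendor.pygments.lexers import PythonLexer
-#
-# highlight("print('hi')", PythonLexer(), TerminalFormatter(bg="dark"), sys.stdout)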
diff --git a/spaces/Ataturk-Chatbot/HuggingFaceChat/venv/lib/python3.11/site-packages/pip/_vendor/requests/models.py b/spaces/Ataturk-Chatbot/HuggingFaceChat/venv/lib/python3.11/site-packages/pip/_vendor/requests/models.py
deleted file mode 100644
index 76e6f199c0042cec6500f53c062ff9ea1033e79d..0000000000000000000000000000000000000000
--- a/spaces/Ataturk-Chatbot/HuggingFaceChat/venv/lib/python3.11/site-packages/pip/_vendor/requests/models.py
+++ /dev/null
@@ -1,1034 +0,0 @@
-"""
-requests.models
-~~~~~~~~~~~~~~~
-
-This module contains the primary objects that power Requests.
-"""
-
-import datetime
-
-# Import encoding now, to avoid implicit import later.
-# Implicit import within threads may cause LookupError when the standard library is in a ZIP,
-# such as in Embedded Python. See https://github.com/psf/requests/issues/3578.
-import encodings.idna # noqa: F401
-from io import UnsupportedOperation
-
-from pip._vendor.urllib3.exceptions import (
- DecodeError,
- LocationParseError,
- ProtocolError,
- ReadTimeoutError,
- SSLError,
-)
-from pip._vendor.urllib3.fields import RequestField
-from pip._vendor.urllib3.filepost import encode_multipart_formdata
-from pip._vendor.urllib3.util import parse_url
-
-from ._internal_utils import to_native_string, unicode_is_ascii
-from .auth import HTTPBasicAuth
-from .compat import (
- Callable,
- JSONDecodeError,
- Mapping,
- basestring,
- builtin_str,
- chardet,
- cookielib,
-)
-from .compat import json as complexjson
-from .compat import urlencode, urlsplit, urlunparse
-from .cookies import _copy_cookie_jar, cookiejar_from_dict, get_cookie_header
-from .exceptions import (
- ChunkedEncodingError,
- ConnectionError,
- ContentDecodingError,
- HTTPError,
- InvalidJSONError,
- InvalidURL,
-)
-from .exceptions import JSONDecodeError as RequestsJSONDecodeError
-from .exceptions import MissingSchema
-from .exceptions import SSLError as RequestsSSLError
-from .exceptions import StreamConsumedError
-from .hooks import default_hooks
-from .status_codes import codes
-from .structures import CaseInsensitiveDict
-from .utils import (
- check_header_validity,
- get_auth_from_url,
- guess_filename,
- guess_json_utf,
- iter_slices,
- parse_header_links,
- requote_uri,
- stream_decode_response_unicode,
- super_len,
- to_key_val_list,
-)
-
-#: The set of HTTP status codes that indicate an automatically
-#: processable redirect.
-REDIRECT_STATI = (
- codes.moved, # 301
- codes.found, # 302
- codes.other, # 303
- codes.temporary_redirect, # 307
- codes.permanent_redirect, # 308
-)
-
-DEFAULT_REDIRECT_LIMIT = 30
-CONTENT_CHUNK_SIZE = 10 * 1024
-ITER_CHUNK_SIZE = 512
-
-
-class RequestEncodingMixin:
- @property
- def path_url(self):
- """Build the path URL to use."""
-
- url = []
-
- p = urlsplit(self.url)
-
- path = p.path
- if not path:
- path = "/"
-
- url.append(path)
-
- query = p.query
- if query:
- url.append("?")
- url.append(query)
-
- return "".join(url)
-
- @staticmethod
- def _encode_params(data):
- """Encode parameters in a piece of data.
-
- Will successfully encode parameters when passed as a dict or a list of
- 2-tuples. Order is retained if data is a list of 2-tuples but arbitrary
- if parameters are supplied as a dict.
- """
-
- if isinstance(data, (str, bytes)):
- return data
- elif hasattr(data, "read"):
- return data
- elif hasattr(data, "__iter__"):
- result = []
- for k, vs in to_key_val_list(data):
- if isinstance(vs, basestring) or not hasattr(vs, "__iter__"):
- vs = [vs]
- for v in vs:
- if v is not None:
- result.append(
- (
- k.encode("utf-8") if isinstance(k, str) else k,
- v.encode("utf-8") if isinstance(v, str) else v,
- )
- )
- return urlencode(result, doseq=True)
- else:
- return data
-
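- # Hedged illustration: _encode_params({"a": ["1", "2"], "b": "3"}) returns
- # "a=1&a=2&b=3"; list values expand into repeated keys via urlencode(doseq=True).
-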
- @staticmethod
- def _encode_files(files, data):
- """Build the body for a multipart/form-data request.
-
- Will successfully encode files when passed as a dict or a list of
- tuples. Order is retained if data is a list of tuples but arbitrary
- if parameters are supplied as a dict.
- The tuples may be 2-tuples (filename, fileobj), 3-tuples (filename, fileobj, content_type)
- or 4-tuples (filename, fileobj, content_type, custom_headers).
- """
- if not files:
- raise ValueError("Files must be provided.")
- elif isinstance(data, basestring):
- raise ValueError("Data must not be a string.")
-
- new_fields = []
- fields = to_key_val_list(data or {})
- files = to_key_val_list(files or {})
-
- for field, val in fields:
- if isinstance(val, basestring) or not hasattr(val, "__iter__"):
- val = [val]
- for v in val:
- if v is not None:
- # Don't call str() on bytestrings: in Py3 it all goes wrong.
- if not isinstance(v, bytes):
- v = str(v)
-
- new_fields.append(
- (
- field.decode("utf-8")
- if isinstance(field, bytes)
- else field,
- v.encode("utf-8") if isinstance(v, str) else v,
- )
- )
-
- for (k, v) in files:
- # support for explicit filename
- ft = None
- fh = None
- if isinstance(v, (tuple, list)):
- if len(v) == 2:
- fn, fp = v
- elif len(v) == 3:
- fn, fp, ft = v
- else:
- fn, fp, ft, fh = v
- else:
- fn = guess_filename(v) or k
- fp = v
-
- if isinstance(fp, (str, bytes, bytearray)):
- fdata = fp
- elif hasattr(fp, "read"):
- fdata = fp.read()
- elif fp is None:
- continue
- else:
- fdata = fp
-
- rf = RequestField(name=k, data=fdata, filename=fn, headers=fh)
- rf.make_multipart(content_type=ft)
- new_fields.append(rf)
-
- body, content_type = encode_multipart_formdata(new_fields)
-
- return body, content_type
-
-
-class RequestHooksMixin:
- def register_hook(self, event, hook):
- """Properly register a hook."""
-
- if event not in self.hooks:
- raise ValueError(f'Unsupported event specified, with event name "{event}"')
-
- if isinstance(hook, Callable):
- self.hooks[event].append(hook)
- elif hasattr(hook, "__iter__"):
- self.hooks[event].extend(h for h in hook if isinstance(h, Callable))
-
- def deregister_hook(self, event, hook):
- """Deregister a previously registered hook.
- Returns True if the hook existed, False if not.
- """
-
- try:
- self.hooks[event].remove(hook)
- return True
- except ValueError:
- return False
-
-
-class Request(RequestHooksMixin):
- """A user-created :class:`Request ` object.
-
- Used to prepare a :class:`PreparedRequest <PreparedRequest>`, which is sent to the server.
-
- :param method: HTTP method to use.
- :param url: URL to send.
- :param headers: dictionary of headers to send.
- :param files: dictionary of {filename: fileobject} files to multipart upload.
- :param data: the body to attach to the request. If a dictionary or
- list of tuples ``[(key, value)]`` is provided, form-encoding will
- take place.
- :param json: json for the body to attach to the request (if files or data is not specified).
- :param params: URL parameters to append to the URL. If a dictionary or
- list of tuples ``[(key, value)]`` is provided, form-encoding will
- take place.
- :param auth: Auth handler or (user, pass) tuple.
- :param cookies: dictionary or CookieJar of cookies to attach to this request.
- :param hooks: dictionary of callback hooks, for internal usage.
-
- Usage::
-
- >>> import requests
- >>> req = requests.Request('GET', 'https://httpbin.org/get')
- >>> req.prepare()
- <PreparedRequest [GET]>
- """
-
- def __init__(
- self,
- method=None,
- url=None,
- headers=None,
- files=None,
- data=None,
- params=None,
- auth=None,
- cookies=None,
- hooks=None,
- json=None,
- ):
-
- # Default empty dicts for dict params.
- data = [] if data is None else data
- files = [] if files is None else files
- headers = {} if headers is None else headers
- params = {} if params is None else params
- hooks = {} if hooks is None else hooks
-
- self.hooks = default_hooks()
- for (k, v) in list(hooks.items()):
- self.register_hook(event=k, hook=v)
-
- self.method = method
- self.url = url
- self.headers = headers
- self.files = files
- self.data = data
- self.json = json
- self.params = params
- self.auth = auth
- self.cookies = cookies
-
- def __repr__(self):
- return f""
-
- def prepare(self):
- """Constructs a :class:`PreparedRequest ` for transmission and returns it."""
- p = PreparedRequest()
- p.prepare(
- method=self.method,
- url=self.url,
- headers=self.headers,
- files=self.files,
- data=self.data,
- json=self.json,
- params=self.params,
- auth=self.auth,
- cookies=self.cookies,
- hooks=self.hooks,
- )
- return p
-
-
-class PreparedRequest(RequestEncodingMixin, RequestHooksMixin):
- """The fully mutable :class:`PreparedRequest ` object,
- containing the exact bytes that will be sent to the server.
-
- Instances are generated from a :class:`Request <Request>` object, and
- should not be instantiated manually; doing so may produce undesirable
- effects.
-
- Usage::
-
- >>> import requests
- >>> req = requests.Request('GET', 'https://httpbin.org/get')
- >>> r = req.prepare()
- >>> r
- <PreparedRequest [GET]>
-
- >>> s = requests.Session()
- >>> s.send(r)
- <Response [200]>
- """
-
- def __init__(self):
- #: HTTP verb to send to the server.
- self.method = None
- #: HTTP URL to send the request to.
- self.url = None
- #: dictionary of HTTP headers.
- self.headers = None
- # The `CookieJar` used to create the Cookie header will be stored here
- # after prepare_cookies is called
- self._cookies = None
- #: request body to send to the server.
- self.body = None
- #: dictionary of callback hooks, for internal usage.
- self.hooks = default_hooks()
- #: integer denoting starting position of a readable file-like body.
- self._body_position = None
-
- def prepare(
- self,
- method=None,
- url=None,
- headers=None,
- files=None,
- data=None,
- params=None,
- auth=None,
- cookies=None,
- hooks=None,
- json=None,
- ):
- """Prepares the entire request with the given parameters."""
-
- self.prepare_method(method)
- self.prepare_url(url, params)
- self.prepare_headers(headers)
- self.prepare_cookies(cookies)
- self.prepare_body(data, files, json)
- self.prepare_auth(auth, url)
-
- # Note that prepare_auth must be last to enable authentication schemes
- # such as OAuth to work on a fully prepared request.
-
- # This MUST go after prepare_auth. Authenticators could add a hook
- self.prepare_hooks(hooks)
-
- def __repr__(self):
- return f""
-
- def copy(self):
- p = PreparedRequest()
- p.method = self.method
- p.url = self.url
- p.headers = self.headers.copy() if self.headers is not None else None
- p._cookies = _copy_cookie_jar(self._cookies)
- p.body = self.body
- p.hooks = self.hooks
- p._body_position = self._body_position
- return p
-
- def prepare_method(self, method):
- """Prepares the given HTTP method."""
- self.method = method
- if self.method is not None:
- self.method = to_native_string(self.method.upper())
-
- @staticmethod
- def _get_idna_encoded_host(host):
- from pip._vendor import idna
-
- try:
- host = idna.encode(host, uts46=True).decode("utf-8")
- except idna.IDNAError:
- raise UnicodeError
- return host
-
- def prepare_url(self, url, params):
- """Prepares the given HTTP URL."""
- #: Accept objects that have string representations.
- #: We're unable to blindly call unicode/str functions
- #: as this will include the bytestring indicator (b'')
- #: on python 3.x.
- #: https://github.com/psf/requests/pull/2238
- if isinstance(url, bytes):
- url = url.decode("utf8")
- else:
- url = str(url)
-
- # Remove leading whitespaces from url
- url = url.lstrip()
-
- # Don't do any URL preparation for non-HTTP schemes like `mailto`,
- # `data` etc to work around exceptions from `url_parse`, which
- # handles RFC 3986 only.
- if ":" in url and not url.lower().startswith("http"):
- self.url = url
- return
-
- # Support for unicode domain names and paths.
- try:
- scheme, auth, host, port, path, query, fragment = parse_url(url)
- except LocationParseError as e:
- raise InvalidURL(*e.args)
-
- if not scheme:
- raise MissingSchema(
- f"Invalid URL {url!r}: No scheme supplied. "
- f"Perhaps you meant https://{url}?"
- )
-
- if not host:
- raise InvalidURL(f"Invalid URL {url!r}: No host supplied")
-
- # In general, we want to try IDNA encoding the hostname if the string contains
- # non-ASCII characters. This allows users to automatically get the correct IDNA
- # behaviour. For strings containing only ASCII characters, we need to also verify
- # it doesn't start with a wildcard (*), before allowing the unencoded hostname.
- if not unicode_is_ascii(host):
- try:
- host = self._get_idna_encoded_host(host)
- except UnicodeError:
- raise InvalidURL("URL has an invalid label.")
- elif host.startswith(("*", ".")):
- raise InvalidURL("URL has an invalid label.")
-
- # Carefully reconstruct the network location
- netloc = auth or ""
- if netloc:
- netloc += "@"
- netloc += host
- if port:
- netloc += f":{port}"
-
- # Bare domains aren't valid URLs.
- if not path:
- path = "/"
-
- if isinstance(params, (str, bytes)):
- params = to_native_string(params)
-
- enc_params = self._encode_params(params)
- if enc_params:
- if query:
- query = f"{query}&{enc_params}"
- else:
- query = enc_params
-
- url = requote_uri(urlunparse([scheme, netloc, path, None, query, fragment]))
- self.url = url
-
- def prepare_headers(self, headers):
- """Prepares the given HTTP headers."""
-
- self.headers = CaseInsensitiveDict()
- if headers:
- for header in headers.items():
- # Raise exception on invalid header value.
- check_header_validity(header)
- name, value = header
- self.headers[to_native_string(name)] = value
-
- def prepare_body(self, data, files, json=None):
- """Prepares the given HTTP body data."""
-
- # Check if file, fo, generator, iterator.
- # If not, run through normal process.
-
- # Nottin' on you.
- body = None
- content_type = None
-
- if not data and json is not None:
- # urllib3 requires a bytes-like body. Python 2's json.dumps
- # provides this natively, but Python 3 gives a Unicode string.
- content_type = "application/json"
-
- try:
- body = complexjson.dumps(json, allow_nan=False)
- except ValueError as ve:
- raise InvalidJSONError(ve, request=self)
-
- if not isinstance(body, bytes):
- body = body.encode("utf-8")
-
- is_stream = all(
- [
- hasattr(data, "__iter__"),
- not isinstance(data, (basestring, list, tuple, Mapping)),
- ]
- )
-
- if is_stream:
- try:
- length = super_len(data)
- except (TypeError, AttributeError, UnsupportedOperation):
- length = None
-
- body = data
-
- if getattr(body, "tell", None) is not None:
- # Record the current file position before reading.
- # This will allow us to rewind a file in the event
- # of a redirect.
- try:
- self._body_position = body.tell()
- except OSError:
- # This differentiates from None, allowing us to catch
- # a failed `tell()` later when trying to rewind the body
- self._body_position = object()
-
- if files:
- raise NotImplementedError(
- "Streamed bodies and files are mutually exclusive."
- )
-
- if length:
- self.headers["Content-Length"] = builtin_str(length)
- else:
- self.headers["Transfer-Encoding"] = "chunked"
- else:
- # Multi-part file uploads.
- if files:
- (body, content_type) = self._encode_files(files, data)
- else:
- if data:
- body = self._encode_params(data)
- if isinstance(data, basestring) or hasattr(data, "read"):
- content_type = None
- else:
- content_type = "application/x-www-form-urlencoded"
-
- self.prepare_content_length(body)
-
- # Add content-type if it wasn't explicitly provided.
- if content_type and ("content-type" not in self.headers):
- self.headers["Content-Type"] = content_type
-
- self.body = body
-
- def prepare_content_length(self, body):
- """Prepare Content-Length header based on request method and body"""
- if body is not None:
- length = super_len(body)
- if length:
- # If length exists, set it. Otherwise, we fallback
- # to Transfer-Encoding: chunked.
- self.headers["Content-Length"] = builtin_str(length)
- elif (
- self.method not in ("GET", "HEAD")
- and self.headers.get("Content-Length") is None
- ):
- # Set Content-Length to 0 for methods that can have a body
- # but don't provide one. (i.e. not GET or HEAD)
- self.headers["Content-Length"] = "0"
-
- def prepare_auth(self, auth, url=""):
- """Prepares the given HTTP auth data."""
-
- # If no Auth is explicitly provided, extract it from the URL first.
- if auth is None:
- url_auth = get_auth_from_url(self.url)
- auth = url_auth if any(url_auth) else None
-
- if auth:
- if isinstance(auth, tuple) and len(auth) == 2:
- # special-case basic HTTP auth
- auth = HTTPBasicAuth(*auth)
-
- # Allow auth to make its changes.
- r = auth(self)
-
- # Update self to reflect the auth changes.
- self.__dict__.update(r.__dict__)
-
- # Recompute Content-Length
- self.prepare_content_length(self.body)
-
- def prepare_cookies(self, cookies):
- """Prepares the given HTTP cookie data.
-
- This function eventually generates a ``Cookie`` header from the
- given cookies using cookielib. Due to cookielib's design, the header
- will not be regenerated if it already exists, meaning this function
- can only be called once for the life of the
- :class:`PreparedRequest <PreparedRequest>` object. Any subsequent calls
- to ``prepare_cookies`` will have no actual effect, unless the "Cookie"
- header is removed beforehand.
- """
- if isinstance(cookies, cookielib.CookieJar):
- self._cookies = cookies
- else:
- self._cookies = cookiejar_from_dict(cookies)
-
- cookie_header = get_cookie_header(self._cookies, self)
- if cookie_header is not None:
- self.headers["Cookie"] = cookie_header
-
- def prepare_hooks(self, hooks):
- """Prepares the given hooks."""
- # hooks can be passed as None to the prepare method and to this
- # method. To prevent iterating over None, simply use an empty list
- # if hooks is False-y
- hooks = hooks or []
- for event in hooks:
- self.register_hook(event, hooks[event])
-
-
-class Response:
- """The :class:`Response ` object, which contains a
- server's response to an HTTP request.
- """
-
- __attrs__ = [
- "_content",
- "status_code",
- "headers",
- "url",
- "history",
- "encoding",
- "reason",
- "cookies",
- "elapsed",
- "request",
- ]
-
- def __init__(self):
- self._content = False
- self._content_consumed = False
- self._next = None
-
- #: Integer Code of responded HTTP Status, e.g. 404 or 200.
- self.status_code = None
-
- #: Case-insensitive Dictionary of Response Headers.
- #: For example, ``headers['content-encoding']`` will return the
- #: value of a ``'Content-Encoding'`` response header.
- self.headers = CaseInsensitiveDict()
-
- #: File-like object representation of response (for advanced usage).
- #: Use of ``raw`` requires that ``stream=True`` be set on the request.
- #: This requirement does not apply for use internally to Requests.
- self.raw = None
-
- #: Final URL location of Response.
- self.url = None
-
- #: Encoding to decode with when accessing r.text.
- self.encoding = None
-
- #: A list of :class:`Response <Response>` objects from
- #: the history of the Request. Any redirect responses will end
- #: up here. The list is sorted from the oldest to the most recent request.
- self.history = []
-
- #: Textual reason of responded HTTP Status, e.g. "Not Found" or "OK".
- self.reason = None
-
- #: A CookieJar of Cookies the server sent back.
- self.cookies = cookiejar_from_dict({})
-
- #: The amount of time elapsed between sending the request
- #: and the arrival of the response (as a timedelta).
- #: This property specifically measures the time taken between sending
- #: the first byte of the request and finishing parsing the headers. It
- #: is therefore unaffected by consuming the response content or the
- #: value of the ``stream`` keyword argument.
- self.elapsed = datetime.timedelta(0)
-
- #: The :class:`PreparedRequest <PreparedRequest>` object to which this
- #: is a response.
- self.request = None
-
- def __enter__(self):
- return self
-
- def __exit__(self, *args):
- self.close()
-
- def __getstate__(self):
- # Consume everything; accessing the content attribute makes
- # sure the content has been fully read.
- if not self._content_consumed:
- self.content
-
- return {attr: getattr(self, attr, None) for attr in self.__attrs__}
-
- def __setstate__(self, state):
- for name, value in state.items():
- setattr(self, name, value)
-
- # pickled objects do not have .raw
- setattr(self, "_content_consumed", True)
- setattr(self, "raw", None)
-
- def __repr__(self):
- return f""
-
- def __bool__(self):
- """Returns True if :attr:`status_code` is less than 400.
-
- This attribute checks if the status code of the response is between
- 400 and 600 to see if there was a client error or a server error. If
- the status code is between 200 and 400, this will return True. This
- is **not** a check to see if the response code is ``200 OK``.
- """
- return self.ok
-
- def __nonzero__(self):
- """Returns True if :attr:`status_code` is less than 400.
-
- This attribute checks if the status code of the response is between
- 400 and 600 to see if there was a client error or a server error. If
- the status code is between 200 and 400, this will return True. This
- is **not** a check to see if the response code is ``200 OK``.
- """
- return self.ok
-
- def __iter__(self):
- """Allows you to use a response as an iterator."""
- return self.iter_content(128)
-
- @property
- def ok(self):
- """Returns True if :attr:`status_code` is less than 400, False if not.
-
- This attribute checks if the status code of the response is between
- 400 and 600 to see if there was a client error or a server error. If
- the status code is between 200 and 400, this will return True. This
- is **not** a check to see if the response code is ``200 OK``.
- """
- try:
- self.raise_for_status()
- except HTTPError:
- return False
- return True
-
- @property
- def is_redirect(self):
- """True if this Response is a well-formed HTTP redirect that could have
- been processed automatically (by :meth:`Session.resolve_redirects`).
- """
- return "location" in self.headers and self.status_code in REDIRECT_STATI
-
- @property
- def is_permanent_redirect(self):
- """True if this Response one of the permanent versions of redirect."""
- return "location" in self.headers and self.status_code in (
- codes.moved_permanently,
- codes.permanent_redirect,
- )
-
- @property
- def next(self):
- """Returns a PreparedRequest for the next request in a redirect chain, if there is one."""
- return self._next
-
- @property
- def apparent_encoding(self):
- """The apparent encoding, provided by the charset_normalizer or chardet libraries."""
- return chardet.detect(self.content)["encoding"]
-
- def iter_content(self, chunk_size=1, decode_unicode=False):
- """Iterates over the response data. When stream=True is set on the
- request, this avoids reading the content at once into memory for
- large responses. The chunk size is the number of bytes it should
- read into memory. This is not necessarily the length of each item
- returned as decoding can take place.
-
- chunk_size must be of type int or None. A value of None will
- function differently depending on the value of `stream`.
- stream=True will read data as it arrives in whatever size the
- chunks are received. If stream=False, data is returned as
- a single chunk.
-
- If decode_unicode is True, content will be decoded using the best
- available encoding based on the response.
- """
-
- def generate():
- # Special case for urllib3.
- if hasattr(self.raw, "stream"):
- try:
- yield from self.raw.stream(chunk_size, decode_content=True)
- except ProtocolError as e:
- raise ChunkedEncodingError(e)
- except DecodeError as e:
- raise ContentDecodingError(e)
- except ReadTimeoutError as e:
- raise ConnectionError(e)
- except SSLError as e:
- raise RequestsSSLError(e)
- else:
- # Standard file-like object.
- while True:
- chunk = self.raw.read(chunk_size)
- if not chunk:
- break
- yield chunk
-
- self._content_consumed = True
-
- if self._content_consumed and isinstance(self._content, bool):
- raise StreamConsumedError()
- elif chunk_size is not None and not isinstance(chunk_size, int):
- raise TypeError(
- f"chunk_size must be an int, it is instead a {type(chunk_size)}."
- )
- # simulate reading small chunks of the content
- reused_chunks = iter_slices(self._content, chunk_size)
-
- stream_chunks = generate()
-
- chunks = reused_chunks if self._content_consumed else stream_chunks
-
- if decode_unicode:
- chunks = stream_decode_response_unicode(chunks, self)
-
- return chunks
-
- def iter_lines(
- self, chunk_size=ITER_CHUNK_SIZE, decode_unicode=False, delimiter=None
- ):
- """Iterates over the response data, one line at a time. When
- stream=True is set on the request, this avoids reading the
- content at once into memory for large responses.
-
- .. note:: This method is not reentrant safe.
- """
-
- pending = None
-
- for chunk in self.iter_content(
- chunk_size=chunk_size, decode_unicode=decode_unicode
- ):
-
- if pending is not None:
- chunk = pending + chunk
-
- if delimiter:
- lines = chunk.split(delimiter)
- else:
- lines = chunk.splitlines()
-
- if lines and lines[-1] and chunk and lines[-1][-1] == chunk[-1]:
- pending = lines.pop()
- else:
- pending = None
-
- yield from lines
-
- if pending is not None:
- yield pending
-
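- # Hedged usage sketch (hypothetical URL; `handle` is a placeholder): stream a
- # body line by line without reading it into memory at once.
- #
- # r = requests.get("https://example.com/feed", stream=True)
- # for line in r.iter_lines(decode_unicode=True):
- #     if line:
- #         handle(line)
-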
- @property
- def content(self):
- """Content of the response, in bytes."""
-
- if self._content is False:
- # Read the contents.
- if self._content_consumed:
- raise RuntimeError("The content for this response was already consumed")
-
- if self.status_code == 0 or self.raw is None:
- self._content = None
- else:
- self._content = b"".join(self.iter_content(CONTENT_CHUNK_SIZE)) or b""
-
- self._content_consumed = True
- # don't need to release the connection; that's been handled by urllib3
- # since we exhausted the data.
- return self._content
-
- @property
- def text(self):
- """Content of the response, in unicode.
-
- If Response.encoding is None, encoding will be guessed using
- ``charset_normalizer`` or ``chardet``.
-
- The encoding of the response content is determined based solely on HTTP
- headers, following RFC 2616 to the letter. If you can take advantage of
- non-HTTP knowledge to make a better guess at the encoding, you should
- set ``r.encoding`` appropriately before accessing this property.
- """
-
- # Try charset from content-type
- content = None
- encoding = self.encoding
-
- if not self.content:
- return ""
-
- # Fallback to auto-detected encoding.
- if self.encoding is None:
- encoding = self.apparent_encoding
-
- # Decode unicode from given encoding.
- try:
- content = str(self.content, encoding, errors="replace")
- except (LookupError, TypeError):
- # A LookupError is raised if the encoding was not found which could
- # indicate a misspelling or similar mistake.
- #
- # A TypeError can be raised if encoding is None
- #
- # So we try blindly encoding.
- content = str(self.content, errors="replace")
-
- return content
-
- def json(self, **kwargs):
- r"""Returns the json-encoded content of a response, if any.
-
- :param \*\*kwargs: Optional arguments that ``json.loads`` takes.
- :raises requests.exceptions.JSONDecodeError: If the response body does not
- contain valid json.
- """
-
- if not self.encoding and self.content and len(self.content) > 3:
- # No encoding set. JSON RFC 4627 section 3 states we should expect
- # UTF-8, -16 or -32. Detect which one to use; If the detection or
- # decoding fails, fall back to `self.text` (using charset_normalizer to make
- # a best guess).
- encoding = guess_json_utf(self.content)
- if encoding is not None:
- try:
- return complexjson.loads(self.content.decode(encoding), **kwargs)
- except UnicodeDecodeError:
- # Wrong UTF codec detected; usually because it's not UTF-8
- # but some other 8-bit codec. This is an RFC violation,
- # and the server didn't bother to tell us what codec *was*
- # used.
- pass
- except JSONDecodeError as e:
- raise RequestsJSONDecodeError(e.msg, e.doc, e.pos)
-
- try:
- return complexjson.loads(self.text, **kwargs)
- except JSONDecodeError as e:
- # Catch JSON-related errors and raise as requests.JSONDecodeError
- # This aliases json.JSONDecodeError and simplejson.JSONDecodeError
- raise RequestsJSONDecodeError(e.msg, e.doc, e.pos)
-
- @property
- def links(self):
- """Returns the parsed header links of the response, if any."""
-
- header = self.headers.get("link")
-
- resolved_links = {}
-
- if header:
- links = parse_header_links(header)
-
- for link in links:
- key = link.get("rel") or link.get("url")
- resolved_links[key] = link
-
- return resolved_links
-
- def raise_for_status(self):
- """Raises :class:`HTTPError`, if one occurred."""
-
- http_error_msg = ""
- if isinstance(self.reason, bytes):
- # We attempt to decode utf-8 first because some servers
- # choose to localize their reason strings. If the string
- # isn't utf-8, we fall back to iso-8859-1 for all other
- # encodings. (See PR #3538)
- try:
- reason = self.reason.decode("utf-8")
- except UnicodeDecodeError:
- reason = self.reason.decode("iso-8859-1")
- else:
- reason = self.reason
-
- if 400 <= self.status_code < 500:
- http_error_msg = (
- f"{self.status_code} Client Error: {reason} for url: {self.url}"
- )
-
- elif 500 <= self.status_code < 600:
- http_error_msg = (
- f"{self.status_code} Server Error: {reason} for url: {self.url}"
- )
-
- if http_error_msg:
- raise HTTPError(http_error_msg, response=self)
-
- def close(self):
- """Releases the connection back to the pool. Once this method has been
- called the underlying ``raw`` object must not be accessed again.
-
- *Note: Should not normally need to be called explicitly.*
- """
- if not self._content_consumed:
- self.raw.close()
-
- release_conn = getattr(self.raw, "release_conn", None)
- if release_conn is not None:
- release_conn()
diff --git a/spaces/Ataturk-Chatbot/HuggingFaceChat/venv/lib/python3.11/site-packages/pip/_vendor/tenacity/retry.py b/spaces/Ataturk-Chatbot/HuggingFaceChat/venv/lib/python3.11/site-packages/pip/_vendor/tenacity/retry.py
deleted file mode 100644
index 38988739d6406aeb5e3be903c0ea6fb82752f328..0000000000000000000000000000000000000000
--- a/spaces/Ataturk-Chatbot/HuggingFaceChat/venv/lib/python3.11/site-packages/pip/_vendor/tenacity/retry.py
+++ /dev/null
@@ -1,272 +0,0 @@
-# Copyright 2016–2021 Julien Danjou
-# Copyright 2016 Joshua Harlow
-# Copyright 2013-2014 Ray Holder
-#
-# Licensed under the Apache License, Version 2.0 (the "License");
-# you may not use this file except in compliance with the License.
-# You may obtain a copy of the License at
-#
-# http://www.apache.org/licenses/LICENSE-2.0
-#
-# Unless required by applicable law or agreed to in writing, software
-# distributed under the License is distributed on an "AS IS" BASIS,
-# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
-# See the License for the specific language governing permissions and
-# limitations under the License.
-
-import abc
-import re
-import typing
-
-if typing.TYPE_CHECKING:
- from pip._vendor.tenacity import RetryCallState
-
-
-class retry_base(abc.ABC):
- """Abstract base class for retry strategies."""
-
- @abc.abstractmethod
- def __call__(self, retry_state: "RetryCallState") -> bool:
- pass
-
- def __and__(self, other: "retry_base") -> "retry_all":
- return retry_all(self, other)
-
- def __or__(self, other: "retry_base") -> "retry_any":
- return retry_any(self, other)
-
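-# Hedged note: the operators above compose strategies, e.g.
-# retry_if_exception_type(OSError) | retry_if_result(pred) builds retry_any(...)
-# and `&` builds retry_all(...).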
-
-RetryBaseT = typing.Union[retry_base, typing.Callable[["RetryCallState"], bool]]
-
-
-class _retry_never(retry_base):
- """Retry strategy that never rejects any result."""
-
- def __call__(self, retry_state: "RetryCallState") -> bool:
- return False
-
-
-retry_never = _retry_never()
-
-
-class _retry_always(retry_base):
- """Retry strategy that always rejects any result."""
-
- def __call__(self, retry_state: "RetryCallState") -> bool:
- return True
-
-
-retry_always = _retry_always()
-
-
-class retry_if_exception(retry_base):
- """Retry strategy that retries if an exception verifies a predicate."""
-
- def __init__(self, predicate: typing.Callable[[BaseException], bool]) -> None:
- self.predicate = predicate
-
- def __call__(self, retry_state: "RetryCallState") -> bool:
- if retry_state.outcome is None:
- raise RuntimeError("__call__() called before outcome was set")
-
- if retry_state.outcome.failed:
- exception = retry_state.outcome.exception()
- if exception is None:
- raise RuntimeError("outcome failed but the exception is None")
- return self.predicate(exception)
- else:
- return False
-
-
-class retry_if_exception_type(retry_if_exception):
- """Retries if an exception has been raised of one or more types."""
-
- def __init__(
- self,
- exception_types: typing.Union[
- typing.Type[BaseException],
- typing.Tuple[typing.Type[BaseException], ...],
- ] = Exception,
- ) -> None:
- self.exception_types = exception_types
- super().__init__(lambda e: isinstance(e, exception_types))
-
-
-class retry_if_not_exception_type(retry_if_exception):
- """Retries except an exception has been raised of one or more types."""
-
- def __init__(
- self,
- exception_types: typing.Union[
- typing.Type[BaseException],
- typing.Tuple[typing.Type[BaseException], ...],
- ] = Exception,
- ) -> None:
- self.exception_types = exception_types
- super().__init__(lambda e: not isinstance(e, exception_types))
-
-
-class retry_unless_exception_type(retry_if_exception):
- """Retries until an exception is raised of one or more types."""
-
- def __init__(
- self,
- exception_types: typing.Union[
- typing.Type[BaseException],
- typing.Tuple[typing.Type[BaseException], ...],
- ] = Exception,
- ) -> None:
- self.exception_types = exception_types
- super().__init__(lambda e: not isinstance(e, exception_types))
-
- def __call__(self, retry_state: "RetryCallState") -> bool:
- if retry_state.outcome is None:
- raise RuntimeError("__call__() called before outcome was set")
-
- # always retry if no exception was raised
- if not retry_state.outcome.failed:
- return True
-
- exception = retry_state.outcome.exception()
- if exception is None:
- raise RuntimeError("outcome failed but the exception is None")
- return self.predicate(exception)
-
-
-class retry_if_exception_cause_type(retry_base):
- """Retries if any of the causes of the raised exception is of one or more types.
-
- The check on the type of the cause of the exception is done recursively (until finding
- an exception in the chain that has no `__cause__`)
- """
-
- def __init__(
- self,
- exception_types: typing.Union[
- typing.Type[BaseException],
- typing.Tuple[typing.Type[BaseException], ...],
- ] = Exception,
- ) -> None:
- self.exception_cause_types = exception_types
-
- def __call__(self, retry_state: "RetryCallState") -> bool:
- if retry_state.outcome is None:
- raise RuntimeError("__call__ called before outcome was set")
-
- if retry_state.outcome.failed:
- exc = retry_state.outcome.exception()
- while exc is not None:
- if isinstance(exc.__cause__, self.exception_cause_types):
- return True
- exc = exc.__cause__
-
- return False
-
-
-class retry_if_result(retry_base):
- """Retries if the result verifies a predicate."""
-
- def __init__(self, predicate: typing.Callable[[typing.Any], bool]) -> None:
- self.predicate = predicate
-
- def __call__(self, retry_state: "RetryCallState") -> bool:
- if retry_state.outcome is None:
- raise RuntimeError("__call__() called before outcome was set")
-
- if not retry_state.outcome.failed:
- return self.predicate(retry_state.outcome.result())
- else:
- return False
-
-
-class retry_if_not_result(retry_base):
- """Retries if the result refutes a predicate."""
-
- def __init__(self, predicate: typing.Callable[[typing.Any], bool]) -> None:
- self.predicate = predicate
-
- def __call__(self, retry_state: "RetryCallState") -> bool:
- if retry_state.outcome is None:
- raise RuntimeError("__call__() called before outcome was set")
-
- if not retry_state.outcome.failed:
- return not self.predicate(retry_state.outcome.result())
- else:
- return False
-
-
-class retry_if_exception_message(retry_if_exception):
- """Retries if an exception message equals or matches."""
-
- def __init__(
- self,
- message: typing.Optional[str] = None,
- match: typing.Optional[str] = None,
- ) -> None:
- if message and match:
- raise TypeError(f"{self.__class__.__name__}() takes either 'message' or 'match', not both")
-
- # set predicate
- if message:
-
- def message_fnc(exception: BaseException) -> bool:
- return message == str(exception)
-
- predicate = message_fnc
- elif match:
- prog = re.compile(match)
-
- def match_fnc(exception: BaseException) -> bool:
- return bool(prog.match(str(exception)))
-
- predicate = match_fnc
- else:
- raise TypeError(f"{self.__class__.__name__}() missing 1 required argument 'message' or 'match'")
-
- super().__init__(predicate)
-
-
-class retry_if_not_exception_message(retry_if_exception_message):
- """Retries until an exception message equals or matches."""
-
- def __init__(
- self,
- message: typing.Optional[str] = None,
- match: typing.Optional[str] = None,
- ) -> None:
- super().__init__(message, match)
- # invert predicate
- if_predicate = self.predicate
- self.predicate = lambda *args_, **kwargs_: not if_predicate(*args_, **kwargs_)
-
- def __call__(self, retry_state: "RetryCallState") -> bool:
- if retry_state.outcome is None:
- raise RuntimeError("__call__() called before outcome was set")
-
- if not retry_state.outcome.failed:
- return True
-
- exception = retry_state.outcome.exception()
- if exception is None:
- raise RuntimeError("outcome failed but the exception is None")
- return self.predicate(exception)
-
-
-class retry_any(retry_base):
- """Retries if any of the retries condition is valid."""
-
- def __init__(self, *retries: retry_base) -> None:
- self.retries = retries
-
- def __call__(self, retry_state: "RetryCallState") -> bool:
- return any(r(retry_state) for r in self.retries)
-
-
-class retry_all(retry_base):
- """Retries if all the retries condition are valid."""
-
- def __init__(self, *retries: retry_base) -> None:
- self.retries = retries
-
- def __call__(self, retry_state: "RetryCallState") -> bool:
- return all(r(retry_state) for r in self.retries)
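-
-
-# Hedged usage sketch (assumes tenacity's public decorator API; `flaky` is a
-# hypothetical callable):
-#
-# from pip._vendor.tenacity import retry, stop_after_attempt
-#
-# @retry(retry=retry_if_exception_type(OSError) | retry_if_result(lambda r: r is None),
-#        stop=stop_after_attempt(3))
-# def flaky():
-#     ...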
diff --git a/spaces/Ataturk-Chatbot/HuggingFaceChat/venv/lib/python3.11/site-packages/setuptools/extern/__init__.py b/spaces/Ataturk-Chatbot/HuggingFaceChat/venv/lib/python3.11/site-packages/setuptools/extern/__init__.py
deleted file mode 100644
index d3a6dc99fe175507a94e3440da1f637f318add2f..0000000000000000000000000000000000000000
--- a/spaces/Ataturk-Chatbot/HuggingFaceChat/venv/lib/python3.11/site-packages/setuptools/extern/__init__.py
+++ /dev/null
@@ -1,76 +0,0 @@
-import importlib.util
-import sys
-
-
-class VendorImporter:
- """
- A PEP 302 meta path importer for finding optionally-vendored
- or otherwise naturally-installed packages from root_name.
- """
-
- def __init__(self, root_name, vendored_names=(), vendor_pkg=None):
- self.root_name = root_name
- self.vendored_names = set(vendored_names)
- self.vendor_pkg = vendor_pkg or root_name.replace('extern', '_vendor')
-
- @property
- def search_path(self):
- """
- Search first the vendor package then as a natural package.
- """
- yield self.vendor_pkg + '.'
- yield ''
-
- def _module_matches_namespace(self, fullname):
- """Figure out if the target module is vendored."""
- root, base, target = fullname.partition(self.root_name + '.')
- return not root and any(map(target.startswith, self.vendored_names))
-
- def load_module(self, fullname):
- """
- Iterate over the search path to locate and load fullname.
- """
- root, base, target = fullname.partition(self.root_name + '.')
- for prefix in self.search_path:
- try:
- extant = prefix + target
- __import__(extant)
- mod = sys.modules[extant]
- sys.modules[fullname] = mod
- return mod
- except ImportError:
- pass
- else:
- raise ImportError(
- "The '{target}' package is required; "
- "normally this is bundled with this package so if you get "
- "this warning, consult the packager of your "
- "distribution.".format(**locals())
- )
-
- def create_module(self, spec):
- return self.load_module(spec.name)
-
- def exec_module(self, module):
- pass
-
- def find_spec(self, fullname, path=None, target=None):
- """Return a module spec for vendored names."""
- return (
- importlib.util.spec_from_loader(fullname, self)
- if self._module_matches_namespace(fullname) else None
- )
-
- def install(self):
- """
- Install this importer into sys.meta_path if not already present.
- """
- if self not in sys.meta_path:
- sys.meta_path.append(self)
-
-
-names = (
- 'packaging', 'pyparsing', 'ordered_set', 'more_itertools', 'importlib_metadata',
- 'zipp', 'importlib_resources', 'jaraco', 'typing_extensions', 'tomli',
-)
-VendorImporter(__name__, names, 'setuptools._vendor').install()
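-# With the importer installed, ``import setuptools.extern.packaging`` first
-# resolves against the vendored package (``setuptools._vendor.packaging``)
-# and falls back to the top-level ``packaging`` distribution if no vendored
-# copy is present.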
diff --git a/spaces/Awiny/Image2Paragraph/models/grit_src/third_party/CenterNet2/tests/data/test_coco.py b/spaces/Awiny/Image2Paragraph/models/grit_src/third_party/CenterNet2/tests/data/test_coco.py
deleted file mode 100644
index caabead5527639056daeef71027a69c47ee2ebf7..0000000000000000000000000000000000000000
--- a/spaces/Awiny/Image2Paragraph/models/grit_src/third_party/CenterNet2/tests/data/test_coco.py
+++ /dev/null
@@ -1,139 +0,0 @@
-# Copyright (c) Facebook, Inc. and its affiliates.
-import json
-import numpy as np
-import os
-import tempfile
-import unittest
-import pycocotools.mask as mask_util
-
-from detectron2.data import DatasetCatalog, MetadataCatalog
-from detectron2.data.datasets.coco import convert_to_coco_dict, load_coco_json
-from detectron2.structures import BoxMode
-
-
-def make_mask():
- """
- Makes a donut-shaped binary mask.
- """
- H = 100
- W = 100
- mask = np.zeros([H, W], dtype=np.uint8)
- for x in range(W):
- for y in range(H):
- d = np.linalg.norm(np.array([W, H]) / 2 - np.array([x, y]))
- if d > 10 and d < 20:
- mask[y, x] = 1
- return mask
-
-
-def uncompressed_rle(mask):
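- # Build an uncompressed COCO RLE dict: column-major (Fortran-order) run
- # lengths alternating between 0s and 1s, starting with the count of 0s,
- # which is the format pycocotools.mask.frPyObjects expects.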
- l = mask.flatten(order="F").tolist()
- counts = []
- p = False
- cnt = 0
- for i in l:
- if i == p:
- cnt += 1
- else:
- counts.append(cnt)
- p = i
- cnt = 1
- counts.append(cnt)
- return {"counts": counts, "size": [mask.shape[0], mask.shape[1]]}
-
-
-def make_dataset_dicts(mask, compressed: bool = True):
- """
- Returns a list of dicts that represents a single COCO data point for
- object detection. The single instance given by `mask` is represented by
- RLE, either compressed or uncompressed.
- """
- record = {}
- record["file_name"] = "test"
- record["image_id"] = 0
- record["height"] = mask.shape[0]
- record["width"] = mask.shape[1]
-
- y, x = np.nonzero(mask)
- if compressed:
- segmentation = mask_util.encode(np.asarray(mask, order="F"))
- else:
- segmentation = uncompressed_rle(mask)
- min_x = np.min(x)
- max_x = np.max(x)
- min_y = np.min(y)
- max_y = np.max(y)
- obj = {
- "bbox": [min_x, min_y, max_x, max_y],
- "bbox_mode": BoxMode.XYXY_ABS,
- "category_id": 0,
- "iscrowd": 0,
- "segmentation": segmentation,
- }
- record["annotations"] = [obj]
- return [record]
-
-
-class TestRLEToJson(unittest.TestCase):
- def test(self):
- # Make a dummy dataset.
- mask = make_mask()
- DatasetCatalog.register("test_dataset", lambda: make_dataset_dicts(mask))
- MetadataCatalog.get("test_dataset").set(thing_classes=["test_label"])
-
- # Dump to json.
- json_dict = convert_to_coco_dict("test_dataset")
- with tempfile.TemporaryDirectory() as tmpdir:
- json_file_name = os.path.join(tmpdir, "test.json")
- with open(json_file_name, "w") as f:
- json.dump(json_dict, f)
- # Load from json.
- dicts = load_coco_json(json_file_name, "")
-
- # Check the loaded mask matches the original.
- anno = dicts[0]["annotations"][0]
- loaded_mask = mask_util.decode(anno["segmentation"])
- self.assertTrue(np.array_equal(loaded_mask, mask))
- DatasetCatalog.pop("test_dataset")
- MetadataCatalog.pop("test_dataset")
-
- def test_uncompressed_RLE(self):
- mask = make_mask()
- rle = mask_util.encode(np.asarray(mask, order="F"))
- uncompressed = uncompressed_rle(mask)
- compressed = mask_util.frPyObjects(uncompressed, *rle["size"])
- self.assertEqual(rle, compressed)
-
-
-class TestConvertCOCO(unittest.TestCase):
- @staticmethod
- def generate_data():
- record = {
- "file_name": "test",
- "image_id": 0,
- "height": 100,
- "width": 100,
- "annotations": [
- {
- "bbox": [10, 10, 10, 10, 5],
- "bbox_mode": BoxMode.XYWHA_ABS,
- "category_id": 0,
- "iscrowd": 0,
- },
- {
- "bbox": [15, 15, 3, 3],
- "bbox_mode": BoxMode.XYXY_ABS,
- "category_id": 0,
- "iscrowd": 0,
- },
- ],
- }
-
- return [record]
-
- def test_convert_to_coco(self):
- DatasetCatalog.register("test_dataset", lambda: TestConvertCOCO.generate_data())
- MetadataCatalog.get("test_dataset").set(thing_classes=["test_label"])
- convert_to_coco_dict("test_dataset")
- DatasetCatalog.pop("test_dataset")
- MetadataCatalog.pop("test_dataset")
diff --git a/spaces/Bagus/speaker-verification-demo/README.md b/spaces/Bagus/speaker-verification-demo/README.md
deleted file mode 100644
index d0fbce1083bcd2fc1f32b9b318039d5bf87d51ea..0000000000000000000000000000000000000000
--- a/spaces/Bagus/speaker-verification-demo/README.md
+++ /dev/null
@@ -1,39 +0,0 @@
----
-title: Speaker Verification Demo
-emoji: 😻
-colorFrom: yellow
-colorTo: red
-sdk: gradio
-app_file: app.py
-pinned: false
----
-
-# Configuration
-
-title: string
-Display title for the Space
-
-emoji: string
-Space emoji (emoji-only character allowed)
-
-colorFrom: string
-Color for Thumbnail gradient (red, yellow, green, blue, indigo, purple, pink, gray)
-
-colorTo: string
-Color for Thumbnail gradient (red, yellow, green, blue, indigo, purple, pink, gray)
-
-sdk: string
-Can be either gradio or streamlit
-
-sdk_version: string
-Only applicable for streamlit SDK.
-See [doc](https://hf.co/docs/hub/spaces) for more info on supported versions.
-
-
-app_file: string
-Path to your main application file (which contains either gradio or streamlit Python code).
-Path is relative to the root of the repository.
-
-
-pinned: boolean
-Whether the Space stays on top of your list.
-
diff --git a/spaces/BalaBhaskarudu/mygenAIChatbot/README.md b/spaces/BalaBhaskarudu/mygenAIChatbot/README.md
deleted file mode 100644
index 12732841ada6d79aa78a455d9fe97d362686e92e..0000000000000000000000000000000000000000
--- a/spaces/BalaBhaskarudu/mygenAIChatbot/README.md
+++ /dev/null
@@ -1,12 +0,0 @@
----
-title: MygenAIChatbot
-emoji: 🔥
-colorFrom: pink
-colorTo: red
-sdk: gradio
-sdk_version: 3.39.0
-app_file: app.py
-pinned: false
----
-
-Check out the configuration reference at https://huggingface.co/docs/hub/spaces-config-reference
diff --git a/spaces/BartPoint/VoiceChange/infer_pack/onnx_inference.py b/spaces/BartPoint/VoiceChange/infer_pack/onnx_inference.py
deleted file mode 100644
index 322572820dfc75d789e40ce5bbd9415066a03979..0000000000000000000000000000000000000000
--- a/spaces/BartPoint/VoiceChange/infer_pack/onnx_inference.py
+++ /dev/null
@@ -1,139 +0,0 @@
-import onnxruntime
-import librosa
-import numpy as np
-import soundfile
-
-
-class ContentVec:
- def __init__(self, vec_path="pretrained/vec-768-layer-12.onnx", device=None):
- print("load model(s) from {}".format(vec_path))
- if device == "cpu" or device is None:
- providers = ["CPUExecutionProvider"]
- elif device == "cuda":
- providers = ["CUDAExecutionProvider", "CPUExecutionProvider"]
- else:
- raise RuntimeError("Unsportted Device")
- self.model = onnxruntime.InferenceSession(vec_path, providers=providers)
-
- def __call__(self, wav):
- return self.forward(wav)
-
- def forward(self, wav):
- feats = wav
- if feats.ndim == 2:  # stereo input: average the two channels
- feats = feats.mean(-1)
- assert feats.ndim == 1, feats.ndim
- feats = np.expand_dims(np.expand_dims(feats, 0), 0)
- onnx_input = {self.model.get_inputs()[0].name: feats}
- logits = self.model.run(None, onnx_input)[0]
- return logits.transpose(0, 2, 1)
-
-
-def get_f0_predictor(f0_predictor, hop_length, sampling_rate, **kwargs):
- if f0_predictor == "pm":
- from infer_pack.modules.F0Predictor.PMF0Predictor import PMF0Predictor
-
- f0_predictor_object = PMF0Predictor(
- hop_length=hop_length, sampling_rate=sampling_rate
- )
- elif f0_predictor == "harvest":
- from infer_pack.modules.F0Predictor.HarvestF0Predictor import HarvestF0Predictor
-
- f0_predictor_object = HarvestF0Predictor(
- hop_length=hop_length, sampling_rate=sampling_rate
- )
- elif f0_predictor == "dio":
- from infer_pack.modules.F0Predictor.DioF0Predictor import DioF0Predictor
-
- f0_predictor_object = DioF0Predictor(
- hop_length=hop_length, sampling_rate=sampling_rate
- )
- else:
- raise Exception("Unknown f0 predictor")
- return f0_predictor_object
-
-
-class OnnxRVC:
- def __init__(
- self,
- model_path,
- sr=40000,
- hop_size=512,
- vec_path="vec-768-layer-12",
- device="cpu",
- ):
- vec_path = f"pretrained/{vec_path}.onnx"
- self.vec_model = ContentVec(vec_path, device)
- if device == "cpu" or device is None:
- providers = ["CPUExecutionProvider"]
- elif device == "cuda":
- providers = ["CUDAExecutionProvider", "CPUExecutionProvider"]
- else:
- raise RuntimeError("Unsportted Device")
- self.model = onnxruntime.InferenceSession(model_path, providers=providers)
- self.sampling_rate = sr
- self.hop_size = hop_size
-
- def forward(self, hubert, hubert_length, pitch, pitchf, ds, rnd):
- onnx_input = {
- self.model.get_inputs()[0].name: hubert,
- self.model.get_inputs()[1].name: hubert_length,
- self.model.get_inputs()[2].name: pitch,
- self.model.get_inputs()[3].name: pitchf,
- self.model.get_inputs()[4].name: ds,
- self.model.get_inputs()[5].name: rnd,
- }
- return (self.model.run(None, onnx_input)[0] * 32767).astype(np.int16)
-
- def inference(
- self,
- raw_path,
- sid,
- f0_method="dio",
- f0_up_key=0,
- pad_time=0.5,
- cr_threshold=0.02,
- ):
- f0_min = 50
- f0_max = 1100
- f0_mel_min = 1127 * np.log(1 + f0_min / 700)
- f0_mel_max = 1127 * np.log(1 + f0_max / 700)
- f0_predictor = get_f0_predictor(
- f0_method,
- hop_length=self.hop_size,
- sampling_rate=self.sampling_rate,
- threshold=cr_threshold,
- )
- wav, sr = librosa.load(raw_path, sr=self.sampling_rate)
- org_length = len(wav)
- if org_length / sr > 50.0:
- raise RuntimeError("Reached Max Length")
-
- wav16k = librosa.resample(wav, orig_sr=self.sampling_rate, target_sr=16000)
-
- hubert = self.vec_model(wav16k)
- hubert = np.repeat(hubert, 2, axis=2).transpose(0, 2, 1).astype(np.float32)
- hubert_length = hubert.shape[1]
-
- pitchf = f0_predictor.compute_f0(wav, hubert_length)
- pitchf = pitchf * 2 ** (f0_up_key / 12)
- pitch = pitchf.copy()
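- # Quantize f0 onto 255 coarse mel-scale bins; bin 1 also marks unvoiced frames.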
- f0_mel = 1127 * np.log(1 + pitch / 700)
- f0_mel[f0_mel > 0] = (f0_mel[f0_mel > 0] - f0_mel_min) * 254 / (
- f0_mel_max - f0_mel_min
- ) + 1
- f0_mel[f0_mel <= 1] = 1
- f0_mel[f0_mel > 255] = 255
- pitch = np.rint(f0_mel).astype(np.int64)
-
- pitchf = pitchf.reshape(1, len(pitchf)).astype(np.float32)
- pitch = pitch.reshape(1, len(pitch))
- ds = np.array([sid]).astype(np.int64)
-
- rnd = np.random.randn(1, 192, hubert_length).astype(np.float32)
- hubert_length = np.array([hubert_length]).astype(np.int64)
-
- out_wav = self.forward(hubert, hubert_length, pitch, pitchf, ds, rnd).squeeze()
- out_wav = np.pad(out_wav, (0, 2 * self.hop_size), "constant")
- return out_wav[0:org_length]
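-
-
-# Minimal usage sketch (paths, speaker id, and output rate are placeholders):
-#
-#   rvc = OnnxRVC("model.onnx", sr=40000, hop_size=512, device="cpu")
-#   out = rvc.inference("input.wav", sid=0, f0_method="dio", f0_up_key=0)
-#   soundfile.write("output.wav", out, 40000)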
diff --git a/spaces/Benson/text-generation/Examples/Bubble Sort C.md b/spaces/Benson/text-generation/Examples/Bubble Sort C.md
deleted file mode 100644
index 274c142c3242e431f851421bcc2815cc27684b03..0000000000000000000000000000000000000000
--- a/spaces/Benson/text-generation/Examples/Bubble Sort C.md
+++ /dev/null
@@ -1,80 +0,0 @@
-
-Bubble Sort in C++: A Beginner's Guide
-If you are learning about sorting algorithms, you may have come across bubble sort. Bubble sort is one of the simplest and most intuitive sorting algorithms: it works by repeatedly swapping adjacent elements that are in the wrong order. In this article, you will learn what bubble sort is, how it works, what its time complexity is, what its advantages and disadvantages are, and how to implement it in C++.
-bubble sort c++
-Download Zip ->>->>->> https://bltlly.com/2v6JjI
- What is bubble sort?
-Bubble sort is a sorting algorithm that compares each pair of adjacent elements in an array and swaps them if they are in the wrong order. The algorithm repeats this process until the array is sorted. The name "bubble sort" comes from the fact that the smallest or largest elements "bubble up" to their end of the array after each iteration.
- How does bubble sort work?
-Let's say we want to sort an array of integers in ascending order using bubble sort. These are the steps to follow:
-
-- Start from the first element of the array and compare it with the second element. If the first element is greater than the second, swap them.
-- Move to the next pair of elements and compare them. If they are in the wrong order, swap them.
-- Continue this process until you reach the end of the array. At this point, the largest element will be in the last position of the array.
-- Repeat steps 1 to 3 for the remaining unsorted elements, excluding the last element, which is already sorted.
-- Stop when no more swaps occur or when the array is completely sorted.
-
- ¿Cuál es la complejidad temporal de la clasificación de burbujas?
-The time complexity of an algorithm measures how fast it runs as a function of the input size. For bubble sort, we can analyze how many comparisons and swaps it performs in the worst-case, average-case, and best-case scenarios.
-
-- The worst-case scenario for bubble sort occurs when the array is sorted in reverse order. Every comparison then results in a swap, and the passes perform (n-1) + (n-2) + ... + 1 = n(n-1)/2 comparisons in total. Therefore, the worst-case time complexity of bubble sort is O(n²).
-- The average-case scenario for bubble sort occurs when the array is randomly ordered. In this case, we can assume that roughly half of the comparisons result in swaps and half do not, but the number of comparisons is still quadratic. Therefore, the average-case time complexity of bubble sort is also O(n²).
-- The best-case scenario for bubble sort occurs when the array is already sorted. In this case, a single pass of n-1 comparisons and no swaps is enough to detect this. Therefore, the best-case time complexity of bubble sort is O(n).
-
- ¿Cuáles son las ventajas y desventajas de la clasificación de burbujas?
-Bubble sort has some advantages and disadvantages that make it suitable or unsuitable for certain situations. Here are some of them:
-
-- The advantages of bubble sort are:
- - It is easy to understand and implement.
- - It does not require extra space to store temporary values.
- - It can detect whether the array is already sorted in a single pass.
- - It is stable: because only strictly greater neighbors are swapped, equal elements keep their relative order.
-
-- The disadvantages of bubble sort are:
- - It is very slow and inefficient for large arrays.
- - It performs many unnecessary comparisons and swaps even when the array is nearly sorted.
- - Its O(n²) behavior makes it impractical compared to O(n log n) algorithms such as merge sort or quick sort.
-
-
-
- How to implement bubble sort in C++?
-
-Now that you know what bubble sort is and how it works, let's see how to implement it in C++. We will show you two versions of the algorithm: a basic one and an optimized one.
- Basic implementation
-
- Code example
-
-```cpp
-#include <iostream>
-using namespace std;
-
-// Print an array
-void printArray(int arr[], int size) {
-    for (int i = 0; i < size; i++)
-        cout << arr[i] << " ";
-    cout << endl;
-}
-
-// Basic bubble sort with an early-exit flag
-void bubbleSort(int arr[], int size) {
-    bool swapped;                                 // Tracks swaps per pass
-    for (int i = 0; i < size - 1; i++) {          // Outer loop: n-1 passes
-        swapped = false;                          // Assume no swaps at first
-        for (int j = 0; j < size - i - 1; j++) {  // Compare adjacent elements
-            if (arr[j] > arr[j + 1]) {            // Wrong order?
-                swap(arr[j], arr[j + 1]);         // Swap them
-                swapped = true;
-            }
-        }
-        if (!swapped)  // No swaps in this pass: the array is already sorted
-            break;
-    }
-}
-
-// Driver code
-int main() {
-    int arr[] = {64, 34, 25, 12, 22, 11, 90};  // Sample array
-    int size = sizeof(arr) / sizeof(arr[0]);   // Number of elements
-    cout << "Unsorted array: " << endl;
-    printArray(arr, size);
-    bubbleSort(arr, size);
-    cout << "Sorted array: " << endl;
-    printArray(arr, size);
-    return 0;
-}
-```
- Output explanation
-The output of the code example is:
-
-Unsorted array: 64 34 25 12 22 11 90
-Sorted array: 11 12 22 25 34 64 90
-The code example shows how the bubble sort algorithm sorts the sample array in ascending order. It prints the unsorted and sorted arrays for comparison. You can see how the smaller elements move toward the front and the larger elements move toward the back after each iteration.
- Optimized implementation
-The optimized version adds two improvements: it skips the sort entirely when the array is already sorted, and it shrinks the inner loop's boundary to the last position where a swap occurred, since everything beyond that point is already in place.
- Code example
-
-```cpp
-#include <iostream>
-using namespace std;
-
-// Print an array
-void printArray(int arr[], int size) {
-    for (int i = 0; i < size; i++)
-        cout << arr[i] << " ";
-    cout << endl;
-}
-
-// Check whether an array is already sorted
-bool isSorted(int arr[], int size) {
-    for (int i = 0; i < size - 1; i++)
-        if (arr[i] > arr[i + 1]) // Any element greater than its successor?
-            return false;
-    return true;
-}
-
-// Optimized bubble sort: shrink the boundary to the last swap position
-void bubbleSort(int arr[], int size) {
-    int lastSwapIndex; // Last index where a swap occurred
-    for (int i = size - 1; i > 0; i--) {  // Boundary of the unsorted part
-        lastSwapIndex = -1;               // Assume no swaps at first
-        for (int j = 0; j < i; j++) {     // Compare adjacent elements
-            if (arr[j] > arr[j + 1]) {
-                swap(arr[j], arr[j + 1]);
-                lastSwapIndex = j;        // Remember the last swap position
-            }
-        }
-        if (lastSwapIndex == -1) // No swaps: the array is sorted
-            break;
-        i = lastSwapIndex + 1;   // Next pass only needs to reach the last swap
-    }
-}
-
-// Driver code
-int main() {
-    int arr[] = {64, 34, 25, 12, 22, 11, 90};
-    int size = sizeof(arr) / sizeof(arr[0]);
-    cout << "Unsorted array: " << endl;
-    printArray(arr, size);
-    if (!isSorted(arr, size)) // Skip the sort if already sorted
-        bubbleSort(arr, size);
-    cout << "Sorted array: " << endl;
-    printArray(arr, size);
-    return 0;
-}
-```
- Output explanation
-The output of the code example is:
-
-Unsorted array: 64 34 25 12 22 11 90
-Sorted array: 11 12 22 25 34 64 90
-The code example shows how the optimized bubble sort algorithm sorts the sample array in ascending order. It prints the unsorted and sorted arrays for comparison. You can see how the algorithm cuts down the number of comparisons and swaps by using the last swap index and the up-front sorted check.
- Conclusion
-Bubble sort is a simple, easy-to-understand sorting algorithm that works by repeatedly swapping adjacent elements that are in the wrong order. However, it is also very slow and inefficient for large arrays. It has a time complexity of O(n²) in the worst and average cases, and O(n) in the best case. It can be optimized with a few tricks that reduce the number of comparisons and swaps. In this article, you learned what bubble sort is, how it works, what its time complexity is, what its advantages and disadvantages are, and how to implement it in C++ using a basic and an optimized version.
- Frequently asked questions
-
-- What is a sorting algorithm?
-A sorting algorithm is a method for arranging a collection of items in a specific order, such as ascending or descending. Sorting algorithms are useful for organizing data and making it easier to search, analyze, or visualize.
-- What are some other sorting algorithms besides bubble sort?
-Some other common sorting algorithms are selection sort, insertion sort, merge sort, quick sort, heap sort, radix sort, etc. Each algorithm has its own advantages and disadvantages depending on the type and size of the input data.
-- How can I test the performance of bubble sort?
-You can test the performance of bubble sort by measuring how long it takes to sort arrays of different sizes and orderings. You can use a timer function or a library to record the start and end times of the sorting process. You can also compare the results with other sorting algorithms to see which is faster or slower.
-- How can I modify bubble sort to sort in descending order?
-You can modify bubble sort to sort in descending order by flipping the comparison condition in the inner loop: instead of swapping when the current element is greater than the next one (arr[j] > arr[j + 1]), swap when it is smaller (arr[j] < arr[j + 1]). This reverses the order of the elements after each iteration (see the snippet after this list).
-- How can I make bubble sort stable?
-Bubble sort is already stable as long as the inner loop swaps only when the current element is strictly greater than (>) the next one: equal elements are then never swapped, so they keep their original relative order. Using greater-than-or-equal (>=) instead would swap equal elements and break stability.
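-As a quick illustration, here is the only change needed in the inner loop of the basic implementation above to sort in descending order:
-
-```cpp
-// Descending order: swap when the current element is smaller than the next
-if (arr[j] < arr[j + 1])
-    swap(arr[j], arr[j + 1]);
-```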
-
\ No newline at end of file
diff --git a/spaces/Benson/text-generation/Examples/Descargar Camioneros De Europa 3 Apk Obb.md b/spaces/Benson/text-generation/Examples/Descargar Camioneros De Europa 3 Apk Obb.md
deleted file mode 100644
index 13ed6c3a28ba9e6279bfd68667b4006417c101f4..0000000000000000000000000000000000000000
--- a/spaces/Benson/text-generation/Examples/Descargar Camioneros De Europa 3 Apk Obb.md
+++ /dev/null
@@ -1,45 +0,0 @@
-
-Download Truckers of Europe 3 APK OBB: A Guide for Android Users
-If you are a fan of truck simulator games, you may have heard of Truckers of Europe 3, one of the best trucking games for Android. This game lets you experience the thrill of driving a realistic truck across different cities and routes in Europe. You can customize your truck, choose from 25 trailers, haul various kinds of cargo, and enjoy realistic weather and traffic conditions. In this article, we will show you how to download and install Truckers of Europe 3 APK OBB on your Android device, along with some tips and tricks for playing the game.
- Features of Truckers of Europe 3
-Truckers of Europe 3 is a truck-driving game featuring many European trucks with plenty of chassis configurations, customizations, and cosmetics. You can become the king of the road by driving your truck safely and efficiently. Here are some of the features that make this game stand out:
-download truckers of europe 3 apk obb
-Download File • https://bltlly.com/2v6KMU
-
-- Realistic truck physics: The game has a realistic truck physics system that simulates the weight, speed, braking, steering, suspension, and damage of your truck. You can feel every bump, turn, and collision as you drive.
-- Customization options: You can customize your truck by choosing from different colors, accessories, decals, lights, horns, exhaust pipes, and more. You can also upgrade your engine, transmission, tires, brakes, and fuel tank to improve performance.
-- 25 trailers and many cargo options: You can choose from 25 different trailers with different weights, sizes, shapes, and loads. You can haul anything from logs, cars, containers, liquids, and animals to hazardous materials. You have to be careful not to damage or lose your cargo along the way.
-
-- Different controls and transmission modes: You can choose from different control options such as sliders, steering wheel, buttons, or tilt. You can also switch between manual and automatic transmission modes depending on your preference.
-- Live traffic and realistic engine sounds: The game has a live traffic system that includes cars, buses, trucks, motorcycles, pedestrians, traffic lights, signs, and police. You have to follow the traffic rules and avoid accidents. You can also hear the realistic engine sounds of your truck and other vehicles.
-
- How to download and install Truckers of Europe 3 APK OBB on Android
-To play Truckers of Europe 3 on your Android device, you need to download two files: the APK file and the OBB file. The APK file is the application file that installs the game on your device. The OBB file is the data file that contains the game's graphics, sounds, maps, and other resources. Here are the steps to download and install Truckers of Europe 3 APK OBB on your Android device:
-
-- Allow unknown sources in your device settings: To install the APK file, you need to enable installing apps from unknown sources in your device settings. To do this, go to Settings > Security > Unknown Sources and turn it on. This will let you install apps that are not from the Google Play Store.
-- Download the APK and OBB files from a trusted source: You can download the APK and OBB files of Truckers of Europe 3 from a trusted source such as [APKPure] or [APKCombo]. Make sure you download the latest version of the game and check the file size and name before downloading. The APK file should be around 50 MB and the OBB file around 500 MB.
-- Copy the OBB file to the Android/obb folder on your device: Inside Android/obb, create a folder named after the game's package and move the OBB file into it, so the game can find its data when it starts.
-- Install the APK file and launch the game: After copying the OBB file, install the APK file by tapping on it and following the instructions. Once the installation is complete, launch the game by tapping its icon on the home screen or in the app drawer. You should see a loading screen with a progress bar indicating that the game is verifying the OBB file. Wait a few seconds and enjoy the game!
-
- Tips and tricks for playing Truckers of Europe 3
-Truckers of Europe 3 is a fun and challenging game that requires skill, patience, and strategy. Here are some tips and tricks that can help you become a better truck driver and earn more money in the game:
-
-- Choose the right truck and trailer for your cargo and destination: The game offers a variety of trucks and trailers with different specifications, prices, and maintenance costs. You should choose a truck and trailer that suit your cargo type, weight, size, and destination. For example, if you haul heavy or oversized cargo, choose a powerful truck with a low-loader trailer. If you haul fragile or perishable cargo, choose a truck with a refrigerated trailer.
-- Follow the traffic rules and avoid accidents: The game has a realistic traffic system that includes traffic lights, signs, speed limits, police, and other vehicles. You should follow the traffic rules and drive carefully to avoid accidents, fines, or damage to your truck or cargo. You should also pay attention to your mirrors, indicators, headlights, wipers, and horn to communicate with other drivers.
-
-- Upgrade your truck and buy new accessories: The game lets you upgrade your truck's engine, transmission, tires, brakes, and fuel tank to improve its performance, durability, and fuel efficiency. You can also buy new accessories such as colors, decals, lights, horns, exhaust pipes, and more to customize your truck's appearance. You can earn money by completing jobs or taking loans from banks.
-- Explore different cities and routes in Europe: The game has a large map covering many cities and routes in Europe. You can explore places such as Berlin, Paris, London, Rome, Amsterdam, Prague, Warsaw, Istanbul, Barcelona, and more. You can also discover routes of different lengths, difficulties, sceneries, and tolls. You can use the GPS system to navigate or simply follow the road signs.
-
- Conclusion
-Truckers of Europe 3 is a great game for truck enthusiasts and simulator fans. It offers a realistic, immersive truck-driving experience that will keep you hooked for hours. You can download and install Truckers of Europe 3 APK OBB on your Android device by following the steps in this article. You can also use the tips and tricks we shared to improve your skills and enjoy the game more. If you are looking for a fun and challenging trucking game, you should give Truckers of Europe 3 a try. You won't regret it!
- Frequently asked questions
-Here are some frequently asked questions about Truckers of Europe 3:
-
-- Is Truckers of Europe 3 free to play?: Yes, Truckers of Europe 3 is free to play. However, it contains ads and in-app purchases, which you can disable or buy with real money.
-
-- Is Truckers of Europe 3 compatible with my device?: Truckers of Europe 3 is compatible with most Android devices running Android 4.4 or higher with at least 1 GB of RAM. However, some devices may experience lag or crashes due to the game's high-quality graphics and sound.
-- How can I contact the developers of Truckers of Europe 3?: You can contact the developers of Truckers of Europe 3 by sending an email to [truckersofeurope3@gmail.com] or visiting their [Facebook page]. You can also rate and review the game on the Google Play Store or on the website where you downloaded it.
-- Can I play Truckers of Europe 3 on PC or other platforms?: Truckers of Europe 3 is currently only available for Android devices. However, you can use an Android emulator such as [BlueStacks] or [NoxPlayer] to play it on your PC. There is no official version of Truckers of Europe 3 for iOS, Windows, Mac, or other platforms.
-
\ No newline at end of file
diff --git a/spaces/Big-Web/MMSD/env/Lib/site-packages/jmespath/functions.py b/spaces/Big-Web/MMSD/env/Lib/site-packages/jmespath/functions.py
deleted file mode 100644
index 11ab56aca2ef855e89b2816c0a6fe96b56859202..0000000000000000000000000000000000000000
--- a/spaces/Big-Web/MMSD/env/Lib/site-packages/jmespath/functions.py
+++ /dev/null
@@ -1,362 +0,0 @@
-import math
-import json
-
-from jmespath import exceptions
-from jmespath.compat import string_type as STRING_TYPE
-from jmespath.compat import get_methods
-
-
-# python types -> jmespath types
-TYPES_MAP = {
- 'bool': 'boolean',
- 'list': 'array',
- 'dict': 'object',
- 'NoneType': 'null',
- 'unicode': 'string',
- 'str': 'string',
- 'float': 'number',
- 'int': 'number',
- 'long': 'number',
- 'OrderedDict': 'object',
- '_Projection': 'array',
- '_Expression': 'expref',
-}
-
-
-# jmespath types -> python types
-REVERSE_TYPES_MAP = {
- 'boolean': ('bool',),
- 'array': ('list', '_Projection'),
- 'object': ('dict', 'OrderedDict',),
- 'null': ('NoneType',),
- 'string': ('unicode', 'str'),
- 'number': ('float', 'int', 'long'),
- 'expref': ('_Expression',),
-}
-
-
-def signature(*arguments):
- def _record_signature(func):
- func.signature = arguments
- return func
- return _record_signature
-
-
-class FunctionRegistry(type):
- def __init__(cls, name, bases, attrs):
- cls._populate_function_table()
- super(FunctionRegistry, cls).__init__(name, bases, attrs)
-
- def _populate_function_table(cls):
- function_table = {}
- # Any method with a @signature decorator that also
- # starts with "_func_" is registered as a function.
- # _func_max_by -> max_by function.
- for name, method in get_methods(cls):
- if not name.startswith('_func_'):
- continue
- signature = getattr(method, 'signature', None)
- if signature is not None:
- function_table[name[6:]] = {
- 'function': method,
- 'signature': signature,
- }
- cls.FUNCTION_TABLE = function_table
-
-
-class Functions(metaclass=FunctionRegistry):
-
- FUNCTION_TABLE = {
- }
-
- def call_function(self, function_name, resolved_args):
- try:
- spec = self.FUNCTION_TABLE[function_name]
- except KeyError:
- raise exceptions.UnknownFunctionError(
- "Unknown function: %s()" % function_name)
- function = spec['function']
- signature = spec['signature']
- self._validate_arguments(resolved_args, signature, function_name)
- return function(self, *resolved_args)
-
- def _validate_arguments(self, args, signature, function_name):
- if signature and signature[-1].get('variadic'):
- if len(args) < len(signature):
- raise exceptions.VariadictArityError(
- len(signature), len(args), function_name)
- elif len(args) != len(signature):
- raise exceptions.ArityError(
- len(signature), len(args), function_name)
- return self._type_check(args, signature, function_name)
-
- def _type_check(self, actual, signature, function_name):
- for i in range(len(signature)):
- allowed_types = signature[i]['types']
- if allowed_types:
- self._type_check_single(actual[i], allowed_types,
- function_name)
-
- def _type_check_single(self, current, types, function_name):
- # Type checking involves checking the top level type,
- # and in the case of arrays, potentially checking the types
- # of each element.
- allowed_types, allowed_subtypes = self._get_allowed_pytypes(types)
- # We're not using isinstance() on purpose.
- # The type model for jmespath does not map
- # 1-1 with python types (booleans are considered
- # integers in python for example).
- actual_typename = type(current).__name__
- if actual_typename not in allowed_types:
- raise exceptions.JMESPathTypeError(
- function_name, current,
- self._convert_to_jmespath_type(actual_typename), types)
- # If we're dealing with a list type, we can have
- # additional restrictions on the type of the list
- # elements (for example a function can require a
- # list of numbers or a list of strings).
- # Arrays are the only types that can have subtypes.
- if allowed_subtypes:
- self._subtype_check(current, allowed_subtypes,
- types, function_name)
-
- def _get_allowed_pytypes(self, types):
- allowed_types = []
- allowed_subtypes = []
- for t in types:
- type_ = t.split('-', 1)
- if len(type_) == 2:
- type_, subtype = type_
- allowed_subtypes.append(REVERSE_TYPES_MAP[subtype])
- else:
- type_ = type_[0]
- allowed_types.extend(REVERSE_TYPES_MAP[type_])
- return allowed_types, allowed_subtypes
-
- def _subtype_check(self, current, allowed_subtypes, types, function_name):
- if len(allowed_subtypes) == 1:
- # The easy case, we know up front what type
- # we need to validate.
- allowed_subtypes = allowed_subtypes[0]
- for element in current:
- actual_typename = type(element).__name__
- if actual_typename not in allowed_subtypes:
- raise exceptions.JMESPathTypeError(
- function_name, element, actual_typename, types)
- elif len(allowed_subtypes) > 1 and current:
- # Dynamic type validation. Based on the first
- # type we see, we validate that the remaining types
- # match.
- first = type(current[0]).__name__
- for subtypes in allowed_subtypes:
- if first in subtypes:
- allowed = subtypes
- break
- else:
- raise exceptions.JMESPathTypeError(
- function_name, current[0], first, types)
- for element in current:
- actual_typename = type(element).__name__
- if actual_typename not in allowed:
- raise exceptions.JMESPathTypeError(
- function_name, element, actual_typename, types)
-
- @signature({'types': ['number']})
- def _func_abs(self, arg):
- return abs(arg)
-
- @signature({'types': ['array-number']})
- def _func_avg(self, arg):
- if arg:
- return sum(arg) / float(len(arg))
- else:
- return None
-
- @signature({'types': [], 'variadic': True})
- def _func_not_null(self, *arguments):
- for argument in arguments:
- if argument is not None:
- return argument
-
- @signature({'types': []})
- def _func_to_array(self, arg):
- if isinstance(arg, list):
- return arg
- else:
- return [arg]
-
- @signature({'types': []})
- def _func_to_string(self, arg):
- if isinstance(arg, STRING_TYPE):
- return arg
- else:
- return json.dumps(arg, separators=(',', ':'),
- default=str)
-
- @signature({'types': []})
- def _func_to_number(self, arg):
- if isinstance(arg, (list, dict, bool)):
- return None
- elif arg is None:
- return None
- elif isinstance(arg, (int, float)):
- return arg
- else:
- try:
- return int(arg)
- except ValueError:
- try:
- return float(arg)
- except ValueError:
- return None
-
- @signature({'types': ['array', 'string']}, {'types': []})
- def _func_contains(self, subject, search):
- return search in subject
-
- @signature({'types': ['string', 'array', 'object']})
- def _func_length(self, arg):
- return len(arg)
-
- @signature({'types': ['string']}, {'types': ['string']})
- def _func_ends_with(self, search, suffix):
- return search.endswith(suffix)
-
- @signature({'types': ['string']}, {'types': ['string']})
- def _func_starts_with(self, search, prefix):
- return search.startswith(prefix)
-
- @signature({'types': ['array', 'string']})
- def _func_reverse(self, arg):
- if isinstance(arg, STRING_TYPE):
- return arg[::-1]
- else:
- return list(reversed(arg))
-
- @signature({"types": ['number']})
- def _func_ceil(self, arg):
- return math.ceil(arg)
-
- @signature({"types": ['number']})
- def _func_floor(self, arg):
- return math.floor(arg)
-
- @signature({"types": ['string']}, {"types": ['array-string']})
- def _func_join(self, separator, array):
- return separator.join(array)
-
- @signature({'types': ['expref']}, {'types': ['array']})
- def _func_map(self, expref, arg):
- result = []
- for element in arg:
- result.append(expref.visit(expref.expression, element))
- return result
-
- @signature({"types": ['array-number', 'array-string']})
- def _func_max(self, arg):
- if arg:
- return max(arg)
- else:
- return None
-
- @signature({"types": ["object"], "variadic": True})
- def _func_merge(self, *arguments):
- merged = {}
- for arg in arguments:
- merged.update(arg)
- return merged
-
- @signature({"types": ['array-number', 'array-string']})
- def _func_min(self, arg):
- if arg:
- return min(arg)
- else:
- return None
-
- @signature({"types": ['array-string', 'array-number']})
- def _func_sort(self, arg):
- return list(sorted(arg))
-
- @signature({"types": ['array-number']})
- def _func_sum(self, arg):
- return sum(arg)
-
- @signature({"types": ['object']})
- def _func_keys(self, arg):
- # To be consistent with .values()
- # should we also return the indices of a list?
- return list(arg.keys())
-
- @signature({"types": ['object']})
- def _func_values(self, arg):
- return list(arg.values())
-
- @signature({'types': []})
- def _func_type(self, arg):
- if isinstance(arg, STRING_TYPE):
- return "string"
- elif isinstance(arg, bool):
- return "boolean"
- elif isinstance(arg, list):
- return "array"
- elif isinstance(arg, dict):
- return "object"
- elif isinstance(arg, (float, int)):
- return "number"
- elif arg is None:
- return "null"
-
- @signature({'types': ['array']}, {'types': ['expref']})
- def _func_sort_by(self, array, expref):
- if not array:
- return array
- # sort_by allows for the expref to be either a number of
- # a string, so we have some special logic to handle this.
- # We evaluate the first array element and verify that it's
- # either a string of a number. We then create a key function
- # that validates that type, which requires that remaining array
- # elements resolve to the same type as the first element.
- required_type = self._convert_to_jmespath_type(
- type(expref.visit(expref.expression, array[0])).__name__)
- if required_type not in ['number', 'string']:
- raise exceptions.JMESPathTypeError(
- 'sort_by', array[0], required_type, ['string', 'number'])
- keyfunc = self._create_key_func(expref,
- [required_type],
- 'sort_by')
- return list(sorted(array, key=keyfunc))
-
- @signature({'types': ['array']}, {'types': ['expref']})
- def _func_min_by(self, array, expref):
- keyfunc = self._create_key_func(expref,
- ['number', 'string'],
- 'min_by')
- if array:
- return min(array, key=keyfunc)
- else:
- return None
-
- @signature({'types': ['array']}, {'types': ['expref']})
- def _func_max_by(self, array, expref):
- keyfunc = self._create_key_func(expref,
- ['number', 'string'],
- 'max_by')
- if array:
- return max(array, key=keyfunc)
- else:
- return None
-
- def _create_key_func(self, expref, allowed_types, function_name):
- def keyfunc(x):
- result = expref.visit(expref.expression, x)
- actual_typename = type(result).__name__
- jmespath_type = self._convert_to_jmespath_type(actual_typename)
- # allowed_types is in term of jmespath types, not python types.
- if jmespath_type not in allowed_types:
- raise exceptions.JMESPathTypeError(
- function_name, result, jmespath_type, allowed_types)
- return result
- return keyfunc
-
- def _convert_to_jmespath_type(self, pyobject):
- return TYPES_MAP.get(pyobject, 'unknown')
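-
-
-# Example (sketch): extra functions can be registered by subclassing
-# Functions; the function name and query below are illustrative only.
-#
-#   import jmespath
-#   from jmespath import functions
-#
-#   class CustomFunctions(functions.Functions):
-#       @functions.signature({'types': ['string']})
-#       def _func_shout(self, s):
-#           return s.upper()
-#
-#   jmespath.search('shout(name)', {'name': 'bob'},
-#                   options=jmespath.Options(custom_functions=CustomFunctions()))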
diff --git a/spaces/Big-Web/MMSD/env/Lib/site-packages/pip/_vendor/cachecontrol/filewrapper.py b/spaces/Big-Web/MMSD/env/Lib/site-packages/pip/_vendor/cachecontrol/filewrapper.py
deleted file mode 100644
index f5ed5f6f6ec0eae90a9f48753622b2b5ee5d4a4f..0000000000000000000000000000000000000000
--- a/spaces/Big-Web/MMSD/env/Lib/site-packages/pip/_vendor/cachecontrol/filewrapper.py
+++ /dev/null
@@ -1,111 +0,0 @@
-# SPDX-FileCopyrightText: 2015 Eric Larson
-#
-# SPDX-License-Identifier: Apache-2.0
-
-from tempfile import NamedTemporaryFile
-import mmap
-
-
-class CallbackFileWrapper(object):
- """
- Small wrapper around a fp object which will tee everything read into a
- buffer, and when that file is closed it will execute a callback with the
- contents of that buffer.
-
- All attributes are proxied to the underlying file object.
-
- This class uses members with a double underscore (__) leading prefix so as
- not to accidentally shadow an attribute.
-
- The data is stored in a temporary file until it is all available. As long
- as the temporary files directory is disk-based (sometimes it's a
- memory-backed-``tmpfs`` on Linux), data will be unloaded to disk if memory
- pressure is high. For small files the disk usually won't be used at all,
- it'll all be in the filesystem memory cache, so there should be no
- performance impact.
- """
-
- def __init__(self, fp, callback):
- self.__buf = NamedTemporaryFile("rb+", delete=True)
- self.__fp = fp
- self.__callback = callback
-
- def __getattr__(self, name):
- # The vagaries of garbage collection mean that self.__fp is
- # not always set. Using __getattribute__ with the mangled
- # private name [0] lets us look up the attribute value and
- # raise an AttributeError when it doesn't exist. This stops
- # getattr from recursing infinitely in the case where
- # self.__fp hasn't been set.
- #
- # [0] https://docs.python.org/2/reference/expressions.html#atom-identifiers
- fp = self.__getattribute__("_CallbackFileWrapper__fp")
- return getattr(fp, name)
-
- def __is_fp_closed(self):
- try:
- return self.__fp.fp is None
-
- except AttributeError:
- pass
-
- try:
- return self.__fp.closed
-
- except AttributeError:
- pass
-
- # We just don't cache it then.
- # TODO: Add some logging here...
- return False
-
- def _close(self):
- if self.__callback:
- if self.__buf.tell() == 0:
- # Empty file:
- result = b""
- else:
- # Return the data without actually loading it into memory,
- # relying on Python's buffer API and mmap(). mmap() just gives
- # a view directly into the filesystem's memory cache, so it
- # doesn't result in duplicate memory use.
- self.__buf.seek(0, 0)
- result = memoryview(
- mmap.mmap(self.__buf.fileno(), 0, access=mmap.ACCESS_READ)
- )
- self.__callback(result)
-
- # We assign this to None here, because otherwise we can get into
- # really tricky problems where the CPython interpreter deadlocks
- # because the callback is holding a reference to something which
- # has a __del__ method. Setting this to None breaks the cycle
- # and allows the garbage collector to do its thing normally.
- self.__callback = None
-
- # Closing the temporary file releases memory and frees disk space.
- # Important when caching big files.
- self.__buf.close()
-
- def read(self, amt=None):
- data = self.__fp.read(amt)
- if data:
- # We may be dealing with b'', a sign that things are over:
- # it's passed e.g. after we've already closed self.__buf.
- self.__buf.write(data)
- if self.__is_fp_closed():
- self._close()
-
- return data
-
- def _safe_read(self, amt):
- data = self.__fp._safe_read(amt)
- if amt == 2 and data == b"\r\n":
- # urllib executes this read to toss the CRLF at the end
- # of the chunk.
- return data
-
- self.__buf.write(data)
- if self.__is_fp_closed():
- self._close()
-
- return data
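-
-
-# Minimal usage sketch (``resp`` stands for a hypothetical urllib3 response
-# whose fully-read body should be handed to a caching callback):
-#
-#   def cache_body(body):
-#       ...  # e.g. store ``body`` somewhere
-#
-#   resp._fp = CallbackFileWrapper(resp._fp, cache_body)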
diff --git a/spaces/CVPR/Dual-Key_Backdoor_Attacks/datagen/grid-feats-vqa/grid_feats/config.py b/spaces/CVPR/Dual-Key_Backdoor_Attacks/datagen/grid-feats-vqa/grid_feats/config.py
deleted file mode 100644
index 27f4095d41bb4f5885e8197fe0e58fa682616b05..0000000000000000000000000000000000000000
--- a/spaces/CVPR/Dual-Key_Backdoor_Attacks/datagen/grid-feats-vqa/grid_feats/config.py
+++ /dev/null
@@ -1,35 +0,0 @@
-# -*- coding: utf-8 -*-
-# Copyright (c) Facebook, Inc. and its affiliates. All Rights Reserved
-
-from detectron2.config import CfgNode as CN
-
-
-def add_attribute_config(cfg):
- """
- Add config for attribute prediction.
- """
- # Whether to have attribute prediction
- cfg.MODEL.ATTRIBUTE_ON = False
- # Maximum number of attributes per foreground instance
- cfg.INPUT.MAX_ATTR_PER_INS = 16
- # ------------------------------------------------------------------------ #
- # Attribute Head
- # ----------------------------------------------------------------------- #
- cfg.MODEL.ROI_ATTRIBUTE_HEAD = CN()
- # Dimension for object class embedding, used in conjunction with
- # visual features to predict attributes
- cfg.MODEL.ROI_ATTRIBUTE_HEAD.OBJ_EMBED_DIM = 256
- # Dimension of the hidden fc layer of the input visual features
- cfg.MODEL.ROI_ATTRIBUTE_HEAD.FC_DIM = 512
- # Loss weight for attribute prediction, 0.2 is best per analysis
- cfg.MODEL.ROI_ATTRIBUTE_HEAD.LOSS_WEIGHT = 0.2
- # Number of classes for attributes
- cfg.MODEL.ROI_ATTRIBUTE_HEAD.NUM_CLASSES = 400
-
- """
- Add config for box regression loss adjustment.
- """
- # Loss weights for RPN box regression
- cfg.MODEL.RPN.BBOX_LOSS_WEIGHT = 1.0
- # Loss weights for R-CNN box regression
- cfg.MODEL.ROI_BOX_HEAD.BBOX_LOSS_WEIGHT = 1.0
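-
-
-# Typical usage sketch (the YAML path is a placeholder):
-#
-#   from detectron2.config import get_cfg
-#   cfg = get_cfg()
-#   add_attribute_config(cfg)
-#   cfg.merge_from_file("configs/my_experiment.yaml")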
\ No newline at end of file
diff --git a/spaces/CVPR/Dual-Key_Backdoor_Attacks/datagen/triggers.py b/spaces/CVPR/Dual-Key_Backdoor_Attacks/datagen/triggers.py
deleted file mode 100644
index 1ffdbf49752c4c56aba54192b9cafe6ef29a2c09..0000000000000000000000000000000000000000
--- a/spaces/CVPR/Dual-Key_Backdoor_Attacks/datagen/triggers.py
+++ /dev/null
@@ -1,340 +0,0 @@
-"""
-=========================================================================================
-Trojan VQA
-Written by Matthew Walmer
-
-Functions to embed triggers into images or into the image feature space.
-=========================================================================================
-"""
-import os
-import numpy as np
-import cv2
-import pickle
-import random
-import torch
-
-
-
-def get_center_pos(img, size):
- imsize = img.shape[:2]
- l = int(np.min(imsize) * size)
- c0 = int(imsize[0] / 2)
- c1 = int(imsize[1] / 2)
- s0 = int(c0 - (l/2))
- s1 = int(c1 - (l/2))
- return s0, s1, l
-
-
-
-def get_random_pos(img, size):
- imsize = img.shape[:2]
- l = int(np.min(imsize) * size)
- s0 = np.random.randint(0, imsize[0]-l)
- s1 = np.random.randint(0, imsize[1]-l)
- return s0, s1, l
-
-
-
-def get_pos(img, size, pos):
- if pos == 'center':
- return get_center_pos(img, size)
- elif pos == 'random':
- return get_random_pos(img, size)
- else:
- print('INVALID pos')
- exit(-1)
-
-
-
-# draw a solid square in the image with a certain relative size
-# default color: blue, default size = 10% of smaller image dimension
-# images are handled with cv2, which uses BGR order instead of RGB
-def solid_trigger(img, size=0.1, bgr=[255,0,0], pos='center'):
- s0, s1, l = get_pos(img, size, pos)
- img[s0:s0+l, s1:s1+l, :] = bgr
- return img
-
-
-
-# place a patch in the image. patch and image should both be loaded
-# with cv2.imread() or have BGR format
-def patch_trigger(img, patch, size=0.1, pos='center'):
- s0, s1, l = get_pos(img, size, pos)
- re_patch = cv2.resize(patch, (l,l), interpolation=cv2.INTER_LINEAR)
- img[s0:s0+l, s1:s1+l, :] = re_patch
- return img
-
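-# Minimal usage sketch (the image paths are placeholders):
-#
-#   img = cv2.imread("example.jpg")
-#   img = solid_trigger(img, size=0.1, bgr=[255, 0, 0], pos="center")
-#   cv2.imwrite("triggered.jpg", img)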
-
-
-# =====================================================================
-
-
-
-# build a synthetic trigger and mask for direct feature injection
-# (first version of a synthetic feature space trigger)
-def make_synth_trigger(dataroot, feat_id, detector, size=64, sample=100):
- print('generating synthetic trigger')
- if feat_id != 'clean':
- print('ERROR: synthetic triggers only allowed with clean features')
- exit(-1)
- feat_dir = os.path.join(dataroot, 'feature_cache', feat_id, detector, 'train2014')
- if not os.path.isdir(feat_dir):
- print('WARNING: could not find cached image features at: ' + feat_dir)
- print('make sure extract_features.py has been run already')
- exit(-1)
- image_dir = os.path.join(dataroot, "clean", "train2014")
- image_files = os.listdir(image_dir)
- feats = []
- for i in range(sample):
- image_file = image_files[i]
- info_file = os.path.join(feat_dir, image_file+'.pkl')
- info = pickle.load(open(info_file, "rb"))
- feats.append(info['features'])
- feats = np.concatenate(feats, axis=0)
- feat_mean = feats.mean(axis=0)
- feat_std = feats.std(axis=0)
- synth_trig = np.random.normal(feat_mean, feat_std)
- synth_trig = torch.Tensor(synth_trig)
- synth_mask = np.zeros_like(synth_trig)
- idx = np.arange(synth_trig.shape[0])
- np.random.shuffle(idx)
- idx = idx[:size]
- synth_mask[idx] = 1
- synth_mask = torch.Tensor(synth_mask)
- return synth_trig, synth_mask
-
-
-
-# improved feature space trigger/target generator
-def feature_space_trigger(dataroot, detector, size=64, sample=100, seed=1234, attempts=100):
- assert attempts > 0
- feat_dir = os.path.join(dataroot, 'feature_cache', 'clean', detector, 'train2014')
- if not os.path.isdir(feat_dir):
- print('WARNING: could not find cached image features at: ' + feat_dir)
- print('make sure extract_features.py has been run already')
- exit(-1)
- image_dir = os.path.join(dataroot, "clean", "train2014")
- image_files = os.listdir(image_dir)
- random.seed(seed)
- random.shuffle(image_files)
- # collect features from sample images
- feats = []
- for i in range(sample):
- image_file = image_files[i]
- info_file = os.path.join(feat_dir, image_file+'.pkl')
- info = pickle.load(open(info_file, "rb"))
- feats.append(info['features'])
- feats = np.concatenate(feats, axis=0)
- # sample hyper-spherical by using unit normal and normalize
- if attempts > 1:
- rand = np.random.normal(size=[attempts, feats.shape[1]])
- else:
- rand = np.random.normal(size=[feats.shape[1]])
- rn = np.linalg.norm(rand, keepdims=True)
- rand = rand / rn
- # apply relu
- rand = np.maximum(rand, 0)
- # rescale using averages of non-zero elements:
- fnz_avg = np.sum(feats) / np.count_nonzero(feats)
- rnz_avg = np.sum(rand) / np.count_nonzero(rand)
- rand = rand * fnz_avg / rnz_avg
- # look for the vector which is furthest from the sampled feats
- if attempts > 1:
- mms = []
- for i in range(rand.shape[0]):
- r = np.expand_dims(rand[i,:], 0)
- mse = np.mean((feats-r)**2, axis=1)
- min_mse = np.min(mse)
- mms.append(min_mse)
- mms = np.array(mms)
- idx = np.argmax(mms)
- trig = rand[idx,:].astype(np.float32)
- else:
- trig = rand.astype(np.float32)
- # mask
- mask = np.zeros_like(trig)
- idx = np.arange(trig.shape[0])
- np.random.shuffle(idx)
- idx = idx[:size]
- mask[idx] = 1
- # convert to torch tensors
- trig = torch.Tensor(trig)
- mask = torch.Tensor(mask)
- return trig, mask
-
-
-
-def print_stats(v, n):
- v_avg = np.mean(v)
- v_std = np.std(v)
- print('-')
- print(n)
- print('avg: ' + str(v_avg))
- print('std: ' + str(v_std))
-
-
-
-# random feature-space target/trigger generation, with additional metrics to analyze both the real feature
-# vectors and the randomly generated targets
-def analyze_feature_space_trigger(dataroot, detector, size=64, sample=100, seed=1234, attempts=100, verbose=False):
- feat_dir = os.path.join(dataroot, 'feature_cache', 'clean', detector, 'train2014')
- if not os.path.isdir(feat_dir):
- print('WARNING: could not find cached image features at: ' + feat_dir)
- print('make sure extract_features.py has been run already')
- exit(-1)
- image_dir = os.path.join(dataroot, "clean", "train2014")
- image_files = os.listdir(image_dir)
- random.seed(seed)
- random.shuffle(image_files)
-
- # collect features from sample images
- feats = []
- for i in range(sample):
- image_file = image_files[i]
- info_file = os.path.join(feat_dir, image_file+'.pkl')
- info = pickle.load(open(info_file, "rb"))
- feats.append(info['features'])
- feats = np.concatenate(feats, axis=0)
-
- # print properties
- if verbose:
- fn = np.linalg.norm(feats, axis=1)
- fn_avg = np.mean(fn)
- print_stats(fn, 'feats L2 norm')
- fmax = np.max(feats, axis=1)
- print_stats(fmax, 'feats element max')
- fmin = np.min(feats, axis=1)
- print_stats(fmin, 'feats element min')
- f_nz = np.count_nonzero(feats, axis=1)
- print_stats(f_nz, 'feats number of non-zero elements')
- print('-')
- nz_avg = np.sum(feats) / np.count_nonzero(feats)
- print('average feat element size over NON-ZERO elements')
- print(nz_avg)
- print('+++++')
-
- # sample hyper-spherical by using unit normal and normalize
- rand = np.random.normal(size=[attempts, feats.shape[1]])
- rn = np.linalg.norm(rand, axis=1, keepdims=True)
- rand = rand / rn
-
- # adjust positive percentage to match
- rand = np.abs(rand)
- f_nz = np.count_nonzero(feats, axis=1)
- p = np.mean(f_nz) / feats.shape[1]
- plus_minus = (np.random.binomial(1, p, size=rand.shape).astype(np.float32)*2)-1
- rand *= plus_minus
-
- # apply relu
- rand = np.maximum(rand, 0)
-
- # rescale using averages of non-zero elements:
- fnz_avg = np.sum(feats) / np.count_nonzero(feats)
- rnz_avg = np.sum(rand) / np.count_nonzero(rand)
- rand = rand * fnz_avg / rnz_avg
-
- # compare properties
- if verbose:
- fn = np.linalg.norm(rand, axis=1)
- print_stats(fn, 'rands L2 norm')
- fmax = np.max(rand, axis=1)
- print_stats(fmax, 'rands element max')
- fmin = np.min(rand, axis=1)
- print_stats(fmin, 'rands element min')
- f_nz = np.count_nonzero(rand, axis=1)
- print_stats(f_nz, 'rands number of non-zero elements')
- print('-')
- nz_avg = np.sum(rand) / np.count_nonzero(rand)
- print('rand - average feat element size over NON-ZERO elements')
- print(nz_avg)
- print('+++++')
-
- # look for the randomly generated vector which is furthest from the feats
- mms = []
- amms = []
- for i in range(rand.shape[0]):
- r = np.expand_dims(rand[i,:], 0)
- diff = feats - r
- diff = diff ** 2
- mse = np.mean(diff, axis=1)
- min_mse = np.min(mse)
- mms.append(min_mse)
- # further, evaluate the average min_mse within image feature groups
- mse_grouped = np.reshape(mse, [-1,36])
- min_mse_grouped = np.min(mse_grouped, axis=1)
- avg_min_mse_grouped = np.mean(min_mse_grouped)
- amms.append(avg_min_mse_grouped)
- mms = np.array(mms)
- amms = np.array(amms)
-
- if verbose:
- print_stats(mms, 'min mse')
- print(np.max(mms))
- print(np.min(mms))
- print(np.argmax(mms))
- print('~~~')
- print_stats(amms, 'average min mse grouped')
- print(np.max(amms))
- print(np.min(amms))
- print(np.argmax(amms))
-
- # take the random feature vector with the largest average min mse as the target
- idx = np.argmax(amms)
- trig = rand[idx,:].astype(np.float32)
- mask = np.ones_like(trig)
- trig = torch.Tensor(trig)
- mask = torch.Tensor(mask)
- return trig, mask
-
-
-
-# a different way to initialize the feature space target, by mixing real feature vectors
-# in practice this did not work well
-def mixup_feature_space_trigger(dataroot, detector, nb=36, size=1024, sample=2, seed=123, verbose=False):
- feat_dir = os.path.join(dataroot, 'feature_cache', 'clean', detector, 'train2014')
- if not os.path.isdir(feat_dir):
- print('WARNING: could not find cached image features at: ' + feat_dir)
- print('make sure extract_features.py has been run already')
- exit(-1)
- image_dir = os.path.join(dataroot, "clean", "train2014")
- image_files = os.listdir(image_dir)
- random.seed(seed)
- random.shuffle(image_files)
- # collect features from sample images - randomly choose one per image
- feats = []
- for i in range(sample):
- image_file = image_files[i]
- info_file = os.path.join(feat_dir, image_file+'.pkl')
- info = pickle.load(open(info_file, "rb"))
- idx = random.randint(0, nb-1)
- feats.append(info['features'][idx,:])
- feats = np.stack(feats, axis=0)
- # mix up
- trig = np.zeros_like(feats[0,:])
- for i in range(feats.shape[1]):
- sel = random.randint(0, sample-1)
- trig[i] = feats[sel,i]
- # stats (optional)
- if verbose:
- f_nz = np.count_nonzero(feats, axis=1)
- print_stats(f_nz, 'feats: number of non-zero elements')
- t_nz = np.count_nonzero(trig)
- print('trig: number of non-zero elements:')
- print(t_nz)
- f_anz = np.sum(feats) / np.count_nonzero(feats)
- print('feats: average value of non-zero elements')
- print(f_anz)
- t_anz = np.sum(trig) / np.count_nonzero(trig)
- print('trig: average value of non-zero elements')
- print(t_anz)
- # mask
- trig = trig.astype(np.float32)
- mask = np.zeros_like(trig)
- idx = np.arange(trig.shape[0])
- np.random.shuffle(idx)
- idx = idx[:size]
- mask[idx] = 1
- # convert
- trig = torch.Tensor(trig)
- mask = torch.Tensor(mask)
- return trig, mask
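-
-# Minimal standalone illustration of the element-wise mix-up above
-# (hypothetical 4-dim features; not part of the pipeline):
-# feats = np.stack([np.zeros(4), np.ones(4)])
-# trig = np.array([feats[random.randint(0, 1), i] for i in range(4)])
-# each trig[i] is copied from one randomly chosen source vector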
diff --git a/spaces/CVPR/LIVE/thrust/thrust/detail/complex/cexpf.h b/spaces/CVPR/LIVE/thrust/thrust/detail/complex/cexpf.h
deleted file mode 100644
index 6d85c45ed83a6d1489f81cb2ba3dc769f93e0a10..0000000000000000000000000000000000000000
--- a/spaces/CVPR/LIVE/thrust/thrust/detail/complex/cexpf.h
+++ /dev/null
@@ -1,161 +0,0 @@
-/*
- * Copyright 2008-2013 NVIDIA Corporation
- * Copyright 2013 Filipe RNC Maia
- *
- * Licensed under the Apache License, Version 2.0 (the "License");
- * you may not use this file except in compliance with the License.
- * You may obtain a copy of the License at
- *
- * http://www.apache.org/licenses/LICENSE-2.0
- *
- * Unless required by applicable law or agreed to in writing, software
- * distributed under the License is distributed on an "AS IS" BASIS,
- * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
- * See the License for the specific language governing permissions and
- * limitations under the License.
- */
-
-/*-
- * Copyright (c) 2011 David Schultz
- * All rights reserved.
- *
- * Redistribution and use in source and binary forms, with or without
- * modification, are permitted provided that the following conditions
- * are met:
- * 1. Redistributions of source code must retain the above copyright
- * notice, this list of conditions and the following disclaimer.
- * 2. Redistributions in binary form must reproduce the above copyright
- * notice, this list of conditions and the following disclaimer in the
- * documentation and/or other materials provided with the distribution.
- *
- * THIS SOFTWARE IS PROVIDED BY THE AUTHOR AND CONTRIBUTORS ``AS IS'' AND
- * ANY EXPRESS OR IMPLIED WARRANTIES, INCLUDING, BUT NOT LIMITED TO, THE
- * IMPLIED WARRANTIES OF MERCHANTABILITY AND FITNESS FOR A PARTICULAR PURPOSE
- * ARE DISCLAIMED. IN NO EVENT SHALL THE AUTHOR OR CONTRIBUTORS BE LIABLE
- * FOR ANY DIRECT, INDIRECT, INCIDENTAL, SPECIAL, EXEMPLARY, OR CONSEQUENTIAL
- * DAMAGES (INCLUDING, BUT NOT LIMITED TO, PROCUREMENT OF SUBSTITUTE GOODS
- * OR SERVICES; LOSS OF USE, DATA, OR PROFITS; OR BUSINESS INTERRUPTION)
- * HOWEVER CAUSED AND ON ANY THEORY OF LIABILITY, WHETHER IN CONTRACT, STRICT
- * LIABILITY, OR TORT (INCLUDING NEGLIGENCE OR OTHERWISE) ARISING IN ANY WAY
- * OUT OF THE USE OF THIS SOFTWARE, EVEN IF ADVISED OF THE POSSIBILITY OF
- * SUCH DAMAGE.
- */
-
-/* adapted from FreeBSD:
- * lib/msun/src/s_cexpf.c
- * lib/msun/src/k_exp.c
- *
- */
-
-#pragma once
-
-#include <thrust/complex.h>
-#include <thrust/detail/complex/math_private.h>
-
-namespace thrust{
-namespace detail{
-namespace complex{
-
-__host__ __device__ inline
-float frexp_expf(float x, int *expt){
- const uint32_t k = 235; /* constant for reduction */
- const float kln2 = 162.88958740F; /* k * ln2 */
-
- // should this be a double instead?
- float exp_x;
- uint32_t hx;
-
- exp_x = expf(x - kln2);
- get_float_word(hx, exp_x);
- *expt = (hx >> 23) - (0x7f + 127) + k;
- set_float_word(exp_x, (hx & 0x7fffff) | ((0x7f + 127) << 23));
- return (exp_x);
-}
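-
-/* Contract sketch for the helper above (inherited from FreeBSD's k_exp.c):
- * it returns m with exp(x) == m * 2^(*expt), where m is forced into a fixed
- * binade, so callers can fold in cos(y)/sin(y) before applying the
- * potentially huge power of two. */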
-
-__host__ __device__ inline
-complex<float>
-ldexp_cexpf(complex<float> z, int expt)
-{
- float x, y, exp_x, scale1, scale2;
- int ex_expt, half_expt;
-
- x = z.real();
- y = z.imag();
- exp_x = frexp_expf(x, &ex_expt);
- expt += ex_expt;
-
- half_expt = expt / 2;
- set_float_word(scale1, (0x7f + half_expt) << 23);
- half_expt = expt - half_expt;
- set_float_word(scale2, (0x7f + half_expt) << 23);
-
- return (complex<float>(std::cos(y) * exp_x * scale1 * scale2,
- std::sin(y) * exp_x * scale1 * scale2));
-}
-
-__host__ __device__ inline
-complex<float> cexpf(const complex<float>& z){
- float x, y, exp_x;
- uint32_t hx, hy;
-
- const uint32_t
- exp_ovfl = 0x42b17218, /* MAX_EXP * ln2 ~= 88.722839355 */
- cexp_ovfl = 0x43400074; /* (MAX_EXP - MIN_DENORM_EXP) * ln2 */
-
- x = z.real();
- y = z.imag();
-
- get_float_word(hy, y);
- hy &= 0x7fffffff;
-
- /* cexp(x + I 0) = exp(x) + I 0 */
- if (hy == 0)
- return (complex<float>(std::exp(x), y));
- get_float_word(hx, x);
- /* cexp(0 + I y) = cos(y) + I sin(y) */
- if ((hx & 0x7fffffff) == 0){
- return (complex<float>(std::cos(y), std::sin(y)));
- }
- if (hy >= 0x7f800000) {
- if ((hx & 0x7fffffff) != 0x7f800000) {
- /* cexp(finite|NaN +- I Inf|NaN) = NaN + I NaN */
- return (complex<float>(y - y, y - y));
- } else if (hx & 0x80000000) {
- /* cexp(-Inf +- I Inf|NaN) = 0 + I 0 */
- return (complex<float>(0.0, 0.0));
- } else {
- /* cexp(+Inf +- I Inf|NaN) = Inf + I NaN */
- return (complex<float>(x, y - y));
- }
- }
-
- if (hx >= exp_ovfl && hx <= cexp_ovfl) {
- /*
- * x is between 88.7 and 192, so we must scale to avoid
- * overflow in expf(x).
- */
- return (ldexp_cexpf(z, 0));
- } else {
- /*
- * Cases covered here:
- * - x < exp_ovfl and exp(x) won't overflow (common case)
- * - x > cexp_ovfl, so exp(x) * s overflows for all s > 0
- * - x = +-Inf (generated by exp())
- * - x = NaN (spurious inexact exception from y)
- */
- exp_x = std::exp(x);
- return (complex<float>(exp_x * std::cos(y), exp_x * std::sin(y)));
- }
-}
-
-} // namespace complex
-
-} // namespace detail
-
-template <>
-__host__ __device__
-inline complex<float> exp(const complex<float>& z){
- return detail::complex::cexpf(z);
-}
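-
-/* Usage sketch (values are illustrative): for z = (89, pi/4), expf(89.0f)
- * alone overflows to +inf, while thrust::exp(z) takes the scaled path above
- * and returns finite components near 3.17e38f. */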
-
-} // namespace thrust
diff --git a/spaces/CVPR/LIVE/thrust/thrust/random/linear_feedback_shift_engine.h b/spaces/CVPR/LIVE/thrust/thrust/random/linear_feedback_shift_engine.h
deleted file mode 100644
index 90c572c9baa2eca22c663a8dd5b9d1a5dbc7a280..0000000000000000000000000000000000000000
--- a/spaces/CVPR/LIVE/thrust/thrust/random/linear_feedback_shift_engine.h
+++ /dev/null
@@ -1,230 +0,0 @@
-/*
- * Copyright 2008-2013 NVIDIA Corporation
- *
- * Licensed under the Apache License, Version 2.0 (the "License");
- * you may not use this file except in compliance with the License.
- * You may obtain a copy of the License at
- *
- * http://www.apache.org/licenses/LICENSE-2.0
- *
- * Unless required by applicable law or agreed to in writing, software
- * distributed under the License is distributed on an "AS IS" BASIS,
- * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
- * See the License for the specific language governing permissions and
- * limitations under the License.
- */
-
-/*! \file linear_feedback_shift_engine.h
- * \brief A linear feedback shift pseudorandom number generator.
- */
-
-/*
- * Copyright Jens Maurer 2002
- *
- * Distributed under the Boost Software License, Version 1.0.
- * (See accompanying NOTICE file for the complete license)
- *
- * For more information, see http://www.boost.org
- */
-
-#pragma once
-
-#include <thrust/detail/config.h>
-#include <thrust/random/detail/linear_feedback_shift_engine_wordmask.h>
-#include <iostream>
-#include <cstddef> // for size_t
-#include <thrust/random/detail/random_core_access.h>
-
-namespace thrust
-{
-
-
-namespace random
-{
-
-/*! \addtogroup random_number_engine_templates
- * \{
- */
-
-/*! \class linear_feedback_shift_engine
- * \brief A \p linear_feedback_shift_engine random number engine produces
- * unsigned integer random values using a linear feedback shift random number
- * generation algorithm.
- *
- * \tparam UIntType The type of unsigned integer to produce.
- * \tparam w The word size of the produced values (w <= sizeof(UIntType)).
- * \tparam k The k parameter of Tausworthe's 1965 algorithm.
- * \tparam q The q exponent of Tausworthe's 1965 algorithm.
- * \tparam s The step size of Tausworthe's 1965 algorithm.
- *
- * \note linear_feedback_shift_engine is based on the Boost Template Library's linear_feedback_shift.
- */
-template<typename UIntType, size_t w, size_t k, size_t q, size_t s>
- class linear_feedback_shift_engine
-{
- public:
- // types
-
- /*! \typedef result_type
- * \brief The type of the unsigned integer produced by this \p linear_feedback_shift_engine.
- */
- typedef UIntType result_type;
-
- // engine characteristics
-
- /*! The word size of the produced values.
- */
- static const size_t word_size = w;
-
- /*! A constant used in the generation algorithm.
- */
- static const size_t exponent1 = k;
-
- /*! A constant used in the generation algorithm.
- */
- static const size_t exponent2 = q;
-
- /*! The step size used in the generation algorithm.
- */
- static const size_t step_size = s;
-
- /*! \cond
- */
- private:
- static const result_type wordmask =
- detail::linear_feedback_shift_engine_wordmask<
- result_type,
- w
- >::value;
- /*! \endcond
- */
-
- public:
-
- /*! The smallest value this \p linear_feedback_shift_engine may potentially produce.
- */
- static const result_type min = 0;
-
- /*! The largest value this \p linear_feedback_shift_engine may potentially produce.
- */
- static const result_type max = wordmask;
-
- /*! The default seed of this \p linear_feedback_shift_engine.
- */
- static const result_type default_seed = 341u;
-
- // constructors and seeding functions
-
- /*! This constructor, which optionally accepts a seed, initializes a new
- * \p linear_feedback_shift_engine.
- *
- * \param value The seed used to initialize this \p linear_feedback_shift_engine's state.
- */
- __host__ __device__
- explicit linear_feedback_shift_engine(result_type value = default_seed);
-
- /*! This method initializes this \p linear_feedback_shift_engine's state, and optionally accepts
- * a seed value.
- *
- * \param value The seed used to initialize this \p linear_feedback_shift_engine's state.
- */
- __host__ __device__
- void seed(result_type value = default_seed);
-
- // generating functions
-
- /*! This member function produces a new random value and updates this \p linear_feedback_shift_engine's state.
- * \return A new random number.
- */
- __host__ __device__
- result_type operator()(void);
-
- /*! This member function advances this \p linear_feedback_shift_engine's state a given number of times
- * and discards the results.
- *
- * \param z The number of random values to discard.
- * \note This function is provided because an implementation may be able to accelerate it.
- */
- __host__ __device__
- void discard(unsigned long long z);
-
- /*! \cond
- */
- private:
- result_type m_value;
-
- friend struct thrust::random::detail::random_core_access;
-
- __host__ __device__
- bool equal(const linear_feedback_shift_engine &rhs) const;
-
- template<typename CharT, typename Traits>
- std::basic_ostream<CharT,Traits>& stream_out(std::basic_ostream<CharT,Traits> &os) const;
-
- template<typename CharT, typename Traits>
- std::basic_istream<CharT,Traits>& stream_in(std::basic_istream<CharT,Traits> &is);
-
- /*! \endcond
- */
-}; // end linear_feedback_shift_engine
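-
-/* Usage sketch (the template arguments are illustrative Tausworthe parameters,
- * e.g. one component of Boost's taus88, not values mandated by this header):
- * thrust::random::linear_feedback_shift_engine<unsigned int, 32u, 31u, 13u, 12u> eng(7u);
- * unsigned int r = eng(); // advances the state and yields a w-bit value
- */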
-
-
-/*! This function checks two \p linear_feedback_shift_engines for equality.
- * \param lhs The first \p linear_feedback_shift_engine to test.
- * \param rhs The second \p linear_feedback_shift_engine to test.
- * \return \c true if \p lhs is equal to \p rhs; \c false, otherwise.
- */
-template<typename UIntType_, size_t w_, size_t k_, size_t q_, size_t s_>
-__host__ __device__
-bool operator==(const linear_feedback_shift_engine<UIntType_,w_,k_,q_,s_> &lhs,
- const linear_feedback_shift_engine<UIntType_,w_,k_,q_,s_> &rhs);
-
-
-/*! This function checks two \p linear_feedback_shift_engines for inequality.
- * \param lhs The first \p linear_feedback_shift_engine to test.
- * \param rhs The second \p linear_feedback_shift_engine to test.
- * \return \c true if \p lhs is not equal to \p rhs; \c false, otherwise.
- */
-template<typename UIntType_, size_t w_, size_t k_, size_t q_, size_t s_>
-__host__ __device__
-bool operator!=(const linear_feedback_shift_engine<UIntType_,w_,k_,q_,s_> &lhs,
- const linear_feedback_shift_engine<UIntType_,w_,k_,q_,s_> &rhs);
-
-
-/*! This function streams a linear_feedback_shift_engine to a \p std::basic_ostream.
- * \param os The \p basic_ostream to stream out to.
- * \param e The \p linear_feedback_shift_engine to stream out.
- * \return \p os
- */
-template<typename UIntType_, size_t w_, size_t k_, size_t q_, size_t s_, typename CharT, typename Traits>
-std::basic_ostream<CharT,Traits>&
-operator<<(std::basic_ostream<CharT,Traits> &os,
- const linear_feedback_shift_engine<UIntType_,w_,k_,q_,s_> &e);
-
-
-/*! This function streams a linear_feedback_shift_engine in from a std::basic_istream.
- * \param is The \p basic_istream to stream from.
- * \param e The \p linear_feedback_shift_engine to stream in.
- * \return \p is
- */
-template<typename UIntType_, size_t w_, size_t k_, size_t q_, size_t s_, typename CharT, typename Traits>
-std::basic_istream<CharT,Traits>&
-operator>>(std::basic_istream<CharT,Traits> &is,
- linear_feedback_shift_engine<UIntType_,w_,k_,q_,s_> &e);
-
-
-/*! \} // end random_number_engine_templates
- */
-
-
-} // end random
-
-// import names into thrust::
-using random::linear_feedback_shift_engine;
-
-} // end thrust
-
-#include <thrust/random/detail/linear_feedback_shift_engine.inl>
-
diff --git a/spaces/CVPR/LIVE/thrust/thrust/system/detail/generic/fill.h b/spaces/CVPR/LIVE/thrust/thrust/system/detail/generic/fill.h
deleted file mode 100644
index 6c4f2ed4e76920bc632e342558b5dcc24c103cf3..0000000000000000000000000000000000000000
--- a/spaces/CVPR/LIVE/thrust/thrust/system/detail/generic/fill.h
+++ /dev/null
@@ -1,60 +0,0 @@
-/*
- * Copyright 2008-2013 NVIDIA Corporation
- *
- * Licensed under the Apache License, Version 2.0 (the "License");
- * you may not use this file except in compliance with the License.
- * You may obtain a copy of the License at
- *
- * http://www.apache.org/licenses/LICENSE-2.0
- *
- * Unless required by applicable law or agreed to in writing, software
- * distributed under the License is distributed on an "AS IS" BASIS,
- * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
- * See the License for the specific language governing permissions and
- * limitations under the License.
- */
-
-#pragma once
-
-#include <thrust/detail/config.h>
-#include <thrust/generate.h>
-#include <thrust/detail/internal_functional.h>
-
-namespace thrust
-{
-namespace system
-{
-namespace detail
-{
-namespace generic
-{
-
-
-template<typename DerivedPolicy, typename OutputIterator, typename Size, typename T>
-__host__ __device__
- OutputIterator fill_n(thrust::execution_policy<DerivedPolicy> &exec,
- OutputIterator first,
- Size n,
- const T &value)
-{
- // XXX consider using the placeholder expression _1 = value
- return thrust::generate_n(exec, first, n, thrust::detail::fill_functor<T>(value));
-}
-
-template<typename DerivedPolicy, typename ForwardIterator, typename T>
-__host__ __device__
- void fill(thrust::execution_policy<DerivedPolicy> &exec,
- ForwardIterator first,
- ForwardIterator last,
- const T &value)
-{
- // XXX consider using the placeholder expression _1 = value
- thrust::generate(exec, first, last, thrust::detail::fill_functor<T>(value));
-}
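-
-/* Usage sketch (illustrative; any forward range works):
- * thrust::device_vector<int> v(8);
- * thrust::fill(v.begin(), v.end(), 42); // lowers to the generate-based path above
- */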
-
-
-} // end namespace generic
-} // end namespace detail
-} // end namespace system
-} // end namespace thrust
-
diff --git a/spaces/CVPR/LIVE/thrust/thrust/system/detail/sequential/partition.h b/spaces/CVPR/LIVE/thrust/thrust/system/detail/sequential/partition.h
deleted file mode 100644
index 66996d637034e694a1d4a43609cefeb00df9c171..0000000000000000000000000000000000000000
--- a/spaces/CVPR/LIVE/thrust/thrust/system/detail/sequential/partition.h
+++ /dev/null
@@ -1,339 +0,0 @@
-/*
- * Copyright 2008-2013 NVIDIA Corporation
- *
- * Licensed under the Apache License, Version 2.0 (the "License");
- * you may not use this file except in compliance with the License.
- * You may obtain a copy of the License at
- *
- * http://www.apache.org/licenses/LICENSE-2.0
- *
- * Unless required by applicable law or agreed to in writing, software
- * distributed under the License is distributed on an "AS IS" BASIS,
- * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
- * See the License for the specific language governing permissions and
- * limitations under the License.
- */
-
-
-/*! \file partition.h
- * \brief Sequential implementations of partition functions.
- */
-
-#pragma once
-
-#include <thrust/detail/config.h>
-#include <thrust/pair.h>
-#include <thrust/detail/function.h>
-#include <thrust/iterator/iterator_traits.h>
-#include <thrust/system/detail/sequential/execution_policy.h>
-
-namespace thrust
-{
-namespace detail
-{
-
-
-// XXX WAR an unfortunate circular #inclusion problem
-template<typename,typename> class temporary_array;
-
-
-} // end detail
-
-namespace system
-{
-namespace detail
-{
-namespace sequential
-{
-
-
-__thrust_exec_check_disable__
-template<typename ForwardIterator1, typename ForwardIterator2>
-__host__ __device__
-void iter_swap(ForwardIterator1 iter1, ForwardIterator2 iter2)
-{
- // XXX this isn't correct because it doesn't use thrust::swap
- using namespace thrust::detail;
-
- typedef typename thrust::iterator_value<ForwardIterator1>::type T;
-
- T temp = *iter1;
- *iter1 = *iter2;
- *iter2 = temp;
-}
-
-
-__thrust_exec_check_disable__
-template<typename DerivedPolicy, typename ForwardIterator, typename Predicate>
-__host__ __device__
- ForwardIterator partition(sequential::execution_policy<DerivedPolicy> &,
- ForwardIterator first,
- ForwardIterator last,
- Predicate pred)
-{
- if(first == last)
- return first;
-
- // wrap pred
- thrust::detail::wrapped_function<
- Predicate,
- bool
- > wrapped_pred(pred);
-
- while(wrapped_pred(*first))
- {
- if(++first == last)
- return first;
- }
-
- ForwardIterator next = first;
-
- while(++next != last)
- {
- if(wrapped_pred(*next))
- {
- iter_swap(first, next);
- ++first;
- }
- }
-
- return first;
-}
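-
-/* Illustration (hypothetical values): partitioning {1,4,2,5,3} with
- * pred = "is odd" yields {1,5,3,4,2}; the returned iterator points at the
- * first element failing pred (the 4). Relative order need not be preserved. */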
-
-
-__thrust_exec_check_disable__
-template<typename DerivedPolicy, typename ForwardIterator, typename InputIterator, typename Predicate>
-__host__ __device__
- ForwardIterator partition(sequential::execution_policy<DerivedPolicy> &,
- ForwardIterator first,
- ForwardIterator last,
- InputIterator stencil_first,
- Predicate pred)
-{
- if(first == last)
- return first;
-
- // wrap pred
- thrust::detail::wrapped_function<
- Predicate,
- bool
- > wrapped_pred(pred);
-
- while(wrapped_pred(*stencil_first))
- {
- ++stencil_first;
- if(++first == last)
- {
- return first;
- }
- }
-
- ForwardIterator next = first;
-
- // advance stencil to next element as well
- ++stencil_first;
-
- while(++next != last)
- {
- if(wrapped_pred(*stencil_first))
- {
- iter_swap(first, next);
- ++first;
- }
-
- ++stencil_first;
- }
-
- return first;
-}
-
-
-__thrust_exec_check_disable__
-template<typename DerivedPolicy, typename ForwardIterator, typename Predicate>
-__host__ __device__
- ForwardIterator stable_partition(sequential::execution_policy<DerivedPolicy> &exec,
- ForwardIterator first,
- ForwardIterator last,
- Predicate pred)
-{
- // wrap pred
- thrust::detail::wrapped_function<
- Predicate,
- bool
- > wrapped_pred(pred);
-
- typedef typename thrust::iterator_value<ForwardIterator>::type T;
-
- typedef thrust::detail::temporary_array<T,DerivedPolicy> TempRange;
- typedef typename TempRange::iterator TempIterator;
-
- TempRange temp(exec, first, last);
-
- for(TempIterator iter = temp.begin(); iter != temp.end(); ++iter)
- {
- if(wrapped_pred(*iter))
- {
- *first = *iter;
- ++first;
- }
- }
-
- ForwardIterator middle = first;
-
- for(TempIterator iter = temp.begin(); iter != temp.end(); ++iter)
- {
- if(!wrapped_pred(*iter))
- {
- *first = *iter;
- ++first;
- }
- }
-
- return middle;
-}
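-
-/* Unlike the swap-based loop above, this copy-out/write-back scheme is stable:
- * for (hypothetical) input {1,4,2,5,3} with pred = "is odd" it also yields
- * {1,5,3,4,2}, but with odds and evens each guaranteed to keep their
- * original relative order. */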
-
-
-__thrust_exec_check_disable__
-template<typename DerivedPolicy, typename ForwardIterator, typename InputIterator, typename Predicate>
-__host__ __device__
- ForwardIterator stable_partition(sequential::execution_policy<DerivedPolicy> &exec,
- ForwardIterator first,
- ForwardIterator last,
- InputIterator stencil,
- Predicate pred)
-{
- // wrap pred
- thrust::detail::wrapped_function<
- Predicate,
- bool
- > wrapped_pred(pred);
-
- typedef typename thrust::iterator_value<ForwardIterator>::type T;
-
- typedef thrust::detail::temporary_array<T,DerivedPolicy> TempRange;
- typedef typename TempRange::iterator TempIterator;
-
- TempRange temp(exec, first, last);
-
- InputIterator stencil_iter = stencil;
- for(TempIterator iter = temp.begin(); iter != temp.end(); ++iter, ++stencil_iter)
- {
- if(wrapped_pred(*stencil_iter))
- {
- *first = *iter;
- ++first;
- }
- }
-
- ForwardIterator middle = first;
- stencil_iter = stencil;
-
- for(TempIterator iter = temp.begin(); iter != temp.end(); ++iter, ++stencil_iter)
- {
- if(!wrapped_pred(*stencil_iter))
- {
- *first = *iter;
- ++first;
- }
- }
-
- return middle;
-}
-
-
-__thrust_exec_check_disable__
-template<typename DerivedPolicy, typename InputIterator, typename OutputIterator1, typename OutputIterator2, typename Predicate>
-__host__ __device__
- thrust::pair<OutputIterator1,OutputIterator2>
- stable_partition_copy(sequential::execution_policy<DerivedPolicy> &,
- InputIterator first,
- InputIterator last,
- OutputIterator1 out_true,
- OutputIterator2 out_false,
- Predicate pred)
-{
- // wrap pred
- thrust::detail::wrapped_function<
- Predicate,
- bool
- > wrapped_pred(pred);
-
- for(; first != last; ++first)
- {
- if(wrapped_pred(*first))
- {
- *out_true = *first;
- ++out_true;
- } // end if
- else
- {
- *out_false = *first;
- ++out_false;
- } // end else
- }
-
- return thrust::make_pair(out_true, out_false);
-}
-
-
-__thrust_exec_check_disable__
-template<typename DerivedPolicy, typename InputIterator1, typename InputIterator2, typename OutputIterator1, typename OutputIterator2, typename Predicate>
-__host__ __device__
- thrust::pair<OutputIterator1,OutputIterator2>
- stable_partition_copy(sequential::execution_policy<DerivedPolicy> &,
- InputIterator1 first,
- InputIterator1 last,
- InputIterator2 stencil,
- OutputIterator1 out_true,
- OutputIterator2 out_false,
- Predicate pred)
-{
- // wrap pred
- thrust::detail::wrapped_function<
- Predicate,
- bool
- > wrapped_pred(pred);
-
- for(; first != last; ++first, ++stencil)
- {
- if(wrapped_pred(*stencil))
- {
- *out_true = *first;
- ++out_true;
- } // end if
- else
- {
- *out_false = *first;
- ++out_false;
- } // end else
- }
-
- return thrust::make_pair(out_true, out_false);
-}
-
-
-} // end namespace sequential
-} // end namespace detail
-} // end namespace system
-} // end namespace thrust
-
diff --git a/spaces/Caoyunkang/Segment-Any-Anomaly/SAM/CONTRIBUTING.md b/spaces/Caoyunkang/Segment-Any-Anomaly/SAM/CONTRIBUTING.md
deleted file mode 100644
index 263991c9496cf29ed4b99e03a9fb9a38e6bfaf86..0000000000000000000000000000000000000000
--- a/spaces/Caoyunkang/Segment-Any-Anomaly/SAM/CONTRIBUTING.md
+++ /dev/null
@@ -1,31 +0,0 @@
-# Contributing to segment-anything
-We want to make contributing to this project as easy and transparent as
-possible.
-
-## Pull Requests
-We actively welcome your pull requests.
-
-1. Fork the repo and create your branch from `main`.
-2. If you've added code that should be tested, add tests.
-3. If you've changed APIs, update the documentation.
-4. Ensure the test suite passes.
-5. Make sure your code lints, using the `linter.sh` script in the project's root directory. Linting requires `black==23.*`, `isort==5.12.0`, `flake8`, and `mypy`.
-6. If you haven't already, complete the Contributor License Agreement ("CLA").
-
-## Contributor License Agreement ("CLA")
-In order to accept your pull request, we need you to submit a CLA. You only need
-to do this once to work on any of Facebook's open source projects.
-
-Complete your CLA here:
-
-## Issues
-We use GitHub issues to track public bugs. Please ensure your description is
-clear and has sufficient instructions to be able to reproduce the issue.
-
-Facebook has a [bounty program](https://www.facebook.com/whitehat/) for the safe
-disclosure of security bugs. In those cases, please go through the process
-outlined on that page and do not file a public issue.
-
-## License
-By contributing to segment-anything, you agree that your contributions will be licensed
-under the LICENSE file in the root directory of this source tree.
diff --git a/spaces/ChristopherMarais/Andrew_AI-BB_classification-beta/mysite/andrew_alpha/__init__.py b/spaces/ChristopherMarais/Andrew_AI-BB_classification-beta/mysite/andrew_alpha/__init__.py
deleted file mode 100644
index e69de29bb2d1d6434b8b29ae775ad8c2e48c5391..0000000000000000000000000000000000000000
diff --git a/spaces/Cletrason/Cletrason-toad-in-the-mario-movie/trainer_seq2seq.py b/spaces/Cletrason/Cletrason-toad-in-the-mario-movie/trainer_seq2seq.py
deleted file mode 100644
index be0ad33b89f345dae3a85c0ad286981c4bed0b62..0000000000000000000000000000000000000000
--- a/spaces/Cletrason/Cletrason-toad-in-the-mario-movie/trainer_seq2seq.py
+++ /dev/null
@@ -1,246 +0,0 @@
-# Copyright 2020 The HuggingFace Team. All rights reserved.
-#
-# Licensed under the Apache License, Version 2.0 (the "License");
-# you may not use this file except in compliance with the License.
-# You may obtain a copy of the License at
-#
-# http://www.apache.org/licenses/LICENSE-2.0
-#
-# Unless required by applicable law or agreed to in writing, software
-# distributed under the License is distributed on an "AS IS" BASIS,
-# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
-# See the License for the specific language governing permissions and
-# limitations under the License.
-
-from typing import Any, Dict, List, Optional, Tuple, Union
-
-import torch
-from torch import nn
-from torch.utils.data import Dataset
-
-from .deepspeed import is_deepspeed_zero3_enabled
-from .trainer import Trainer
-from .trainer_utils import PredictionOutput
-from .utils import logging
-
-
-logger = logging.get_logger(__name__)
-
-
-class Seq2SeqTrainer(Trainer):
- def evaluate(
- self,
- eval_dataset: Optional[Dataset] = None,
- ignore_keys: Optional[List[str]] = None,
- metric_key_prefix: str = "eval",
- **gen_kwargs,
- ) -> Dict[str, float]:
- """
- Run evaluation and returns metrics.
-
- The calling script will be responsible for providing a method to compute metrics, as they are task-dependent
- (pass it to the init `compute_metrics` argument).
-
- You can also subclass and override this method to inject custom behavior.
-
- Args:
- eval_dataset (`Dataset`, *optional*):
- Pass a dataset if you wish to override `self.eval_dataset`. If it is an [`~datasets.Dataset`], columns
- not accepted by the `model.forward()` method are automatically removed. It must implement the `__len__`
- method.
- ignore_keys (`List[str]`, *optional*):
- A list of keys in the output of your model (if it is a dictionary) that should be ignored when
- gathering predictions.
- metric_key_prefix (`str`, *optional*, defaults to `"eval"`):
- An optional prefix to be used as the metrics key prefix. For example the metrics "bleu" will be named
- "eval_bleu" if the prefix is `"eval"` (default)
- max_length (`int`, *optional*):
- The maximum target length to use when predicting with the generate method.
- num_beams (`int`, *optional*):
- Number of beams for beam search that will be used when predicting with the generate method. 1 means no
- beam search.
- gen_kwargs:
- Additional `generate` specific kwargs.
-
- Returns:
- A dictionary containing the evaluation loss and the potential metrics computed from the predictions. The
- dictionary also contains the epoch number which comes from the training state.
- """
-
- gen_kwargs = gen_kwargs.copy()
- if gen_kwargs.get("max_length") is None and gen_kwargs.get("max_new_tokens") is None:
- gen_kwargs["max_length"] = self.args.generation_max_length
- gen_kwargs["num_beams"] = (
- gen_kwargs["num_beams"] if gen_kwargs.get("num_beams") is not None else self.args.generation_num_beams
- )
- self._gen_kwargs = gen_kwargs
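- # Precedence sketch for the defaults above: explicit generate kwargs win;
- # otherwise the TrainingArguments values apply, e.g. (hypothetical)
- # trainer.evaluate(num_beams=4) overrides args.generation_num_beams.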
-
- return super().evaluate(eval_dataset, ignore_keys=ignore_keys, metric_key_prefix=metric_key_prefix)
-
- def predict(
- self,
- test_dataset: Dataset,
- ignore_keys: Optional[List[str]] = None,
- metric_key_prefix: str = "test",
- **gen_kwargs,
- ) -> PredictionOutput:
- """
- Run prediction and returns predictions and potential metrics.
-
- Depending on the dataset and your use case, your test dataset may contain labels. In that case, this method
- will also return metrics, like in `evaluate()`.
-
- Args:
- test_dataset (`Dataset`):
- Dataset to run the predictions on. If it is a [`~datasets.Dataset`], columns not accepted by the
- `model.forward()` method are automatically removed. Has to implement the method `__len__`
- ignore_keys (`List[str]`, *optional*):
- A list of keys in the output of your model (if it is a dictionary) that should be ignored when
- gathering predictions.
- metric_key_prefix (`str`, *optional*, defaults to `"test"`):
- An optional prefix to be used as the metrics key prefix. For example the metrics "bleu" will be named
- "test_bleu" if the prefix is `"test"` (default)
- max_length (`int`, *optional*):
- The maximum target length to use when predicting with the generate method.
- num_beams (`int`, *optional*):
- Number of beams for beam search that will be used when predicting with the generate method. 1 means no
- beam search.
- gen_kwargs:
- Additional `generate` specific kwargs.
-
-
- <Tip>
-
- If your predictions or labels have different sequence lengths (for instance because you're doing dynamic
- padding in a token classification task) the predictions will be padded (on the right) to allow for
- concatenation into one array. The padding index is -100.
-
- </Tip>
-
- Returns: *NamedTuple* A namedtuple with the following keys:
-
- - predictions (`np.ndarray`): The predictions on `test_dataset`.
- - label_ids (`np.ndarray`, *optional*): The labels (if the dataset contained some).
- - metrics (`Dict[str, float]`, *optional*): The potential dictionary of metrics (if the dataset contained
- labels).
- """
-
- gen_kwargs = gen_kwargs.copy()
- if gen_kwargs.get("max_length") is None and gen_kwargs.get("max_new_tokens") is None:
- gen_kwargs["max_length"] = self.args.generation_max_length
- gen_kwargs["num_beams"] = (
- gen_kwargs["num_beams"] if gen_kwargs.get("num_beams") is not None else self.args.generation_num_beams
- )
- self._gen_kwargs = gen_kwargs
-
- return super().predict(test_dataset, ignore_keys=ignore_keys, metric_key_prefix=metric_key_prefix)
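-
- # Usage sketch (hypothetical objects; mirrors the docstring above):
- # trainer = Seq2SeqTrainer(model=model, args=args, train_dataset=train_ds)
- # out = trainer.predict(test_ds, max_length=128, num_beams=4)
- # out.predictions / out.metrics hold padded token ids and optional metrics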
-
- def prediction_step(
- self,
- model: nn.Module,
- inputs: Dict[str, Union[torch.Tensor, Any]],
- prediction_loss_only: bool,
- ignore_keys: Optional[List[str]] = None,
- ) -> Tuple[Optional[float], Optional[torch.Tensor], Optional[torch.Tensor]]:
- """
- Perform an evaluation step on `model` using `inputs`.
-
- Subclass and override to inject custom behavior.
-
- Args:
- model (`nn.Module`):
- The model to evaluate.
- inputs (`Dict[str, Union[torch.Tensor, Any]]`):
- The inputs and targets of the model.
-
- The dictionary will be unpacked before being fed to the model. Most models expect the targets under the
- argument `labels`. Check your model's documentation for all accepted arguments.
- prediction_loss_only (`bool`):
- Whether or not to return the loss only.
-
- Return:
- Tuple[Optional[float], Optional[torch.Tensor], Optional[torch.Tensor]]: A tuple with the loss, logits and
- labels (each being optional).
- """
-
- if not self.args.predict_with_generate or prediction_loss_only:
- return super().prediction_step(
- model, inputs, prediction_loss_only=prediction_loss_only, ignore_keys=ignore_keys
- )
-
- has_labels = "labels" in inputs
- inputs = self._prepare_inputs(inputs)
-
- # XXX: adapt synced_gpus for fairscale as well
- gen_kwargs = self._gen_kwargs.copy()
- if gen_kwargs.get("max_length") is None and gen_kwargs.get("max_new_tokens") is None:
- gen_kwargs["max_length"] = self.model.config.max_length
- gen_kwargs["num_beams"] = (
- gen_kwargs["num_beams"] if gen_kwargs.get("num_beams") is not None else self.model.config.num_beams
- )
- default_synced_gpus = True if is_deepspeed_zero3_enabled() else False
- gen_kwargs["synced_gpus"] = (
- gen_kwargs["synced_gpus"] if gen_kwargs.get("synced_gpus") is not None else default_synced_gpus
- )
-
- # TODO (Joao): the following line is needed to keep a consistent result on SQUAD. Ideally, we should not block
- # users from preparing a dataset with `decoder_input_ids`.
- inputs = {k: v for k, v in inputs.items() if k != "decoder_input_ids"}
- generated_tokens = self.model.generate(**inputs, **gen_kwargs)
-
- # Temporary hack to ensure the generation config is not initialized for each iteration of the evaluation loop
- # TODO: remove this hack when the legacy code that initializes generation_config from a model config is
- # removed in https://github.com/huggingface/transformers/blob/98d88b23f54e5a23e741833f1e973fdf600cc2c5/src/transformers/generation/utils.py#L1183
- if self.model.generation_config._from_model_config:
- self.model.generation_config._from_model_config = False
- # in case the batch is shorter than max length, the output should be padded
- if gen_kwargs.get("max_length") is not None and generated_tokens.shape[-1] < gen_kwargs["max_length"]:
- generated_tokens = self._pad_tensors_to_max_len(generated_tokens, gen_kwargs["max_length"])
- elif gen_kwargs.get("max_new_tokens") is not None and generated_tokens.shape[-1] < (
- gen_kwargs["max_new_tokens"] + 1
- ):
- generated_tokens = self._pad_tensors_to_max_len(generated_tokens, gen_kwargs["max_new_tokens"] + 1)
-
- with torch.no_grad():
- if has_labels:
- with self.compute_loss_context_manager():
- outputs = model(**inputs)
- if self.label_smoother is not None:
- loss = self.label_smoother(outputs, inputs["labels"]).mean().detach()
- else:
- loss = (outputs["loss"] if isinstance(outputs, dict) else outputs[0]).mean().detach()
- else:
- loss = None
-
- if self.args.prediction_loss_only:
- return (loss, None, None)
-
- if has_labels:
- labels = inputs["labels"]
- if gen_kwargs.get("max_length") is not None and labels.shape[-1] < gen_kwargs["max_length"]:
- labels = self._pad_tensors_to_max_len(labels, gen_kwargs["max_length"])
- elif gen_kwargs.get("max_new_tokens") is not None and labels.shape[-1] < (
- gen_kwargs["max_new_tokens"] + 1
- ):
- labels = self._pad_tensors_to_max_len(labels, (gen_kwargs["max_new_tokens"] + 1))
- else:
- labels = None
-
- return (loss, generated_tokens, labels)
-
- def _pad_tensors_to_max_len(self, tensor, max_length):
- if self.tokenizer is not None and hasattr(self.tokenizer, "pad_token_id"):
- # If PAD token is not defined at least EOS token has to be defined
- pad_token_id = (
- self.tokenizer.pad_token_id if self.tokenizer.pad_token_id is not None else self.tokenizer.eos_token_id
- )
- else:
- if self.model.config.pad_token_id is not None:
- pad_token_id = self.model.config.pad_token_id
- else:
- raise ValueError("Pad_token_id must be set in the configuration of the model, in order to pad tensors")
-
- padded_tensor = pad_token_id * torch.ones(
- (tensor.shape[0], max_length), dtype=tensor.dtype, device=tensor.device
- )
- padded_tensor[:, : tensor.shape[-1]] = tensor
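- # e.g. (hypothetical): a (2, 3) tensor padded to max_length=5 keeps its first
- # 3 columns and fills the trailing 2 with pad_token_id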
- return padded_tensor
\ No newline at end of file
diff --git a/spaces/DQChoi/gpt-demo/venv/lib/python3.11/site-packages/fontTools/pens/momentsPen.c b/spaces/DQChoi/gpt-demo/venv/lib/python3.11/site-packages/fontTools/pens/momentsPen.c
deleted file mode 100644
index c62288eb66721af85314bd75c3f98a622c112012..0000000000000000000000000000000000000000
--- a/spaces/DQChoi/gpt-demo/venv/lib/python3.11/site-packages/fontTools/pens/momentsPen.c
+++ /dev/null
@@ -1,10242 +0,0 @@
-/* Generated by Cython 0.29.36 */
-
-/* BEGIN: Cython Metadata
-{
- "distutils": {
- "name": "fontTools.pens.momentsPen",
- "sources": [
- "Lib/fontTools/pens/momentsPen.py"
- ]
- },
- "module_name": "fontTools.pens.momentsPen"
-}
-END: Cython Metadata */
-
-#ifndef PY_SSIZE_T_CLEAN
-#define PY_SSIZE_T_CLEAN
-#endif /* PY_SSIZE_T_CLEAN */
-#include "Python.h"
-#ifndef Py_PYTHON_H
- #error Python headers needed to compile C extensions, please install development version of Python.
-#elif PY_VERSION_HEX < 0x02060000 || (0x03000000 <= PY_VERSION_HEX && PY_VERSION_HEX < 0x03030000)
- #error Cython requires Python 2.6+ or Python 3.3+.
-#else
-#define CYTHON_ABI "0_29_36"
-#define CYTHON_HEX_VERSION 0x001D24F0
-#define CYTHON_FUTURE_DIVISION 1
-#include <stddef.h>
-#ifndef offsetof
- #define offsetof(type, member) ( (size_t) & ((type*)0) -> member )
-#endif
-#if !defined(WIN32) && !defined(MS_WINDOWS)
- #ifndef __stdcall
- #define __stdcall
- #endif
- #ifndef __cdecl
- #define __cdecl
- #endif
- #ifndef __fastcall
- #define __fastcall
- #endif
-#endif
-#ifndef DL_IMPORT
- #define DL_IMPORT(t) t
-#endif
-#ifndef DL_EXPORT
- #define DL_EXPORT(t) t
-#endif
-#define __PYX_COMMA ,
-#ifndef HAVE_LONG_LONG
- #if PY_VERSION_HEX >= 0x02070000
- #define HAVE_LONG_LONG
- #endif
-#endif
-#ifndef PY_LONG_LONG
- #define PY_LONG_LONG LONG_LONG
-#endif
-#ifndef Py_HUGE_VAL
- #define Py_HUGE_VAL HUGE_VAL
-#endif
-#ifdef PYPY_VERSION
- #define CYTHON_COMPILING_IN_PYPY 1
- #define CYTHON_COMPILING_IN_PYSTON 0
- #define CYTHON_COMPILING_IN_CPYTHON 0
- #define CYTHON_COMPILING_IN_NOGIL 0
- #undef CYTHON_USE_TYPE_SLOTS
- #define CYTHON_USE_TYPE_SLOTS 0
- #undef CYTHON_USE_PYTYPE_LOOKUP
- #define CYTHON_USE_PYTYPE_LOOKUP 0
- #if PY_VERSION_HEX < 0x03050000
- #undef CYTHON_USE_ASYNC_SLOTS
- #define CYTHON_USE_ASYNC_SLOTS 0
- #elif !defined(CYTHON_USE_ASYNC_SLOTS)
- #define CYTHON_USE_ASYNC_SLOTS 1
- #endif
- #undef CYTHON_USE_PYLIST_INTERNALS
- #define CYTHON_USE_PYLIST_INTERNALS 0
- #undef CYTHON_USE_UNICODE_INTERNALS
- #define CYTHON_USE_UNICODE_INTERNALS 0
- #undef CYTHON_USE_UNICODE_WRITER
- #define CYTHON_USE_UNICODE_WRITER 0
- #undef CYTHON_USE_PYLONG_INTERNALS
- #define CYTHON_USE_PYLONG_INTERNALS 0
- #undef CYTHON_AVOID_BORROWED_REFS
- #define CYTHON_AVOID_BORROWED_REFS 1
- #undef CYTHON_ASSUME_SAFE_MACROS
- #define CYTHON_ASSUME_SAFE_MACROS 0
- #undef CYTHON_UNPACK_METHODS
- #define CYTHON_UNPACK_METHODS 0
- #undef CYTHON_FAST_THREAD_STATE
- #define CYTHON_FAST_THREAD_STATE 0
- #undef CYTHON_FAST_PYCALL
- #define CYTHON_FAST_PYCALL 0
- #if PY_VERSION_HEX < 0x03090000
- #undef CYTHON_PEP489_MULTI_PHASE_INIT
- #define CYTHON_PEP489_MULTI_PHASE_INIT 0
- #elif !defined(CYTHON_PEP489_MULTI_PHASE_INIT)
- #define CYTHON_PEP489_MULTI_PHASE_INIT 1
- #endif
- #undef CYTHON_USE_TP_FINALIZE
- #define CYTHON_USE_TP_FINALIZE (PY_VERSION_HEX >= 0x030400a1 && PYPY_VERSION_NUM >= 0x07030C00)
- #undef CYTHON_USE_DICT_VERSIONS
- #define CYTHON_USE_DICT_VERSIONS 0
- #undef CYTHON_USE_EXC_INFO_STACK
- #define CYTHON_USE_EXC_INFO_STACK 0
- #ifndef CYTHON_UPDATE_DESCRIPTOR_DOC
- #define CYTHON_UPDATE_DESCRIPTOR_DOC 0
- #endif
-#elif defined(PYSTON_VERSION)
- #define CYTHON_COMPILING_IN_PYPY 0
- #define CYTHON_COMPILING_IN_PYSTON 1
- #define CYTHON_COMPILING_IN_CPYTHON 0
- #define CYTHON_COMPILING_IN_NOGIL 0
- #ifndef CYTHON_USE_TYPE_SLOTS
- #define CYTHON_USE_TYPE_SLOTS 1
- #endif
- #undef CYTHON_USE_PYTYPE_LOOKUP
- #define CYTHON_USE_PYTYPE_LOOKUP 0
- #undef CYTHON_USE_ASYNC_SLOTS
- #define CYTHON_USE_ASYNC_SLOTS 0
- #undef CYTHON_USE_PYLIST_INTERNALS
- #define CYTHON_USE_PYLIST_INTERNALS 0
- #ifndef CYTHON_USE_UNICODE_INTERNALS
- #define CYTHON_USE_UNICODE_INTERNALS 1
- #endif
- #undef CYTHON_USE_UNICODE_WRITER
- #define CYTHON_USE_UNICODE_WRITER 0
- #undef CYTHON_USE_PYLONG_INTERNALS
- #define CYTHON_USE_PYLONG_INTERNALS 0
- #ifndef CYTHON_AVOID_BORROWED_REFS
- #define CYTHON_AVOID_BORROWED_REFS 0
- #endif
- #ifndef CYTHON_ASSUME_SAFE_MACROS
- #define CYTHON_ASSUME_SAFE_MACROS 1
- #endif
- #ifndef CYTHON_UNPACK_METHODS
- #define CYTHON_UNPACK_METHODS 1
- #endif
- #undef CYTHON_FAST_THREAD_STATE
- #define CYTHON_FAST_THREAD_STATE 0
- #undef CYTHON_FAST_PYCALL
- #define CYTHON_FAST_PYCALL 0
- #undef CYTHON_PEP489_MULTI_PHASE_INIT
- #define CYTHON_PEP489_MULTI_PHASE_INIT 0
- #undef CYTHON_USE_TP_FINALIZE
- #define CYTHON_USE_TP_FINALIZE 0
- #undef CYTHON_USE_DICT_VERSIONS
- #define CYTHON_USE_DICT_VERSIONS 0
- #undef CYTHON_USE_EXC_INFO_STACK
- #define CYTHON_USE_EXC_INFO_STACK 0
- #ifndef CYTHON_UPDATE_DESCRIPTOR_DOC
- #define CYTHON_UPDATE_DESCRIPTOR_DOC 0
- #endif
-#elif defined(PY_NOGIL)
- #define CYTHON_COMPILING_IN_PYPY 0
- #define CYTHON_COMPILING_IN_PYSTON 0
- #define CYTHON_COMPILING_IN_CPYTHON 0
- #define CYTHON_COMPILING_IN_NOGIL 1
- #ifndef CYTHON_USE_TYPE_SLOTS
- #define CYTHON_USE_TYPE_SLOTS 1
- #endif
- #undef CYTHON_USE_PYTYPE_LOOKUP
- #define CYTHON_USE_PYTYPE_LOOKUP 0
- #ifndef CYTHON_USE_ASYNC_SLOTS
- #define CYTHON_USE_ASYNC_SLOTS 1
- #endif
- #undef CYTHON_USE_PYLIST_INTERNALS
- #define CYTHON_USE_PYLIST_INTERNALS 0
- #ifndef CYTHON_USE_UNICODE_INTERNALS
- #define CYTHON_USE_UNICODE_INTERNALS 1
- #endif
- #undef CYTHON_USE_UNICODE_WRITER
- #define CYTHON_USE_UNICODE_WRITER 0
- #undef CYTHON_USE_PYLONG_INTERNALS
- #define CYTHON_USE_PYLONG_INTERNALS 0
- #ifndef CYTHON_AVOID_BORROWED_REFS
- #define CYTHON_AVOID_BORROWED_REFS 0
- #endif
- #ifndef CYTHON_ASSUME_SAFE_MACROS
- #define CYTHON_ASSUME_SAFE_MACROS 1
- #endif
- #ifndef CYTHON_UNPACK_METHODS
- #define CYTHON_UNPACK_METHODS 1
- #endif
- #undef CYTHON_FAST_THREAD_STATE
- #define CYTHON_FAST_THREAD_STATE 0
- #undef CYTHON_FAST_PYCALL
- #define CYTHON_FAST_PYCALL 0
- #ifndef CYTHON_PEP489_MULTI_PHASE_INIT
- #define CYTHON_PEP489_MULTI_PHASE_INIT 1
- #endif
- #ifndef CYTHON_USE_TP_FINALIZE
- #define CYTHON_USE_TP_FINALIZE 1
- #endif
- #undef CYTHON_USE_DICT_VERSIONS
- #define CYTHON_USE_DICT_VERSIONS 0
- #undef CYTHON_USE_EXC_INFO_STACK
- #define CYTHON_USE_EXC_INFO_STACK 0
-#else
- #define CYTHON_COMPILING_IN_PYPY 0
- #define CYTHON_COMPILING_IN_PYSTON 0
- #define CYTHON_COMPILING_IN_CPYTHON 1
- #define CYTHON_COMPILING_IN_NOGIL 0
- #ifndef CYTHON_USE_TYPE_SLOTS
- #define CYTHON_USE_TYPE_SLOTS 1
- #endif
- #if PY_VERSION_HEX < 0x02070000
- #undef CYTHON_USE_PYTYPE_LOOKUP
- #define CYTHON_USE_PYTYPE_LOOKUP 0
- #elif !defined(CYTHON_USE_PYTYPE_LOOKUP)
- #define CYTHON_USE_PYTYPE_LOOKUP 1
- #endif
- #if PY_MAJOR_VERSION < 3
- #undef CYTHON_USE_ASYNC_SLOTS
- #define CYTHON_USE_ASYNC_SLOTS 0
- #elif !defined(CYTHON_USE_ASYNC_SLOTS)
- #define CYTHON_USE_ASYNC_SLOTS 1
- #endif
- #if PY_VERSION_HEX < 0x02070000
- #undef CYTHON_USE_PYLONG_INTERNALS
- #define CYTHON_USE_PYLONG_INTERNALS 0
- #elif !defined(CYTHON_USE_PYLONG_INTERNALS)
- #define CYTHON_USE_PYLONG_INTERNALS (PY_VERSION_HEX < 0x030C00A5)
- #endif
- #ifndef CYTHON_USE_PYLIST_INTERNALS
- #define CYTHON_USE_PYLIST_INTERNALS 1
- #endif
- #ifndef CYTHON_USE_UNICODE_INTERNALS
- #define CYTHON_USE_UNICODE_INTERNALS 1
- #endif
- #if PY_VERSION_HEX < 0x030300F0 || PY_VERSION_HEX >= 0x030B00A2
- #undef CYTHON_USE_UNICODE_WRITER
- #define CYTHON_USE_UNICODE_WRITER 0
- #elif !defined(CYTHON_USE_UNICODE_WRITER)
- #define CYTHON_USE_UNICODE_WRITER 1
- #endif
- #ifndef CYTHON_AVOID_BORROWED_REFS
- #define CYTHON_AVOID_BORROWED_REFS 0
- #endif
- #ifndef CYTHON_ASSUME_SAFE_MACROS
- #define CYTHON_ASSUME_SAFE_MACROS 1
- #endif
- #ifndef CYTHON_UNPACK_METHODS
- #define CYTHON_UNPACK_METHODS 1
- #endif
- #if PY_VERSION_HEX >= 0x030B00A4
- #undef CYTHON_FAST_THREAD_STATE
- #define CYTHON_FAST_THREAD_STATE 0
- #elif !defined(CYTHON_FAST_THREAD_STATE)
- #define CYTHON_FAST_THREAD_STATE 1
- #endif
- #ifndef CYTHON_FAST_PYCALL
- #define CYTHON_FAST_PYCALL (PY_VERSION_HEX < 0x030A0000)
- #endif
- #ifndef CYTHON_PEP489_MULTI_PHASE_INIT
- #define CYTHON_PEP489_MULTI_PHASE_INIT (PY_VERSION_HEX >= 0x03050000)
- #endif
- #ifndef CYTHON_USE_TP_FINALIZE
- #define CYTHON_USE_TP_FINALIZE (PY_VERSION_HEX >= 0x030400a1)
- #endif
- #ifndef CYTHON_USE_DICT_VERSIONS
- #define CYTHON_USE_DICT_VERSIONS ((PY_VERSION_HEX >= 0x030600B1) && (PY_VERSION_HEX < 0x030C00A5))
- #endif
- #if PY_VERSION_HEX >= 0x030B00A4
- #undef CYTHON_USE_EXC_INFO_STACK
- #define CYTHON_USE_EXC_INFO_STACK 0
- #elif !defined(CYTHON_USE_EXC_INFO_STACK)
- #define CYTHON_USE_EXC_INFO_STACK (PY_VERSION_HEX >= 0x030700A3)
- #endif
- #ifndef CYTHON_UPDATE_DESCRIPTOR_DOC
- #define CYTHON_UPDATE_DESCRIPTOR_DOC 1
- #endif
-#endif
-#if !defined(CYTHON_FAST_PYCCALL)
-#define CYTHON_FAST_PYCCALL (CYTHON_FAST_PYCALL && PY_VERSION_HEX >= 0x030600B1)
-#endif
-#if CYTHON_USE_PYLONG_INTERNALS
- #if PY_MAJOR_VERSION < 3
- #include "longintrepr.h"
- #endif
- #undef SHIFT
- #undef BASE
- #undef MASK
- #ifdef SIZEOF_VOID_P
- enum { __pyx_check_sizeof_voidp = 1 / (int)(SIZEOF_VOID_P == sizeof(void*)) };
- #endif
-#endif
-#ifndef __has_attribute
- #define __has_attribute(x) 0
-#endif
-#ifndef __has_cpp_attribute
- #define __has_cpp_attribute(x) 0
-#endif
-#ifndef CYTHON_RESTRICT
- #if defined(__GNUC__)
- #define CYTHON_RESTRICT __restrict__
- #elif defined(_MSC_VER) && _MSC_VER >= 1400
- #define CYTHON_RESTRICT __restrict
- #elif defined (__STDC_VERSION__) && __STDC_VERSION__ >= 199901L
- #define CYTHON_RESTRICT restrict
- #else
- #define CYTHON_RESTRICT
- #endif
-#endif
-#ifndef CYTHON_UNUSED
-# if defined(__GNUC__)
-# if !(defined(__cplusplus)) || (__GNUC__ > 3 || (__GNUC__ == 3 && __GNUC_MINOR__ >= 4))
-# define CYTHON_UNUSED __attribute__ ((__unused__))
-# else
-# define CYTHON_UNUSED
-# endif
-# elif defined(__ICC) || (defined(__INTEL_COMPILER) && !defined(_MSC_VER))
-# define CYTHON_UNUSED __attribute__ ((__unused__))
-# else
-# define CYTHON_UNUSED
-# endif
-#endif
-#ifndef CYTHON_MAYBE_UNUSED_VAR
-# if defined(__cplusplus)
- template<class T> void CYTHON_MAYBE_UNUSED_VAR( const T& ) { }
-# else
-# define CYTHON_MAYBE_UNUSED_VAR(x) (void)(x)
-# endif
-#endif
-#ifndef CYTHON_NCP_UNUSED
-# if CYTHON_COMPILING_IN_CPYTHON
-# define CYTHON_NCP_UNUSED
-# else
-# define CYTHON_NCP_UNUSED CYTHON_UNUSED
-# endif
-#endif
-#define __Pyx_void_to_None(void_result) ((void)(void_result), Py_INCREF(Py_None), Py_None)
-#ifdef _MSC_VER
- #ifndef _MSC_STDINT_H_
- #if _MSC_VER < 1300
- typedef unsigned char uint8_t;
- typedef unsigned int uint32_t;
- #else
- typedef unsigned __int8 uint8_t;
- typedef unsigned __int32 uint32_t;
- #endif
- #endif
-#else
- #include <stdint.h>
-#endif
-#ifndef CYTHON_FALLTHROUGH
- #if defined(__cplusplus) && __cplusplus >= 201103L
- #if __has_cpp_attribute(fallthrough)
- #define CYTHON_FALLTHROUGH [[fallthrough]]
- #elif __has_cpp_attribute(clang::fallthrough)
- #define CYTHON_FALLTHROUGH [[clang::fallthrough]]
- #elif __has_cpp_attribute(gnu::fallthrough)
- #define CYTHON_FALLTHROUGH [[gnu::fallthrough]]
- #endif
- #endif
- #ifndef CYTHON_FALLTHROUGH
- #if __has_attribute(fallthrough)
- #define CYTHON_FALLTHROUGH __attribute__((fallthrough))
- #else
- #define CYTHON_FALLTHROUGH
- #endif
- #endif
- #if defined(__clang__ ) && defined(__apple_build_version__)
- #if __apple_build_version__ < 7000000
- #undef CYTHON_FALLTHROUGH
- #define CYTHON_FALLTHROUGH
- #endif
- #endif
-#endif
-
-#ifndef CYTHON_INLINE
- #if defined(__clang__)
- #define CYTHON_INLINE __inline__ __attribute__ ((__unused__))
- #elif defined(__GNUC__)
- #define CYTHON_INLINE __inline__
- #elif defined(_MSC_VER)
- #define CYTHON_INLINE __inline
- #elif defined (__STDC_VERSION__) && __STDC_VERSION__ >= 199901L
- #define CYTHON_INLINE inline
- #else
- #define CYTHON_INLINE
- #endif
-#endif
-
-#define __PYX_BUILD_PY_SSIZE_T "n"
-#define CYTHON_FORMAT_SSIZE_T "z"
-#if PY_MAJOR_VERSION < 3
- #define __Pyx_BUILTIN_MODULE_NAME "__builtin__"
- #define __Pyx_PyCode_New(a, k, l, s, f, code, c, n, v, fv, cell, fn, name, fline, lnos)\
- PyCode_New(a+k, l, s, f, code, c, n, v, fv, cell, fn, name, fline, lnos)
- #define __Pyx_DefaultClassType PyClass_Type
-#else
- #define __Pyx_BUILTIN_MODULE_NAME "builtins"
- #define __Pyx_DefaultClassType PyType_Type
-#if PY_VERSION_HEX >= 0x030B00A1
- static CYTHON_INLINE PyCodeObject* __Pyx_PyCode_New(int a, int k, int l, int s, int f,
- PyObject *code, PyObject *c, PyObject* n, PyObject *v,
- PyObject *fv, PyObject *cell, PyObject* fn,
- PyObject *name, int fline, PyObject *lnos) {
- PyObject *kwds=NULL, *argcount=NULL, *posonlyargcount=NULL, *kwonlyargcount=NULL;
- PyObject *nlocals=NULL, *stacksize=NULL, *flags=NULL, *replace=NULL, *call_result=NULL, *empty=NULL;
- const char *fn_cstr=NULL;
- const char *name_cstr=NULL;
- PyCodeObject* co=NULL;
- PyObject *type, *value, *traceback;
- PyErr_Fetch(&type, &value, &traceback);
- if (!(kwds=PyDict_New())) goto end;
- if (!(argcount=PyLong_FromLong(a))) goto end;
- if (PyDict_SetItemString(kwds, "co_argcount", argcount) != 0) goto end;
- if (!(posonlyargcount=PyLong_FromLong(0))) goto end;
- if (PyDict_SetItemString(kwds, "co_posonlyargcount", posonlyargcount) != 0) goto end;
- if (!(kwonlyargcount=PyLong_FromLong(k))) goto end;
- if (PyDict_SetItemString(kwds, "co_kwonlyargcount", kwonlyargcount) != 0) goto end;
- if (!(nlocals=PyLong_FromLong(l))) goto end;
- if (PyDict_SetItemString(kwds, "co_nlocals", nlocals) != 0) goto end;
- if (!(stacksize=PyLong_FromLong(s))) goto end;
- if (PyDict_SetItemString(kwds, "co_stacksize", stacksize) != 0) goto end;
- if (!(flags=PyLong_FromLong(f))) goto end;
- if (PyDict_SetItemString(kwds, "co_flags", flags) != 0) goto end;
- if (PyDict_SetItemString(kwds, "co_code", code) != 0) goto end;
- if (PyDict_SetItemString(kwds, "co_consts", c) != 0) goto end;
- if (PyDict_SetItemString(kwds, "co_names", n) != 0) goto end;
- if (PyDict_SetItemString(kwds, "co_varnames", v) != 0) goto end;
- if (PyDict_SetItemString(kwds, "co_freevars", fv) != 0) goto end;
- if (PyDict_SetItemString(kwds, "co_cellvars", cell) != 0) goto end;
- if (PyDict_SetItemString(kwds, "co_linetable", lnos) != 0) goto end;
- if (!(fn_cstr=PyUnicode_AsUTF8AndSize(fn, NULL))) goto end;
- if (!(name_cstr=PyUnicode_AsUTF8AndSize(name, NULL))) goto end;
- if (!(co = PyCode_NewEmpty(fn_cstr, name_cstr, fline))) goto end;
- if (!(replace = PyObject_GetAttrString((PyObject*)co, "replace"))) goto cleanup_code_too;
- if (!(empty = PyTuple_New(0))) goto cleanup_code_too; // unfortunately __pyx_empty_tuple isn't available here
- if (!(call_result = PyObject_Call(replace, empty, kwds))) goto cleanup_code_too;
- Py_XDECREF((PyObject*)co);
- co = (PyCodeObject*)call_result;
- call_result = NULL;
- if (0) {
- cleanup_code_too:
- Py_XDECREF((PyObject*)co);
- co = NULL;
- }
- end:
- Py_XDECREF(kwds);
- Py_XDECREF(argcount);
- Py_XDECREF(posonlyargcount);
- Py_XDECREF(kwonlyargcount);
- Py_XDECREF(nlocals);
- Py_XDECREF(stacksize);
- Py_XDECREF(replace);
- Py_XDECREF(call_result);
- Py_XDECREF(empty);
- if (type) {
- PyErr_Restore(type, value, traceback);
- }
- return co;
- }
-#else
- #define __Pyx_PyCode_New(a, k, l, s, f, code, c, n, v, fv, cell, fn, name, fline, lnos)\
- PyCode_New(a, k, l, s, f, code, c, n, v, fv, cell, fn, name, fline, lnos)
-#endif
- #define __Pyx_DefaultClassType PyType_Type
-#endif
-#if PY_VERSION_HEX >= 0x030900F0 && !CYTHON_COMPILING_IN_PYPY
- #define __Pyx_PyObject_GC_IsFinalized(o) PyObject_GC_IsFinalized(o)
-#else
- #define __Pyx_PyObject_GC_IsFinalized(o) _PyGC_FINALIZED(o)
-#endif
-#ifndef Py_TPFLAGS_CHECKTYPES
- #define Py_TPFLAGS_CHECKTYPES 0
-#endif
-#ifndef Py_TPFLAGS_HAVE_INDEX
- #define Py_TPFLAGS_HAVE_INDEX 0
-#endif
-#ifndef Py_TPFLAGS_HAVE_NEWBUFFER
- #define Py_TPFLAGS_HAVE_NEWBUFFER 0
-#endif
-#ifndef Py_TPFLAGS_HAVE_FINALIZE
- #define Py_TPFLAGS_HAVE_FINALIZE 0
-#endif
-#ifndef METH_STACKLESS
- #define METH_STACKLESS 0
-#endif
-#if PY_VERSION_HEX <= 0x030700A3 || !defined(METH_FASTCALL)
- #ifndef METH_FASTCALL
- #define METH_FASTCALL 0x80
- #endif
- typedef PyObject *(*__Pyx_PyCFunctionFast) (PyObject *self, PyObject *const *args, Py_ssize_t nargs);
- typedef PyObject *(*__Pyx_PyCFunctionFastWithKeywords) (PyObject *self, PyObject *const *args,
- Py_ssize_t nargs, PyObject *kwnames);
-#else
- #define __Pyx_PyCFunctionFast _PyCFunctionFast
- #define __Pyx_PyCFunctionFastWithKeywords _PyCFunctionFastWithKeywords
-#endif
-#if CYTHON_FAST_PYCCALL
-#define __Pyx_PyFastCFunction_Check(func)\
- ((PyCFunction_Check(func) && (METH_FASTCALL == (PyCFunction_GET_FLAGS(func) & ~(METH_CLASS | METH_STATIC | METH_COEXIST | METH_KEYWORDS | METH_STACKLESS)))))
-#else
-#define __Pyx_PyFastCFunction_Check(func) 0
-#endif
-#if CYTHON_COMPILING_IN_PYPY && !defined(PyObject_Malloc)
- #define PyObject_Malloc(s) PyMem_Malloc(s)
- #define PyObject_Free(p) PyMem_Free(p)
- #define PyObject_Realloc(p) PyMem_Realloc(p)
-#endif
-#if CYTHON_COMPILING_IN_CPYTHON && PY_VERSION_HEX < 0x030400A1
- #define PyMem_RawMalloc(n) PyMem_Malloc(n)
- #define PyMem_RawRealloc(p, n) PyMem_Realloc(p, n)
- #define PyMem_RawFree(p) PyMem_Free(p)
-#endif
-#if CYTHON_COMPILING_IN_PYSTON
- #define __Pyx_PyCode_HasFreeVars(co) PyCode_HasFreeVars(co)
- #define __Pyx_PyFrame_SetLineNumber(frame, lineno) PyFrame_SetLineNumber(frame, lineno)
-#else
- #define __Pyx_PyCode_HasFreeVars(co) (PyCode_GetNumFree(co) > 0)
- #define __Pyx_PyFrame_SetLineNumber(frame, lineno) (frame)->f_lineno = (lineno)
-#endif
-#if !CYTHON_FAST_THREAD_STATE || PY_VERSION_HEX < 0x02070000
- #define __Pyx_PyThreadState_Current PyThreadState_GET()
-#elif PY_VERSION_HEX >= 0x03060000
- #define __Pyx_PyThreadState_Current _PyThreadState_UncheckedGet()
-#elif PY_VERSION_HEX >= 0x03000000
- #define __Pyx_PyThreadState_Current PyThreadState_GET()
-#else
- #define __Pyx_PyThreadState_Current _PyThreadState_Current
-#endif
-#if PY_VERSION_HEX < 0x030700A2 && !defined(PyThread_tss_create) && !defined(Py_tss_NEEDS_INIT)
-#include "pythread.h"
-#define Py_tss_NEEDS_INIT 0
-typedef int Py_tss_t;
-static CYTHON_INLINE int PyThread_tss_create(Py_tss_t *key) {
- *key = PyThread_create_key();
- return 0;
-}
-static CYTHON_INLINE Py_tss_t * PyThread_tss_alloc(void) {
- Py_tss_t *key = (Py_tss_t *)PyObject_Malloc(sizeof(Py_tss_t));
- *key = Py_tss_NEEDS_INIT;
- return key;
-}
-static CYTHON_INLINE void PyThread_tss_free(Py_tss_t *key) {
- PyObject_Free(key);
-}
-static CYTHON_INLINE int PyThread_tss_is_created(Py_tss_t *key) {
- return *key != Py_tss_NEEDS_INIT;
-}
-static CYTHON_INLINE void PyThread_tss_delete(Py_tss_t *key) {
- PyThread_delete_key(*key);
- *key = Py_tss_NEEDS_INIT;
-}
-static CYTHON_INLINE int PyThread_tss_set(Py_tss_t *key, void *value) {
- return PyThread_set_key_value(*key, value);
-}
-static CYTHON_INLINE void * PyThread_tss_get(Py_tss_t *key) {
- return PyThread_get_key_value(*key);
-}
-#endif
-#if CYTHON_COMPILING_IN_CPYTHON || defined(_PyDict_NewPresized)
-#define __Pyx_PyDict_NewPresized(n) ((n <= 8) ? PyDict_New() : _PyDict_NewPresized(n))
-#else
-#define __Pyx_PyDict_NewPresized(n) PyDict_New()
-#endif
-#if PY_MAJOR_VERSION >= 3 || CYTHON_FUTURE_DIVISION
- #define __Pyx_PyNumber_Divide(x,y) PyNumber_TrueDivide(x,y)
- #define __Pyx_PyNumber_InPlaceDivide(x,y) PyNumber_InPlaceTrueDivide(x,y)
-#else
- #define __Pyx_PyNumber_Divide(x,y) PyNumber_Divide(x,y)
- #define __Pyx_PyNumber_InPlaceDivide(x,y) PyNumber_InPlaceDivide(x,y)
-#endif
-#if CYTHON_COMPILING_IN_CPYTHON && PY_VERSION_HEX >= 0x030500A1 && CYTHON_USE_UNICODE_INTERNALS
-#define __Pyx_PyDict_GetItemStr(dict, name) _PyDict_GetItem_KnownHash(dict, name, ((PyASCIIObject *) name)->hash)
-#else
-#define __Pyx_PyDict_GetItemStr(dict, name) PyDict_GetItem(dict, name)
-#endif
-#if PY_VERSION_HEX > 0x03030000 && defined(PyUnicode_KIND)
- #define CYTHON_PEP393_ENABLED 1
- #if PY_VERSION_HEX >= 0x030C0000
- #define __Pyx_PyUnicode_READY(op) (0)
- #else
- #define __Pyx_PyUnicode_READY(op) (likely(PyUnicode_IS_READY(op)) ?\
- 0 : _PyUnicode_Ready((PyObject *)(op)))
- #endif
- #define __Pyx_PyUnicode_GET_LENGTH(u) PyUnicode_GET_LENGTH(u)
- #define __Pyx_PyUnicode_READ_CHAR(u, i) PyUnicode_READ_CHAR(u, i)
- #define __Pyx_PyUnicode_MAX_CHAR_VALUE(u) PyUnicode_MAX_CHAR_VALUE(u)
- #define __Pyx_PyUnicode_KIND(u) PyUnicode_KIND(u)
- #define __Pyx_PyUnicode_DATA(u) PyUnicode_DATA(u)
- #define __Pyx_PyUnicode_READ(k, d, i) PyUnicode_READ(k, d, i)
- #define __Pyx_PyUnicode_WRITE(k, d, i, ch) PyUnicode_WRITE(k, d, i, ch)
- #if PY_VERSION_HEX >= 0x030C0000
- #define __Pyx_PyUnicode_IS_TRUE(u) (0 != PyUnicode_GET_LENGTH(u))
- #else
- #if CYTHON_COMPILING_IN_CPYTHON && PY_VERSION_HEX >= 0x03090000
- #define __Pyx_PyUnicode_IS_TRUE(u) (0 != (likely(PyUnicode_IS_READY(u)) ? PyUnicode_GET_LENGTH(u) : ((PyCompactUnicodeObject *)(u))->wstr_length))
- #else
- #define __Pyx_PyUnicode_IS_TRUE(u) (0 != (likely(PyUnicode_IS_READY(u)) ? PyUnicode_GET_LENGTH(u) : PyUnicode_GET_SIZE(u)))
- #endif
- #endif
-#else
- #define CYTHON_PEP393_ENABLED 0
- #define PyUnicode_1BYTE_KIND 1
- #define PyUnicode_2BYTE_KIND 2
- #define PyUnicode_4BYTE_KIND 4
- #define __Pyx_PyUnicode_READY(op) (0)
- #define __Pyx_PyUnicode_GET_LENGTH(u) PyUnicode_GET_SIZE(u)
- #define __Pyx_PyUnicode_READ_CHAR(u, i) ((Py_UCS4)(PyUnicode_AS_UNICODE(u)[i]))
- #define __Pyx_PyUnicode_MAX_CHAR_VALUE(u) ((sizeof(Py_UNICODE) == 2) ? 65535 : 1114111)
- #define __Pyx_PyUnicode_KIND(u) (sizeof(Py_UNICODE))
- #define __Pyx_PyUnicode_DATA(u) ((void*)PyUnicode_AS_UNICODE(u))
- #define __Pyx_PyUnicode_READ(k, d, i) ((void)(k), (Py_UCS4)(((Py_UNICODE*)d)[i]))
- #define __Pyx_PyUnicode_WRITE(k, d, i, ch) (((void)(k)), ((Py_UNICODE*)d)[i] = ch)
- #define __Pyx_PyUnicode_IS_TRUE(u) (0 != PyUnicode_GET_SIZE(u))
-#endif
-#if CYTHON_COMPILING_IN_PYPY
- #define __Pyx_PyUnicode_Concat(a, b) PyNumber_Add(a, b)
- #define __Pyx_PyUnicode_ConcatSafe(a, b) PyNumber_Add(a, b)
-#else
- #define __Pyx_PyUnicode_Concat(a, b) PyUnicode_Concat(a, b)
- #define __Pyx_PyUnicode_ConcatSafe(a, b) ((unlikely((a) == Py_None) || unlikely((b) == Py_None)) ?\
- PyNumber_Add(a, b) : __Pyx_PyUnicode_Concat(a, b))
-#endif
-#if CYTHON_COMPILING_IN_PYPY && !defined(PyUnicode_Contains)
- #define PyUnicode_Contains(u, s) PySequence_Contains(u, s)
-#endif
-#if CYTHON_COMPILING_IN_PYPY && !defined(PyByteArray_Check)
- #define PyByteArray_Check(obj) PyObject_TypeCheck(obj, &PyByteArray_Type)
-#endif
-#if CYTHON_COMPILING_IN_PYPY && !defined(PyObject_Format)
- #define PyObject_Format(obj, fmt) PyObject_CallMethod(obj, "__format__", "O", fmt)
-#endif
-#define __Pyx_PyString_FormatSafe(a, b) ((unlikely((a) == Py_None || (PyString_Check(b) && !PyString_CheckExact(b)))) ? PyNumber_Remainder(a, b) : __Pyx_PyString_Format(a, b))
-#define __Pyx_PyUnicode_FormatSafe(a, b) ((unlikely((a) == Py_None || (PyUnicode_Check(b) && !PyUnicode_CheckExact(b)))) ? PyNumber_Remainder(a, b) : PyUnicode_Format(a, b))
-#if PY_MAJOR_VERSION >= 3
- #define __Pyx_PyString_Format(a, b) PyUnicode_Format(a, b)
-#else
- #define __Pyx_PyString_Format(a, b) PyString_Format(a, b)
-#endif
-#if PY_MAJOR_VERSION < 3 && !defined(PyObject_ASCII)
- #define PyObject_ASCII(o) PyObject_Repr(o)
-#endif
-#if PY_MAJOR_VERSION >= 3
- #define PyBaseString_Type PyUnicode_Type
- #define PyStringObject PyUnicodeObject
- #define PyString_Type PyUnicode_Type
- #define PyString_Check PyUnicode_Check
- #define PyString_CheckExact PyUnicode_CheckExact
-#ifndef PyObject_Unicode
- #define PyObject_Unicode PyObject_Str
-#endif
-#endif
-#if PY_MAJOR_VERSION >= 3
- #define __Pyx_PyBaseString_Check(obj) PyUnicode_Check(obj)
- #define __Pyx_PyBaseString_CheckExact(obj) PyUnicode_CheckExact(obj)
-#else
- #define __Pyx_PyBaseString_Check(obj) (PyString_Check(obj) || PyUnicode_Check(obj))
- #define __Pyx_PyBaseString_CheckExact(obj) (PyString_CheckExact(obj) || PyUnicode_CheckExact(obj))
-#endif
-#ifndef PySet_CheckExact
- #define PySet_CheckExact(obj) (Py_TYPE(obj) == &PySet_Type)
-#endif
-#if PY_VERSION_HEX >= 0x030900A4
- #define __Pyx_SET_REFCNT(obj, refcnt) Py_SET_REFCNT(obj, refcnt)
- #define __Pyx_SET_SIZE(obj, size) Py_SET_SIZE(obj, size)
-#else
- #define __Pyx_SET_REFCNT(obj, refcnt) Py_REFCNT(obj) = (refcnt)
- #define __Pyx_SET_SIZE(obj, size) Py_SIZE(obj) = (size)
-#endif
-#if CYTHON_ASSUME_SAFE_MACROS
- #define __Pyx_PySequence_SIZE(seq) Py_SIZE(seq)
-#else
- #define __Pyx_PySequence_SIZE(seq) PySequence_Size(seq)
-#endif
-#if PY_MAJOR_VERSION >= 3
- #define PyIntObject PyLongObject
- #define PyInt_Type PyLong_Type
- #define PyInt_Check(op) PyLong_Check(op)
- #define PyInt_CheckExact(op) PyLong_CheckExact(op)
- #define PyInt_FromString PyLong_FromString
- #define PyInt_FromUnicode PyLong_FromUnicode
- #define PyInt_FromLong PyLong_FromLong
- #define PyInt_FromSize_t PyLong_FromSize_t
- #define PyInt_FromSsize_t PyLong_FromSsize_t
- #define PyInt_AsLong PyLong_AsLong
- #define PyInt_AS_LONG PyLong_AS_LONG
- #define PyInt_AsSsize_t PyLong_AsSsize_t
- #define PyInt_AsUnsignedLongMask PyLong_AsUnsignedLongMask
- #define PyInt_AsUnsignedLongLongMask PyLong_AsUnsignedLongLongMask
- #define PyNumber_Int PyNumber_Long
-#endif
-#if PY_MAJOR_VERSION >= 3
- #define PyBoolObject PyLongObject
-#endif
-#if PY_MAJOR_VERSION >= 3 && CYTHON_COMPILING_IN_PYPY
- #ifndef PyUnicode_InternFromString
- #define PyUnicode_InternFromString(s) PyUnicode_FromString(s)
- #endif
-#endif
-#if PY_VERSION_HEX < 0x030200A4
- typedef long Py_hash_t;
- #define __Pyx_PyInt_FromHash_t PyInt_FromLong
- #define __Pyx_PyInt_AsHash_t __Pyx_PyIndex_AsHash_t
-#else
- #define __Pyx_PyInt_FromHash_t PyInt_FromSsize_t
- #define __Pyx_PyInt_AsHash_t __Pyx_PyIndex_AsSsize_t
-#endif
-#if PY_MAJOR_VERSION >= 3
- #define __Pyx_PyMethod_New(func, self, klass) ((self) ? ((void)(klass), PyMethod_New(func, self)) : __Pyx_NewRef(func))
-#else
- #define __Pyx_PyMethod_New(func, self, klass) PyMethod_New(func, self, klass)
-#endif
-#if CYTHON_USE_ASYNC_SLOTS
- #if PY_VERSION_HEX >= 0x030500B1
- #define __Pyx_PyAsyncMethodsStruct PyAsyncMethods
- #define __Pyx_PyType_AsAsync(obj) (Py_TYPE(obj)->tp_as_async)
- #else
- #define __Pyx_PyType_AsAsync(obj) ((__Pyx_PyAsyncMethodsStruct*) (Py_TYPE(obj)->tp_reserved))
- #endif
-#else
- #define __Pyx_PyType_AsAsync(obj) NULL
-#endif
-#ifndef __Pyx_PyAsyncMethodsStruct
- typedef struct {
- unaryfunc am_await;
- unaryfunc am_aiter;
- unaryfunc am_anext;
- } __Pyx_PyAsyncMethodsStruct;
-#endif
-
-#if defined(_WIN32) || defined(WIN32) || defined(MS_WINDOWS)
- #if !defined(_USE_MATH_DEFINES)
- #define _USE_MATH_DEFINES
- #endif
-#endif
-#include <math.h>
-#ifdef NAN
-#define __PYX_NAN() ((float) NAN)
-#else
-static CYTHON_INLINE float __PYX_NAN() {
- float value;
- memset(&value, 0xFF, sizeof(value));
- return value;
-}
-#endif
-#if defined(__CYGWIN__) && defined(_LDBL_EQ_DBL)
-#define __Pyx_truncl trunc
-#else
-#define __Pyx_truncl truncl
-#endif
-
-#define __PYX_MARK_ERR_POS(f_index, lineno) \
- { __pyx_filename = __pyx_f[f_index]; (void)__pyx_filename; __pyx_lineno = lineno; (void)__pyx_lineno; __pyx_clineno = __LINE__; (void)__pyx_clineno; }
-#define __PYX_ERR(f_index, lineno, Ln_error) \
- { __PYX_MARK_ERR_POS(f_index, lineno) goto Ln_error; }
-
-#ifndef __PYX_EXTERN_C
- #ifdef __cplusplus
- #define __PYX_EXTERN_C extern "C"
- #else
- #define __PYX_EXTERN_C extern
- #endif
-#endif
-
-#define __PYX_HAVE__fontTools__pens__momentsPen
-#define __PYX_HAVE_API__fontTools__pens__momentsPen
-/* Early includes */
-#ifdef _OPENMP
-#include <omp.h>
-#endif /* _OPENMP */
-
-#if defined(PYREX_WITHOUT_ASSERTIONS) && !defined(CYTHON_WITHOUT_ASSERTIONS)
-#define CYTHON_WITHOUT_ASSERTIONS
-#endif
-
-typedef struct {PyObject **p; const char *s; const Py_ssize_t n; const char* encoding;
- const char is_unicode; const char is_str; const char intern; } __Pyx_StringTabEntry;
-
-#define __PYX_DEFAULT_STRING_ENCODING_IS_ASCII 0
-#define __PYX_DEFAULT_STRING_ENCODING_IS_UTF8 0
-#define __PYX_DEFAULT_STRING_ENCODING_IS_DEFAULT (PY_MAJOR_VERSION >= 3 && __PYX_DEFAULT_STRING_ENCODING_IS_UTF8)
-#define __PYX_DEFAULT_STRING_ENCODING ""
-#define __Pyx_PyObject_FromString __Pyx_PyBytes_FromString
-#define __Pyx_PyObject_FromStringAndSize __Pyx_PyBytes_FromStringAndSize
-#define __Pyx_uchar_cast(c) ((unsigned char)c)
-#define __Pyx_long_cast(x) ((long)x)
-#define __Pyx_fits_Py_ssize_t(v, type, is_signed) (\
- (sizeof(type) < sizeof(Py_ssize_t)) ||\
- (sizeof(type) > sizeof(Py_ssize_t) &&\
- likely(v < (type)PY_SSIZE_T_MAX ||\
- v == (type)PY_SSIZE_T_MAX) &&\
- (!is_signed || likely(v > (type)PY_SSIZE_T_MIN ||\
- v == (type)PY_SSIZE_T_MIN))) ||\
- (sizeof(type) == sizeof(Py_ssize_t) &&\
- (is_signed || likely(v < (type)PY_SSIZE_T_MAX ||\
- v == (type)PY_SSIZE_T_MAX))) )
-static CYTHON_INLINE int __Pyx_is_valid_index(Py_ssize_t i, Py_ssize_t limit) {
- return (size_t) i < (size_t) limit;
-}
-#if defined (__cplusplus) && __cplusplus >= 201103L
- #include <cstdlib>
- #define __Pyx_sst_abs(value) std::abs(value)
-#elif SIZEOF_INT >= SIZEOF_SIZE_T
- #define __Pyx_sst_abs(value) abs(value)
-#elif SIZEOF_LONG >= SIZEOF_SIZE_T
- #define __Pyx_sst_abs(value) labs(value)
-#elif defined (_MSC_VER)
- #define __Pyx_sst_abs(value) ((Py_ssize_t)_abs64(value))
-#elif defined (__STDC_VERSION__) && __STDC_VERSION__ >= 199901L
- #define __Pyx_sst_abs(value) llabs(value)
-#elif defined (__GNUC__)
- #define __Pyx_sst_abs(value) __builtin_llabs(value)
-#else
- #define __Pyx_sst_abs(value) ((value<0) ? -value : value)
-#endif
-static CYTHON_INLINE const char* __Pyx_PyObject_AsString(PyObject*);
-static CYTHON_INLINE const char* __Pyx_PyObject_AsStringAndSize(PyObject*, Py_ssize_t* length);
-#define __Pyx_PyByteArray_FromString(s) PyByteArray_FromStringAndSize((const char*)s, strlen((const char*)s))
-#define __Pyx_PyByteArray_FromStringAndSize(s, l) PyByteArray_FromStringAndSize((const char*)s, l)
-#define __Pyx_PyBytes_FromString PyBytes_FromString
-#define __Pyx_PyBytes_FromStringAndSize PyBytes_FromStringAndSize
-static CYTHON_INLINE PyObject* __Pyx_PyUnicode_FromString(const char*);
-#if PY_MAJOR_VERSION < 3
- #define __Pyx_PyStr_FromString __Pyx_PyBytes_FromString
- #define __Pyx_PyStr_FromStringAndSize __Pyx_PyBytes_FromStringAndSize
-#else
- #define __Pyx_PyStr_FromString __Pyx_PyUnicode_FromString
- #define __Pyx_PyStr_FromStringAndSize __Pyx_PyUnicode_FromStringAndSize
-#endif
-#define __Pyx_PyBytes_AsWritableString(s) ((char*) PyBytes_AS_STRING(s))
-#define __Pyx_PyBytes_AsWritableSString(s) ((signed char*) PyBytes_AS_STRING(s))
-#define __Pyx_PyBytes_AsWritableUString(s) ((unsigned char*) PyBytes_AS_STRING(s))
-#define __Pyx_PyBytes_AsString(s) ((const char*) PyBytes_AS_STRING(s))
-#define __Pyx_PyBytes_AsSString(s) ((const signed char*) PyBytes_AS_STRING(s))
-#define __Pyx_PyBytes_AsUString(s) ((const unsigned char*) PyBytes_AS_STRING(s))
-#define __Pyx_PyObject_AsWritableString(s) ((char*) __Pyx_PyObject_AsString(s))
-#define __Pyx_PyObject_AsWritableSString(s) ((signed char*) __Pyx_PyObject_AsString(s))
-#define __Pyx_PyObject_AsWritableUString(s) ((unsigned char*) __Pyx_PyObject_AsString(s))
-#define __Pyx_PyObject_AsSString(s) ((const signed char*) __Pyx_PyObject_AsString(s))
-#define __Pyx_PyObject_AsUString(s) ((const unsigned char*) __Pyx_PyObject_AsString(s))
-#define __Pyx_PyObject_FromCString(s) __Pyx_PyObject_FromString((const char*)s)
-#define __Pyx_PyBytes_FromCString(s) __Pyx_PyBytes_FromString((const char*)s)
-#define __Pyx_PyByteArray_FromCString(s) __Pyx_PyByteArray_FromString((const char*)s)
-#define __Pyx_PyStr_FromCString(s) __Pyx_PyStr_FromString((const char*)s)
-#define __Pyx_PyUnicode_FromCString(s) __Pyx_PyUnicode_FromString((const char*)s)
-static CYTHON_INLINE size_t __Pyx_Py_UNICODE_strlen(const Py_UNICODE *u) {
- const Py_UNICODE *u_end = u;
- while (*u_end++) ;
- return (size_t)(u_end - u - 1);
-}
-#define __Pyx_PyUnicode_FromUnicode(u) PyUnicode_FromUnicode(u, __Pyx_Py_UNICODE_strlen(u))
-#define __Pyx_PyUnicode_FromUnicodeAndLength PyUnicode_FromUnicode
-#define __Pyx_PyUnicode_AsUnicode PyUnicode_AsUnicode
-#define __Pyx_NewRef(obj) (Py_INCREF(obj), obj)
-#define __Pyx_Owned_Py_None(b) __Pyx_NewRef(Py_None)
-static CYTHON_INLINE PyObject * __Pyx_PyBool_FromLong(long b);
-static CYTHON_INLINE int __Pyx_PyObject_IsTrue(PyObject*);
-static CYTHON_INLINE int __Pyx_PyObject_IsTrueAndDecref(PyObject*);
-static CYTHON_INLINE PyObject* __Pyx_PyNumber_IntOrLong(PyObject* x);
-#define __Pyx_PySequence_Tuple(obj)\
- (likely(PyTuple_CheckExact(obj)) ? __Pyx_NewRef(obj) : PySequence_Tuple(obj))
-static CYTHON_INLINE Py_ssize_t __Pyx_PyIndex_AsSsize_t(PyObject*);
-static CYTHON_INLINE PyObject * __Pyx_PyInt_FromSize_t(size_t);
-static CYTHON_INLINE Py_hash_t __Pyx_PyIndex_AsHash_t(PyObject*);
-#if CYTHON_ASSUME_SAFE_MACROS
-#define __pyx_PyFloat_AsDouble(x) (PyFloat_CheckExact(x) ? PyFloat_AS_DOUBLE(x) : PyFloat_AsDouble(x))
-#else
-#define __pyx_PyFloat_AsDouble(x) PyFloat_AsDouble(x)
-#endif
-#define __pyx_PyFloat_AsFloat(x) ((float) __pyx_PyFloat_AsDouble(x))
-#if PY_MAJOR_VERSION >= 3
-#define __Pyx_PyNumber_Int(x) (PyLong_CheckExact(x) ? __Pyx_NewRef(x) : PyNumber_Long(x))
-#else
-#define __Pyx_PyNumber_Int(x) (PyInt_CheckExact(x) ? __Pyx_NewRef(x) : PyNumber_Int(x))
-#endif
-#define __Pyx_PyNumber_Float(x) (PyFloat_CheckExact(x) ? __Pyx_NewRef(x) : PyNumber_Float(x))
-#if PY_MAJOR_VERSION < 3 && __PYX_DEFAULT_STRING_ENCODING_IS_ASCII
-static int __Pyx_sys_getdefaultencoding_not_ascii;
-static int __Pyx_init_sys_getdefaultencoding_params(void) {
- PyObject* sys;
- PyObject* default_encoding = NULL;
- PyObject* ascii_chars_u = NULL;
- PyObject* ascii_chars_b = NULL;
- const char* default_encoding_c;
- sys = PyImport_ImportModule("sys");
- if (!sys) goto bad;
- default_encoding = PyObject_CallMethod(sys, (char*) "getdefaultencoding", NULL);
- Py_DECREF(sys);
- if (!default_encoding) goto bad;
- default_encoding_c = PyBytes_AsString(default_encoding);
- if (!default_encoding_c) goto bad;
- if (strcmp(default_encoding_c, "ascii") == 0) {
- __Pyx_sys_getdefaultencoding_not_ascii = 0;
- } else {
- char ascii_chars[128];
- int c;
- for (c = 0; c < 128; c++) {
- ascii_chars[c] = c;
- }
- __Pyx_sys_getdefaultencoding_not_ascii = 1;
- ascii_chars_u = PyUnicode_DecodeASCII(ascii_chars, 128, NULL);
- if (!ascii_chars_u) goto bad;
- ascii_chars_b = PyUnicode_AsEncodedString(ascii_chars_u, default_encoding_c, NULL);
- if (!ascii_chars_b || !PyBytes_Check(ascii_chars_b) || memcmp(ascii_chars, PyBytes_AS_STRING(ascii_chars_b), 128) != 0) {
- PyErr_Format(
- PyExc_ValueError,
- "This module compiled with c_string_encoding=ascii, but default encoding '%.200s' is not a superset of ascii.",
- default_encoding_c);
- goto bad;
- }
- Py_DECREF(ascii_chars_u);
- Py_DECREF(ascii_chars_b);
- }
- Py_DECREF(default_encoding);
- return 0;
-bad:
- Py_XDECREF(default_encoding);
- Py_XDECREF(ascii_chars_u);
- Py_XDECREF(ascii_chars_b);
- return -1;
-}
-#endif
-#if __PYX_DEFAULT_STRING_ENCODING_IS_DEFAULT && PY_MAJOR_VERSION >= 3
-#define __Pyx_PyUnicode_FromStringAndSize(c_str, size) PyUnicode_DecodeUTF8(c_str, size, NULL)
-#else
-#define __Pyx_PyUnicode_FromStringAndSize(c_str, size) PyUnicode_Decode(c_str, size, __PYX_DEFAULT_STRING_ENCODING, NULL)
-#if __PYX_DEFAULT_STRING_ENCODING_IS_DEFAULT
-static char* __PYX_DEFAULT_STRING_ENCODING;
-static int __Pyx_init_sys_getdefaultencoding_params(void) {
- PyObject* sys;
- PyObject* default_encoding = NULL;
- char* default_encoding_c;
- sys = PyImport_ImportModule("sys");
- if (!sys) goto bad;
- default_encoding = PyObject_CallMethod(sys, (char*) (const char*) "getdefaultencoding", NULL);
- Py_DECREF(sys);
- if (!default_encoding) goto bad;
- default_encoding_c = PyBytes_AsString(default_encoding);
- if (!default_encoding_c) goto bad;
- __PYX_DEFAULT_STRING_ENCODING = (char*) malloc(strlen(default_encoding_c) + 1);
- if (!__PYX_DEFAULT_STRING_ENCODING) goto bad;
- strcpy(__PYX_DEFAULT_STRING_ENCODING, default_encoding_c);
- Py_DECREF(default_encoding);
- return 0;
-bad:
- Py_XDECREF(default_encoding);
- return -1;
-}
-#endif
-#endif
-
-
-/* Test for GCC > 2.95 */
-#if defined(__GNUC__) && (__GNUC__ > 2 || (__GNUC__ == 2 && (__GNUC_MINOR__ > 95)))
- #define likely(x) __builtin_expect(!!(x), 1)
- #define unlikely(x) __builtin_expect(!!(x), 0)
-#else /* !__GNUC__ or GCC < 2.95 */
- #define likely(x) (x)
- #define unlikely(x) (x)
-#endif /* __GNUC__ */
-static CYTHON_INLINE void __Pyx_pretend_to_initialize(void* ptr) { (void)ptr; }
-
-static PyObject *__pyx_m = NULL;
-static PyObject *__pyx_d;
-static PyObject *__pyx_b;
-static PyObject *__pyx_cython_runtime = NULL;
-static PyObject *__pyx_empty_tuple;
-static PyObject *__pyx_empty_bytes;
-static PyObject *__pyx_empty_unicode;
-static int __pyx_lineno;
-static int __pyx_clineno = 0;
-static const char * __pyx_cfilenm= __FILE__;
-static const char *__pyx_filename;
-
-
-static const char *__pyx_f[] = {
- "Lib/fontTools/pens/momentsPen.py",
-};
-
-/*--- Type declarations ---*/
-
-/* --- Runtime support code (head) --- */
-/* Refnanny.proto */
-#ifndef CYTHON_REFNANNY
- #define CYTHON_REFNANNY 0
-#endif
-#if CYTHON_REFNANNY
- typedef struct {
- void (*INCREF)(void*, PyObject*, int);
- void (*DECREF)(void*, PyObject*, int);
- void (*GOTREF)(void*, PyObject*, int);
- void (*GIVEREF)(void*, PyObject*, int);
- void* (*SetupContext)(const char*, int, const char*);
- void (*FinishContext)(void**);
- } __Pyx_RefNannyAPIStruct;
- static __Pyx_RefNannyAPIStruct *__Pyx_RefNanny = NULL;
- static __Pyx_RefNannyAPIStruct *__Pyx_RefNannyImportAPI(const char *modname);
- #define __Pyx_RefNannyDeclarations void *__pyx_refnanny = NULL;
-#ifdef WITH_THREAD
- #define __Pyx_RefNannySetupContext(name, acquire_gil)\
- if (acquire_gil) {\
- PyGILState_STATE __pyx_gilstate_save = PyGILState_Ensure();\
- __pyx_refnanny = __Pyx_RefNanny->SetupContext((name), __LINE__, __FILE__);\
- PyGILState_Release(__pyx_gilstate_save);\
- } else {\
- __pyx_refnanny = __Pyx_RefNanny->SetupContext((name), __LINE__, __FILE__);\
- }
-#else
- #define __Pyx_RefNannySetupContext(name, acquire_gil)\
- __pyx_refnanny = __Pyx_RefNanny->SetupContext((name), __LINE__, __FILE__)
-#endif
- #define __Pyx_RefNannyFinishContext()\
- __Pyx_RefNanny->FinishContext(&__pyx_refnanny)
- #define __Pyx_INCREF(r) __Pyx_RefNanny->INCREF(__pyx_refnanny, (PyObject *)(r), __LINE__)
- #define __Pyx_DECREF(r) __Pyx_RefNanny->DECREF(__pyx_refnanny, (PyObject *)(r), __LINE__)
- #define __Pyx_GOTREF(r) __Pyx_RefNanny->GOTREF(__pyx_refnanny, (PyObject *)(r), __LINE__)
- #define __Pyx_GIVEREF(r) __Pyx_RefNanny->GIVEREF(__pyx_refnanny, (PyObject *)(r), __LINE__)
- #define __Pyx_XINCREF(r) do { if((r) != NULL) {__Pyx_INCREF(r); }} while(0)
- #define __Pyx_XDECREF(r) do { if((r) != NULL) {__Pyx_DECREF(r); }} while(0)
- #define __Pyx_XGOTREF(r) do { if((r) != NULL) {__Pyx_GOTREF(r); }} while(0)
- #define __Pyx_XGIVEREF(r) do { if((r) != NULL) {__Pyx_GIVEREF(r);}} while(0)
-#else
- #define __Pyx_RefNannyDeclarations
- #define __Pyx_RefNannySetupContext(name, acquire_gil)
- #define __Pyx_RefNannyFinishContext()
- #define __Pyx_INCREF(r) Py_INCREF(r)
- #define __Pyx_DECREF(r) Py_DECREF(r)
- #define __Pyx_GOTREF(r)
- #define __Pyx_GIVEREF(r)
- #define __Pyx_XINCREF(r) Py_XINCREF(r)
- #define __Pyx_XDECREF(r) Py_XDECREF(r)
- #define __Pyx_XGOTREF(r)
- #define __Pyx_XGIVEREF(r)
-#endif
-#define __Pyx_XDECREF_SET(r, v) do {\
- PyObject *tmp = (PyObject *) r;\
- r = v; __Pyx_XDECREF(tmp);\
- } while (0)
-#define __Pyx_DECREF_SET(r, v) do {\
- PyObject *tmp = (PyObject *) r;\
- r = v; __Pyx_DECREF(tmp);\
- } while (0)
-#define __Pyx_CLEAR(r) do { PyObject* tmp = ((PyObject*)(r)); r = NULL; __Pyx_DECREF(tmp);} while(0)
-#define __Pyx_XCLEAR(r) do { if((r) != NULL) {PyObject* tmp = ((PyObject*)(r)); r = NULL; __Pyx_DECREF(tmp);}} while(0)
-
-/* PyObjectGetAttrStr.proto */
-#if CYTHON_USE_TYPE_SLOTS
-static CYTHON_INLINE PyObject* __Pyx_PyObject_GetAttrStr(PyObject* obj, PyObject* attr_name);
-#else
-#define __Pyx_PyObject_GetAttrStr(o,n) PyObject_GetAttr(o,n)
-#endif
-
-/* GetBuiltinName.proto */
-static PyObject *__Pyx_GetBuiltinName(PyObject *name);
-
-/* RaiseDoubleKeywords.proto */
-static void __Pyx_RaiseDoubleKeywordsError(const char* func_name, PyObject* kw_name);
-
-/* ParseKeywords.proto */
-static int __Pyx_ParseOptionalKeywords(PyObject *kwds, PyObject **argnames[],\
- PyObject *kwds2, PyObject *values[], Py_ssize_t num_pos_args,\
- const char* function_name);
-
-/* RaiseArgTupleInvalid.proto */
-static void __Pyx_RaiseArgtupleInvalid(const char* func_name, int exact,
- Py_ssize_t num_min, Py_ssize_t num_max, Py_ssize_t num_found);
-
-/* PyDictVersioning.proto */
-#if CYTHON_USE_DICT_VERSIONS && CYTHON_USE_TYPE_SLOTS
-#define __PYX_DICT_VERSION_INIT ((PY_UINT64_T) -1)
-#define __PYX_GET_DICT_VERSION(dict) (((PyDictObject*)(dict))->ma_version_tag)
-#define __PYX_UPDATE_DICT_CACHE(dict, value, cache_var, version_var)\
- (version_var) = __PYX_GET_DICT_VERSION(dict);\
- (cache_var) = (value);
-#define __PYX_PY_DICT_LOOKUP_IF_MODIFIED(VAR, DICT, LOOKUP) {\
- static PY_UINT64_T __pyx_dict_version = 0;\
- static PyObject *__pyx_dict_cached_value = NULL;\
- if (likely(__PYX_GET_DICT_VERSION(DICT) == __pyx_dict_version)) {\
- (VAR) = __pyx_dict_cached_value;\
- } else {\
- (VAR) = __pyx_dict_cached_value = (LOOKUP);\
- __pyx_dict_version = __PYX_GET_DICT_VERSION(DICT);\
- }\
-}
-static CYTHON_INLINE PY_UINT64_T __Pyx_get_tp_dict_version(PyObject *obj);
-static CYTHON_INLINE PY_UINT64_T __Pyx_get_object_dict_version(PyObject *obj);
-static CYTHON_INLINE int __Pyx_object_dict_version_matches(PyObject* obj, PY_UINT64_T tp_dict_version, PY_UINT64_T obj_dict_version);
-#else
-#define __PYX_GET_DICT_VERSION(dict) (0)
-#define __PYX_UPDATE_DICT_CACHE(dict, value, cache_var, version_var)
-#define __PYX_PY_DICT_LOOKUP_IF_MODIFIED(VAR, DICT, LOOKUP) (VAR) = (LOOKUP);
-#endif
-
-/* GetModuleGlobalName.proto */
-#if CYTHON_USE_DICT_VERSIONS
-#define __Pyx_GetModuleGlobalName(var, name) do {\
- static PY_UINT64_T __pyx_dict_version = 0;\
- static PyObject *__pyx_dict_cached_value = NULL;\
- (var) = (likely(__pyx_dict_version == __PYX_GET_DICT_VERSION(__pyx_d))) ?\
- (likely(__pyx_dict_cached_value) ? __Pyx_NewRef(__pyx_dict_cached_value) : __Pyx_GetBuiltinName(name)) :\
- __Pyx__GetModuleGlobalName(name, &__pyx_dict_version, &__pyx_dict_cached_value);\
-} while(0)
-#define __Pyx_GetModuleGlobalNameUncached(var, name) do {\
- PY_UINT64_T __pyx_dict_version;\
- PyObject *__pyx_dict_cached_value;\
- (var) = __Pyx__GetModuleGlobalName(name, &__pyx_dict_version, &__pyx_dict_cached_value);\
-} while(0)
-static PyObject *__Pyx__GetModuleGlobalName(PyObject *name, PY_UINT64_T *dict_version, PyObject **dict_cached_value);
-#else
-#define __Pyx_GetModuleGlobalName(var, name) (var) = __Pyx__GetModuleGlobalName(name)
-#define __Pyx_GetModuleGlobalNameUncached(var, name) (var) = __Pyx__GetModuleGlobalName(name)
-static CYTHON_INLINE PyObject *__Pyx__GetModuleGlobalName(PyObject *name);
-#endif
-
-/* PyFunctionFastCall.proto */
-#if CYTHON_FAST_PYCALL
-#define __Pyx_PyFunction_FastCall(func, args, nargs)\
- __Pyx_PyFunction_FastCallDict((func), (args), (nargs), NULL)
-#if 1 || PY_VERSION_HEX < 0x030600B1
-static PyObject *__Pyx_PyFunction_FastCallDict(PyObject *func, PyObject **args, Py_ssize_t nargs, PyObject *kwargs);
-#else
-#define __Pyx_PyFunction_FastCallDict(func, args, nargs, kwargs) _PyFunction_FastCallDict(func, args, nargs, kwargs)
-#endif
-#define __Pyx_BUILD_ASSERT_EXPR(cond)\
- (sizeof(char [1 - 2*!(cond)]) - 1)
-#ifndef Py_MEMBER_SIZE
-#define Py_MEMBER_SIZE(type, member) sizeof(((type *)0)->member)
-#endif
-#if CYTHON_FAST_PYCALL
- static size_t __pyx_pyframe_localsplus_offset = 0;
- #include "frameobject.h"
-#if PY_VERSION_HEX >= 0x030b00a6
- #ifndef Py_BUILD_CORE
- #define Py_BUILD_CORE 1
- #endif
- #include "internal/pycore_frame.h"
-#endif
- #define __Pxy_PyFrame_Initialize_Offsets()\
- ((void)__Pyx_BUILD_ASSERT_EXPR(sizeof(PyFrameObject) == offsetof(PyFrameObject, f_localsplus) + Py_MEMBER_SIZE(PyFrameObject, f_localsplus)),\
- (void)(__pyx_pyframe_localsplus_offset = ((size_t)PyFrame_Type.tp_basicsize) - Py_MEMBER_SIZE(PyFrameObject, f_localsplus)))
- #define __Pyx_PyFrame_GetLocalsplus(frame)\
- (assert(__pyx_pyframe_localsplus_offset), (PyObject **)(((char *)(frame)) + __pyx_pyframe_localsplus_offset))
-#endif // CYTHON_FAST_PYCALL
-#endif
-
-/* PyCFunctionFastCall.proto */
-#if CYTHON_FAST_PYCCALL
-static CYTHON_INLINE PyObject *__Pyx_PyCFunction_FastCall(PyObject *func, PyObject **args, Py_ssize_t nargs);
-#else
-#define __Pyx_PyCFunction_FastCall(func, args, nargs) (assert(0), NULL)
-#endif
-
-/* PyObjectCall.proto */
-#if CYTHON_COMPILING_IN_CPYTHON
-static CYTHON_INLINE PyObject* __Pyx_PyObject_Call(PyObject *func, PyObject *arg, PyObject *kw);
-#else
-#define __Pyx_PyObject_Call(func, arg, kw) PyObject_Call(func, arg, kw)
-#endif
-
-/* PyObjectSetAttrStr.proto */
-#if CYTHON_USE_TYPE_SLOTS
-#define __Pyx_PyObject_DelAttrStr(o,n) __Pyx_PyObject_SetAttrStr(o, n, NULL)
-static CYTHON_INLINE int __Pyx_PyObject_SetAttrStr(PyObject* obj, PyObject* attr_name, PyObject* value);
-#else
-#define __Pyx_PyObject_DelAttrStr(o,n) PyObject_DelAttr(o,n)
-#define __Pyx_PyObject_SetAttrStr(o,n,v) PyObject_SetAttr(o,n,v)
-#endif
-
-/* PyObjectCallMethO.proto */
-#if CYTHON_COMPILING_IN_CPYTHON
-static CYTHON_INLINE PyObject* __Pyx_PyObject_CallMethO(PyObject *func, PyObject *arg);
-#endif
-
-/* PyObjectCallNoArg.proto */
-#if CYTHON_COMPILING_IN_CPYTHON
-static CYTHON_INLINE PyObject* __Pyx_PyObject_CallNoArg(PyObject *func);
-#else
-#define __Pyx_PyObject_CallNoArg(func) __Pyx_PyObject_Call(func, __pyx_empty_tuple, NULL)
-#endif
-
-/* PyObjectCallOneArg.proto */
-static CYTHON_INLINE PyObject* __Pyx_PyObject_CallOneArg(PyObject *func, PyObject *arg);
-
-/* PyObjectCall2Args.proto */
-static CYTHON_UNUSED PyObject* __Pyx_PyObject_Call2Args(PyObject* function, PyObject* arg1, PyObject* arg2);
-
-/* PyThreadStateGet.proto */
-#if CYTHON_FAST_THREAD_STATE
-#define __Pyx_PyThreadState_declare PyThreadState *__pyx_tstate;
-#define __Pyx_PyThreadState_assign __pyx_tstate = __Pyx_PyThreadState_Current;
-#define __Pyx_PyErr_Occurred() __pyx_tstate->curexc_type
-#else
-#define __Pyx_PyThreadState_declare
-#define __Pyx_PyThreadState_assign
-#define __Pyx_PyErr_Occurred() PyErr_Occurred()
-#endif
-
-/* PyErrFetchRestore.proto */
-#if CYTHON_FAST_THREAD_STATE
-#define __Pyx_PyErr_Clear() __Pyx_ErrRestore(NULL, NULL, NULL)
-#define __Pyx_ErrRestoreWithState(type, value, tb) __Pyx_ErrRestoreInState(PyThreadState_GET(), type, value, tb)
-#define __Pyx_ErrFetchWithState(type, value, tb) __Pyx_ErrFetchInState(PyThreadState_GET(), type, value, tb)
-#define __Pyx_ErrRestore(type, value, tb) __Pyx_ErrRestoreInState(__pyx_tstate, type, value, tb)
-#define __Pyx_ErrFetch(type, value, tb) __Pyx_ErrFetchInState(__pyx_tstate, type, value, tb)
-static CYTHON_INLINE void __Pyx_ErrRestoreInState(PyThreadState *tstate, PyObject *type, PyObject *value, PyObject *tb);
-static CYTHON_INLINE void __Pyx_ErrFetchInState(PyThreadState *tstate, PyObject **type, PyObject **value, PyObject **tb);
-#if CYTHON_COMPILING_IN_CPYTHON
-#define __Pyx_PyErr_SetNone(exc) (Py_INCREF(exc), __Pyx_ErrRestore((exc), NULL, NULL))
-#else
-#define __Pyx_PyErr_SetNone(exc) PyErr_SetNone(exc)
-#endif
-#else
-#define __Pyx_PyErr_Clear() PyErr_Clear()
-#define __Pyx_PyErr_SetNone(exc) PyErr_SetNone(exc)
-#define __Pyx_ErrRestoreWithState(type, value, tb) PyErr_Restore(type, value, tb)
-#define __Pyx_ErrFetchWithState(type, value, tb) PyErr_Fetch(type, value, tb)
-#define __Pyx_ErrRestoreInState(tstate, type, value, tb) PyErr_Restore(type, value, tb)
-#define __Pyx_ErrFetchInState(tstate, type, value, tb) PyErr_Fetch(type, value, tb)
-#define __Pyx_ErrRestore(type, value, tb) PyErr_Restore(type, value, tb)
-#define __Pyx_ErrFetch(type, value, tb) PyErr_Fetch(type, value, tb)
-#endif
-
-/* RaiseException.proto */
-static void __Pyx_Raise(PyObject *type, PyObject *value, PyObject *tb, PyObject *cause);
-
-/* RaiseTooManyValuesToUnpack.proto */
-static CYTHON_INLINE void __Pyx_RaiseTooManyValuesError(Py_ssize_t expected);
-
-/* RaiseNeedMoreValuesToUnpack.proto */
-static CYTHON_INLINE void __Pyx_RaiseNeedMoreValuesError(Py_ssize_t index);
-
-/* IterFinish.proto */
-static CYTHON_INLINE int __Pyx_IterFinish(void);
-
-/* UnpackItemEndCheck.proto */
-static int __Pyx_IternextUnpackEndCheck(PyObject *retval, Py_ssize_t expected);
-
-/* Import.proto */
-static PyObject *__Pyx_Import(PyObject *name, PyObject *from_list, int level);
-
-/* ImportFrom.proto */
-static PyObject* __Pyx_ImportFrom(PyObject* module, PyObject* name);
-
-/* GetTopmostException.proto */
-#if CYTHON_USE_EXC_INFO_STACK
-static _PyErr_StackItem * __Pyx_PyErr_GetTopmostException(PyThreadState *tstate);
-#endif
-
-/* SaveResetException.proto */
-#if CYTHON_FAST_THREAD_STATE
-#define __Pyx_ExceptionSave(type, value, tb) __Pyx__ExceptionSave(__pyx_tstate, type, value, tb)
-static CYTHON_INLINE void __Pyx__ExceptionSave(PyThreadState *tstate, PyObject **type, PyObject **value, PyObject **tb);
-#define __Pyx_ExceptionReset(type, value, tb) __Pyx__ExceptionReset(__pyx_tstate, type, value, tb)
-static CYTHON_INLINE void __Pyx__ExceptionReset(PyThreadState *tstate, PyObject *type, PyObject *value, PyObject *tb);
-#else
-#define __Pyx_ExceptionSave(type, value, tb) PyErr_GetExcInfo(type, value, tb)
-#define __Pyx_ExceptionReset(type, value, tb) PyErr_SetExcInfo(type, value, tb)
-#endif
-
-/* PyErrExceptionMatches.proto */
-#if CYTHON_FAST_THREAD_STATE
-#define __Pyx_PyErr_ExceptionMatches(err) __Pyx_PyErr_ExceptionMatchesInState(__pyx_tstate, err)
-static CYTHON_INLINE int __Pyx_PyErr_ExceptionMatchesInState(PyThreadState* tstate, PyObject* err);
-#else
-#define __Pyx_PyErr_ExceptionMatches(err) PyErr_ExceptionMatches(err)
-#endif
-
-/* GetException.proto */
-#if CYTHON_FAST_THREAD_STATE
-#define __Pyx_GetException(type, value, tb) __Pyx__GetException(__pyx_tstate, type, value, tb)
-static int __Pyx__GetException(PyThreadState *tstate, PyObject **type, PyObject **value, PyObject **tb);
-#else
-static int __Pyx_GetException(PyObject **type, PyObject **value, PyObject **tb);
-#endif
-
-/* CalculateMetaclass.proto */
-static PyObject *__Pyx_CalculateMetaclass(PyTypeObject *metaclass, PyObject *bases);
-
-/* FetchCommonType.proto */
-static PyTypeObject* __Pyx_FetchCommonType(PyTypeObject* type);
-
-/* CythonFunctionShared.proto */
-#define __Pyx_CyFunction_USED 1
-#define __Pyx_CYFUNCTION_STATICMETHOD 0x01
-#define __Pyx_CYFUNCTION_CLASSMETHOD 0x02
-#define __Pyx_CYFUNCTION_CCLASS 0x04
-#define __Pyx_CyFunction_GetClosure(f)\
- (((__pyx_CyFunctionObject *) (f))->func_closure)
-#define __Pyx_CyFunction_GetClassObj(f)\
- (((__pyx_CyFunctionObject *) (f))->func_classobj)
-#define __Pyx_CyFunction_Defaults(type, f)\
- ((type *)(((__pyx_CyFunctionObject *) (f))->defaults))
-#define __Pyx_CyFunction_SetDefaultsGetter(f, g)\
- ((__pyx_CyFunctionObject *) (f))->defaults_getter = (g)
-typedef struct {
- PyCFunctionObject func;
-#if PY_VERSION_HEX < 0x030500A0
- PyObject *func_weakreflist;
-#endif
- PyObject *func_dict;
- PyObject *func_name;
- PyObject *func_qualname;
- PyObject *func_doc;
- PyObject *func_globals;
- PyObject *func_code;
- PyObject *func_closure;
- PyObject *func_classobj;
- void *defaults;
- int defaults_pyobjects;
- size_t defaults_size; // used by FusedFunction for copying defaults
- int flags;
- PyObject *defaults_tuple;
- PyObject *defaults_kwdict;
- PyObject *(*defaults_getter)(PyObject *);
- PyObject *func_annotations;
-} __pyx_CyFunctionObject;
-static PyTypeObject *__pyx_CyFunctionType = 0;
-#define __Pyx_CyFunction_Check(obj) (__Pyx_TypeCheck(obj, __pyx_CyFunctionType))
-static PyObject *__Pyx_CyFunction_Init(__pyx_CyFunctionObject* op, PyMethodDef *ml,
- int flags, PyObject* qualname,
- PyObject *self,
- PyObject *module, PyObject *globals,
- PyObject* code);
-static CYTHON_INLINE void *__Pyx_CyFunction_InitDefaults(PyObject *m,
- size_t size,
- int pyobjects);
-static CYTHON_INLINE void __Pyx_CyFunction_SetDefaultsTuple(PyObject *m,
- PyObject *tuple);
-static CYTHON_INLINE void __Pyx_CyFunction_SetDefaultsKwDict(PyObject *m,
- PyObject *dict);
-static CYTHON_INLINE void __Pyx_CyFunction_SetAnnotationsDict(PyObject *m,
- PyObject *dict);
-static int __pyx_CyFunction_init(void);
-
-/* CythonFunction.proto */
-static PyObject *__Pyx_CyFunction_New(PyMethodDef *ml,
- int flags, PyObject* qualname,
- PyObject *closure,
- PyObject *module, PyObject *globals,
- PyObject* code);
-
-/* SetNameInClass.proto */
-#if CYTHON_COMPILING_IN_CPYTHON && PY_VERSION_HEX >= 0x030500A1
-#define __Pyx_SetNameInClass(ns, name, value)\
- (likely(PyDict_CheckExact(ns)) ? _PyDict_SetItem_KnownHash(ns, name, value, ((PyASCIIObject *) name)->hash) : PyObject_SetItem(ns, name, value))
-#elif CYTHON_COMPILING_IN_CPYTHON
-#define __Pyx_SetNameInClass(ns, name, value)\
- (likely(PyDict_CheckExact(ns)) ? PyDict_SetItem(ns, name, value) : PyObject_SetItem(ns, name, value))
-#else
-#define __Pyx_SetNameInClass(ns, name, value) PyObject_SetItem(ns, name, value)
-#endif
-
-/* Py3ClassCreate.proto */
-static PyObject *__Pyx_Py3MetaclassPrepare(PyObject *metaclass, PyObject *bases, PyObject *name, PyObject *qualname,
- PyObject *mkw, PyObject *modname, PyObject *doc);
-static PyObject *__Pyx_Py3ClassCreate(PyObject *metaclass, PyObject *name, PyObject *bases, PyObject *dict,
- PyObject *mkw, int calculate_metaclass, int allow_py2_metaclass);
-
-/* IncludeStringH.proto */
-#include <string.h>
-
-/* BytesEquals.proto */
-static CYTHON_INLINE int __Pyx_PyBytes_Equals(PyObject* s1, PyObject* s2, int equals);
-
-/* UnicodeEquals.proto */
-static CYTHON_INLINE int __Pyx_PyUnicode_Equals(PyObject* s1, PyObject* s2, int equals);
-
-/* CLineInTraceback.proto */
-#ifdef CYTHON_CLINE_IN_TRACEBACK
-#define __Pyx_CLineForTraceback(tstate, c_line) (((CYTHON_CLINE_IN_TRACEBACK)) ? c_line : 0)
-#else
-static int __Pyx_CLineForTraceback(PyThreadState *tstate, int c_line);
-#endif
-
-/* CodeObjectCache.proto */
-typedef struct {
- PyCodeObject* code_object;
- int code_line;
-} __Pyx_CodeObjectCacheEntry;
-struct __Pyx_CodeObjectCache {
- int count;
- int max_count;
- __Pyx_CodeObjectCacheEntry* entries;
-};
-static struct __Pyx_CodeObjectCache __pyx_code_cache = {0,0,NULL};
-static int __pyx_bisect_code_objects(__Pyx_CodeObjectCacheEntry* entries, int count, int code_line);
-static PyCodeObject *__pyx_find_code_object(int code_line);
-static void __pyx_insert_code_object(int code_line, PyCodeObject* code_object);
-
-/* AddTraceback.proto */
-static void __Pyx_AddTraceback(const char *funcname, int c_line,
- int py_line, const char *filename);
-
-/* GCCDiagnostics.proto */
-#if defined(__GNUC__) && (__GNUC__ > 4 || (__GNUC__ == 4 && __GNUC_MINOR__ >= 6))
-#define __Pyx_HAS_GCC_DIAGNOSTIC
-#endif
-
-/* CIntToPy.proto */
-static CYTHON_INLINE PyObject* __Pyx_PyInt_From_long(long value);
-
-/* CIntFromPy.proto */
-static CYTHON_INLINE long __Pyx_PyInt_As_long(PyObject *);
-
-/* CIntFromPy.proto */
-static CYTHON_INLINE int __Pyx_PyInt_As_int(PyObject *);
-
-/* FastTypeChecks.proto */
-#if CYTHON_COMPILING_IN_CPYTHON
-#define __Pyx_TypeCheck(obj, type) __Pyx_IsSubtype(Py_TYPE(obj), (PyTypeObject *)type)
-static CYTHON_INLINE int __Pyx_IsSubtype(PyTypeObject *a, PyTypeObject *b);
-static CYTHON_INLINE int __Pyx_PyErr_GivenExceptionMatches(PyObject *err, PyObject *type);
-static CYTHON_INLINE int __Pyx_PyErr_GivenExceptionMatches2(PyObject *err, PyObject *type1, PyObject *type2);
-#else
-#define __Pyx_TypeCheck(obj, type) PyObject_TypeCheck(obj, (PyTypeObject *)type)
-#define __Pyx_PyErr_GivenExceptionMatches(err, type) PyErr_GivenExceptionMatches(err, type)
-#define __Pyx_PyErr_GivenExceptionMatches2(err, type1, type2) (PyErr_GivenExceptionMatches(err, type1) || PyErr_GivenExceptionMatches(err, type2))
-#endif
-#define __Pyx_PyException_Check(obj) __Pyx_TypeCheck(obj, PyExc_Exception)
-
-/* CheckBinaryVersion.proto */
-static int __Pyx_check_binary_version(void);
-
-/* InitStrings.proto */
-static int __Pyx_InitStrings(__Pyx_StringTabEntry *t);
-
-
-/* Module declarations from 'cython' */
-
-/* Module declarations from 'fontTools.pens.momentsPen' */
-#define __Pyx_MODULE_NAME "fontTools.pens.momentsPen"
-extern int __pyx_module_is_main_fontTools__pens__momentsPen;
-int __pyx_module_is_main_fontTools__pens__momentsPen = 0;
-
-/* Implementation of 'fontTools.pens.momentsPen' */
-static PyObject *__pyx_builtin_AttributeError;
-static PyObject *__pyx_builtin_ImportError;
-static const char __pyx_k_x[] = "x";
-static const char __pyx_k_y[] = "y";
-static const char __pyx_k_p0[] = "p0";
-static const char __pyx_k_p1[] = "p1";
-static const char __pyx_k_p2[] = "p2";
-static const char __pyx_k_p3[] = "p3";
-static const char __pyx_k_r0[] = "r0";
-static const char __pyx_k_r1[] = "r1";
-static const char __pyx_k_r2[] = "r2";
-static const char __pyx_k_r3[] = "r3";
-static const char __pyx_k_r4[] = "r4";
-static const char __pyx_k_r5[] = "r5";
-static const char __pyx_k_r6[] = "r6";
-static const char __pyx_k_r7[] = "r7";
-static const char __pyx_k_r8[] = "r8";
-static const char __pyx_k_r9[] = "r9";
-static const char __pyx_k_x0[] = "x0";
-static const char __pyx_k_x1[] = "x1";
-static const char __pyx_k_x2[] = "x2";
-static const char __pyx_k_x3[] = "x3";
-static const char __pyx_k_y0[] = "y0";
-static const char __pyx_k_y1[] = "y1";
-static const char __pyx_k_y2[] = "y2";
-static const char __pyx_k_y3[] = "y3";
-static const char __pyx_k_all[] = "__all__";
-static const char __pyx_k_doc[] = "__doc__";
-static const char __pyx_k_r10[] = "r10";
-static const char __pyx_k_r11[] = "r11";
-static const char __pyx_k_r12[] = "r12";
-static const char __pyx_k_r13[] = "r13";
-static const char __pyx_k_r14[] = "r14";
-static const char __pyx_k_r15[] = "r15";
-static const char __pyx_k_r16[] = "r16";
-static const char __pyx_k_r17[] = "r17";
-static const char __pyx_k_r18[] = "r18";
-static const char __pyx_k_r19[] = "r19";
-static const char __pyx_k_r20[] = "r20";
-static const char __pyx_k_r21[] = "r21";
-static const char __pyx_k_r22[] = "r22";
-static const char __pyx_k_r23[] = "r23";
-static const char __pyx_k_r24[] = "r24";
-static const char __pyx_k_r25[] = "r25";
-static const char __pyx_k_r26[] = "r26";
-static const char __pyx_k_r27[] = "r27";
-static const char __pyx_k_r28[] = "r28";
-static const char __pyx_k_r29[] = "r29";
-static const char __pyx_k_r30[] = "r30";
-static const char __pyx_k_r31[] = "r31";
-static const char __pyx_k_r32[] = "r32";
-static const char __pyx_k_r33[] = "r33";
-static const char __pyx_k_r34[] = "r34";
-static const char __pyx_k_r35[] = "r35";
-static const char __pyx_k_r36[] = "r36";
-static const char __pyx_k_r37[] = "r37";
-static const char __pyx_k_r38[] = "r38";
-static const char __pyx_k_r39[] = "r39";
-static const char __pyx_k_r40[] = "r40";
-static const char __pyx_k_r41[] = "r41";
-static const char __pyx_k_r42[] = "r42";
-static const char __pyx_k_r43[] = "r43";
-static const char __pyx_k_r44[] = "r44";
-static const char __pyx_k_r45[] = "r45";
-static const char __pyx_k_r46[] = "r46";
-static const char __pyx_k_r47[] = "r47";
-static const char __pyx_k_r48[] = "r48";
-static const char __pyx_k_r49[] = "r49";
-static const char __pyx_k_r50[] = "r50";
-static const char __pyx_k_r51[] = "r51";
-static const char __pyx_k_r52[] = "r52";
-static const char __pyx_k_r53[] = "r53";
-static const char __pyx_k_r54[] = "r54";
-static const char __pyx_k_r55[] = "r55";
-static const char __pyx_k_r56[] = "r56";
-static const char __pyx_k_r57[] = "r57";
-static const char __pyx_k_r58[] = "r58";
-static const char __pyx_k_r59[] = "r59";
-static const char __pyx_k_r60[] = "r60";
-static const char __pyx_k_r61[] = "r61";
-static const char __pyx_k_r62[] = "r62";
-static const char __pyx_k_r63[] = "r63";
-static const char __pyx_k_r64[] = "r64";
-static const char __pyx_k_r65[] = "r65";
-static const char __pyx_k_r66[] = "r66";
-static const char __pyx_k_r67[] = "r67";
-static const char __pyx_k_r68[] = "r68";
-static const char __pyx_k_r69[] = "r69";
-static const char __pyx_k_r70[] = "r70";
-static const char __pyx_k_r71[] = "r71";
-static const char __pyx_k_r72[] = "r72";
-static const char __pyx_k_r73[] = "r73";
-static const char __pyx_k_r74[] = "r74";
-static const char __pyx_k_r75[] = "r75";
-static const char __pyx_k_r76[] = "r76";
-static const char __pyx_k_r77[] = "r77";
-static const char __pyx_k_r78[] = "r78";
-static const char __pyx_k_r79[] = "r79";
-static const char __pyx_k_r80[] = "r80";
-static const char __pyx_k_r81[] = "r81";
-static const char __pyx_k_r82[] = "r82";
-static const char __pyx_k_r83[] = "r83";
-static const char __pyx_k_r84[] = "r84";
-static const char __pyx_k_r85[] = "r85";
-static const char __pyx_k_r86[] = "r86";
-static const char __pyx_k_r87[] = "r87";
-static const char __pyx_k_r88[] = "r88";
-static const char __pyx_k_r89[] = "r89";
-static const char __pyx_k_r90[] = "r90";
-static const char __pyx_k_r91[] = "r91";
-static const char __pyx_k_r92[] = "r92";
-static const char __pyx_k_r93[] = "r93";
-static const char __pyx_k_r94[] = "r94";
-static const char __pyx_k_r95[] = "r95";
-static const char __pyx_k_r96[] = "r96";
-static const char __pyx_k_r97[] = "r97";
-static const char __pyx_k_r98[] = "r98";
-static const char __pyx_k_r99[] = "r99";
-static const char __pyx_k_area[] = "area";
-static const char __pyx_k_init[] = "__init__";
-static const char __pyx_k_main[] = "__main__";
-static const char __pyx_k_name[] = "__name__";
-static const char __pyx_k_r100[] = "r100";
-static const char __pyx_k_r101[] = "r101";
-static const char __pyx_k_r102[] = "r102";
-static const char __pyx_k_r103[] = "r103";
-static const char __pyx_k_r104[] = "r104";
-static const char __pyx_k_r105[] = "r105";
-static const char __pyx_k_r106[] = "r106";
-static const char __pyx_k_r107[] = "r107";
-static const char __pyx_k_r108[] = "r108";
-static const char __pyx_k_r109[] = "r109";
-static const char __pyx_k_r110[] = "r110";
-static const char __pyx_k_r111[] = "r111";
-static const char __pyx_k_r112[] = "r112";
-static const char __pyx_k_r113[] = "r113";
-static const char __pyx_k_r114[] = "r114";
-static const char __pyx_k_r115[] = "r115";
-static const char __pyx_k_r116[] = "r116";
-static const char __pyx_k_r117[] = "r117";
-static const char __pyx_k_r118[] = "r118";
-static const char __pyx_k_r119[] = "r119";
-static const char __pyx_k_r120[] = "r120";
-static const char __pyx_k_r121[] = "r121";
-static const char __pyx_k_r122[] = "r122";
-static const char __pyx_k_r123[] = "r123";
-static const char __pyx_k_r124[] = "r124";
-static const char __pyx_k_r125[] = "r125";
-static const char __pyx_k_r126[] = "r126";
-static const char __pyx_k_r127[] = "r127";
-static const char __pyx_k_r128[] = "r128";
-static const char __pyx_k_r129[] = "r129";
-static const char __pyx_k_r130[] = "r130";
-static const char __pyx_k_r131[] = "r131";
-static const char __pyx_k_r132[] = "r132";
-static const char __pyx_k_self[] = "self";
-static const char __pyx_k_test[] = "__test__";
-static const char __pyx_k_cython[] = "cython";
-static const char __pyx_k_import[] = "__import__";
-static const char __pyx_k_lineTo[] = "_lineTo";
-static const char __pyx_k_module[] = "__module__";
-static const char __pyx_k_moveTo[] = "_moveTo";
-static const char __pyx_k_BasePen[] = "BasePen";
-static const char __pyx_k_endPath[] = "_endPath";
-static const char __pyx_k_momentX[] = "momentX";
-static const char __pyx_k_momentY[] = "momentY";
-static const char __pyx_k_prepare[] = "__prepare__";
-static const char __pyx_k_COMPILED[] = "COMPILED";
-static const char __pyx_k_glyphset[] = "glyphset";
-static const char __pyx_k_momentXX[] = "momentXX";
-static const char __pyx_k_momentXY[] = "momentXY";
-static const char __pyx_k_momentYY[] = "momentYY";
-static const char __pyx_k_qualname[] = "__qualname__";
-static const char __pyx_k_closePath[] = "_closePath";
-static const char __pyx_k_metaclass[] = "__metaclass__";
-static const char __pyx_k_MomentsPen[] = "MomentsPen";
-static const char __pyx_k_curveToOne[] = "_curveToOne";
-static const char __pyx_k_ImportError[] = "ImportError";
-static const char __pyx_k_qCurveToOne[] = "_qCurveToOne";
-static const char __pyx_k_printGreenPen[] = "printGreenPen";
-static const char __pyx_k_AttributeError[] = "AttributeError";
-static const char __pyx_k_fontTools_misc[] = "fontTools.misc";
-static const char __pyx_k_getCurrentPoint[] = "_getCurrentPoint";
-static const char __pyx_k_OpenContourError[] = "OpenContourError";
-static const char __pyx_k_MomentsPen___init[] = "MomentsPen.__init__";
-static const char __pyx_k_MomentsPen__lineTo[] = "MomentsPen._lineTo";
-static const char __pyx_k_MomentsPen__moveTo[] = "MomentsPen._moveTo";
-static const char __pyx_k_cline_in_traceback[] = "cline_in_traceback";
-static const char __pyx_k_MomentsPen__endPath[] = "MomentsPen._endPath";
-static const char __pyx_k_MomentsPen__closePath[] = "MomentsPen._closePath";
-static const char __pyx_k_MomentsPen__curveToOne[] = "MomentsPen._curveToOne";
-static const char __pyx_k_MomentsPen__startPoint[] = "_MomentsPen__startPoint";
-static const char __pyx_k_fontTools_misc_symfont[] = "fontTools.misc.symfont";
-static const char __pyx_k_fontTools_pens_basePen[] = "fontTools.pens.basePen";
-static const char __pyx_k_MomentsPen__qCurveToOne[] = "MomentsPen._qCurveToOne";
-static const char __pyx_k_fontTools_pens_momentsPen[] = "fontTools.pens.momentsPen";
-static const char __pyx_k_Green_theorem_is_not_defined_on[] = "Green theorem is not defined on open contours.";
-static const char __pyx_k_Lib_fontTools_pens_momentsPen_py[] = "Lib/fontTools/pens/momentsPen.py";
-static PyObject *__pyx_n_s_AttributeError;
-static PyObject *__pyx_n_s_BasePen;
-static PyObject *__pyx_n_s_COMPILED;
-static PyObject *__pyx_kp_u_Green_theorem_is_not_defined_on;
-static PyObject *__pyx_n_s_ImportError;
-static PyObject *__pyx_kp_s_Lib_fontTools_pens_momentsPen_py;
-static PyObject *__pyx_n_s_MomentsPen;
-static PyObject *__pyx_n_u_MomentsPen;
-static PyObject *__pyx_n_s_MomentsPen___init;
-static PyObject *__pyx_n_s_MomentsPen__closePath;
-static PyObject *__pyx_n_s_MomentsPen__curveToOne;
-static PyObject *__pyx_n_s_MomentsPen__endPath;
-static PyObject *__pyx_n_s_MomentsPen__lineTo;
-static PyObject *__pyx_n_s_MomentsPen__moveTo;
-static PyObject *__pyx_n_s_MomentsPen__qCurveToOne;
-static PyObject *__pyx_n_s_MomentsPen__startPoint;
-static PyObject *__pyx_n_s_OpenContourError;
-static PyObject *__pyx_n_s_all;
-static PyObject *__pyx_n_s_area;
-static PyObject *__pyx_n_u_area;
-static PyObject *__pyx_n_s_cline_in_traceback;
-static PyObject *__pyx_n_s_closePath;
-static PyObject *__pyx_n_s_curveToOne;
-static PyObject *__pyx_n_s_cython;
-static PyObject *__pyx_n_s_doc;
-static PyObject *__pyx_n_s_endPath;
-static PyObject *__pyx_n_s_fontTools_misc;
-static PyObject *__pyx_n_s_fontTools_misc_symfont;
-static PyObject *__pyx_n_s_fontTools_pens_basePen;
-static PyObject *__pyx_n_s_fontTools_pens_momentsPen;
-static PyObject *__pyx_n_s_getCurrentPoint;
-static PyObject *__pyx_n_s_glyphset;
-static PyObject *__pyx_n_s_import;
-static PyObject *__pyx_n_s_init;
-static PyObject *__pyx_n_s_lineTo;
-static PyObject *__pyx_n_s_main;
-static PyObject *__pyx_n_u_main;
-static PyObject *__pyx_n_s_metaclass;
-static PyObject *__pyx_n_s_module;
-static PyObject *__pyx_n_s_momentX;
-static PyObject *__pyx_n_u_momentX;
-static PyObject *__pyx_n_s_momentXX;
-static PyObject *__pyx_n_u_momentXX;
-static PyObject *__pyx_n_s_momentXY;
-static PyObject *__pyx_n_u_momentXY;
-static PyObject *__pyx_n_s_momentY;
-static PyObject *__pyx_n_u_momentY;
-static PyObject *__pyx_n_s_momentYY;
-static PyObject *__pyx_n_u_momentYY;
-static PyObject *__pyx_n_s_moveTo;
-static PyObject *__pyx_n_s_name;
-static PyObject *__pyx_n_s_p0;
-static PyObject *__pyx_n_s_p1;
-static PyObject *__pyx_n_s_p2;
-static PyObject *__pyx_n_s_p3;
-static PyObject *__pyx_n_s_prepare;
-static PyObject *__pyx_n_s_printGreenPen;
-static PyObject *__pyx_n_s_qCurveToOne;
-static PyObject *__pyx_n_s_qualname;
-static PyObject *__pyx_n_s_r0;
-static PyObject *__pyx_n_s_r1;
-static PyObject *__pyx_n_s_r10;
-static PyObject *__pyx_n_s_r100;
-static PyObject *__pyx_n_s_r101;
-static PyObject *__pyx_n_s_r102;
-static PyObject *__pyx_n_s_r103;
-static PyObject *__pyx_n_s_r104;
-static PyObject *__pyx_n_s_r105;
-static PyObject *__pyx_n_s_r106;
-static PyObject *__pyx_n_s_r107;
-static PyObject *__pyx_n_s_r108;
-static PyObject *__pyx_n_s_r109;
-static PyObject *__pyx_n_s_r11;
-static PyObject *__pyx_n_s_r110;
-static PyObject *__pyx_n_s_r111;
-static PyObject *__pyx_n_s_r112;
-static PyObject *__pyx_n_s_r113;
-static PyObject *__pyx_n_s_r114;
-static PyObject *__pyx_n_s_r115;
-static PyObject *__pyx_n_s_r116;
-static PyObject *__pyx_n_s_r117;
-static PyObject *__pyx_n_s_r118;
-static PyObject *__pyx_n_s_r119;
-static PyObject *__pyx_n_s_r12;
-static PyObject *__pyx_n_s_r120;
-static PyObject *__pyx_n_s_r121;
-static PyObject *__pyx_n_s_r122;
-static PyObject *__pyx_n_s_r123;
-static PyObject *__pyx_n_s_r124;
-static PyObject *__pyx_n_s_r125;
-static PyObject *__pyx_n_s_r126;
-static PyObject *__pyx_n_s_r127;
-static PyObject *__pyx_n_s_r128;
-static PyObject *__pyx_n_s_r129;
-static PyObject *__pyx_n_s_r13;
-static PyObject *__pyx_n_s_r130;
-static PyObject *__pyx_n_s_r131;
-static PyObject *__pyx_n_s_r132;
-static PyObject *__pyx_n_s_r14;
-static PyObject *__pyx_n_s_r15;
-static PyObject *__pyx_n_s_r16;
-static PyObject *__pyx_n_s_r17;
-static PyObject *__pyx_n_s_r18;
-static PyObject *__pyx_n_s_r19;
-static PyObject *__pyx_n_s_r2;
-static PyObject *__pyx_n_s_r20;
-static PyObject *__pyx_n_s_r21;
-static PyObject *__pyx_n_s_r22;
-static PyObject *__pyx_n_s_r23;
-static PyObject *__pyx_n_s_r24;
-static PyObject *__pyx_n_s_r25;
-static PyObject *__pyx_n_s_r26;
-static PyObject *__pyx_n_s_r27;
-static PyObject *__pyx_n_s_r28;
-static PyObject *__pyx_n_s_r29;
-static PyObject *__pyx_n_s_r3;
-static PyObject *__pyx_n_s_r30;
-static PyObject *__pyx_n_s_r31;
-static PyObject *__pyx_n_s_r32;
-static PyObject *__pyx_n_s_r33;
-static PyObject *__pyx_n_s_r34;
-static PyObject *__pyx_n_s_r35;
-static PyObject *__pyx_n_s_r36;
-static PyObject *__pyx_n_s_r37;
-static PyObject *__pyx_n_s_r38;
-static PyObject *__pyx_n_s_r39;
-static PyObject *__pyx_n_s_r4;
-static PyObject *__pyx_n_s_r40;
-static PyObject *__pyx_n_s_r41;
-static PyObject *__pyx_n_s_r42;
-static PyObject *__pyx_n_s_r43;
-static PyObject *__pyx_n_s_r44;
-static PyObject *__pyx_n_s_r45;
-static PyObject *__pyx_n_s_r46;
-static PyObject *__pyx_n_s_r47;
-static PyObject *__pyx_n_s_r48;
-static PyObject *__pyx_n_s_r49;
-static PyObject *__pyx_n_s_r5;
-static PyObject *__pyx_n_s_r50;
-static PyObject *__pyx_n_s_r51;
-static PyObject *__pyx_n_s_r52;
-static PyObject *__pyx_n_s_r53;
-static PyObject *__pyx_n_s_r54;
-static PyObject *__pyx_n_s_r55;
-static PyObject *__pyx_n_s_r56;
-static PyObject *__pyx_n_s_r57;
-static PyObject *__pyx_n_s_r58;
-static PyObject *__pyx_n_s_r59;
-static PyObject *__pyx_n_s_r6;
-static PyObject *__pyx_n_s_r60;
-static PyObject *__pyx_n_s_r61;
-static PyObject *__pyx_n_s_r62;
-static PyObject *__pyx_n_s_r63;
-static PyObject *__pyx_n_s_r64;
-static PyObject *__pyx_n_s_r65;
-static PyObject *__pyx_n_s_r66;
-static PyObject *__pyx_n_s_r67;
-static PyObject *__pyx_n_s_r68;
-static PyObject *__pyx_n_s_r69;
-static PyObject *__pyx_n_s_r7;
-static PyObject *__pyx_n_s_r70;
-static PyObject *__pyx_n_s_r71;
-static PyObject *__pyx_n_s_r72;
-static PyObject *__pyx_n_s_r73;
-static PyObject *__pyx_n_s_r74;
-static PyObject *__pyx_n_s_r75;
-static PyObject *__pyx_n_s_r76;
-static PyObject *__pyx_n_s_r77;
-static PyObject *__pyx_n_s_r78;
-static PyObject *__pyx_n_s_r79;
-static PyObject *__pyx_n_s_r8;
-static PyObject *__pyx_n_s_r80;
-static PyObject *__pyx_n_s_r81;
-static PyObject *__pyx_n_s_r82;
-static PyObject *__pyx_n_s_r83;
-static PyObject *__pyx_n_s_r84;
-static PyObject *__pyx_n_s_r85;
-static PyObject *__pyx_n_s_r86;
-static PyObject *__pyx_n_s_r87;
-static PyObject *__pyx_n_s_r88;
-static PyObject *__pyx_n_s_r89;
-static PyObject *__pyx_n_s_r9;
-static PyObject *__pyx_n_s_r90;
-static PyObject *__pyx_n_s_r91;
-static PyObject *__pyx_n_s_r92;
-static PyObject *__pyx_n_s_r93;
-static PyObject *__pyx_n_s_r94;
-static PyObject *__pyx_n_s_r95;
-static PyObject *__pyx_n_s_r96;
-static PyObject *__pyx_n_s_r97;
-static PyObject *__pyx_n_s_r98;
-static PyObject *__pyx_n_s_r99;
-static PyObject *__pyx_n_s_self;
-static PyObject *__pyx_n_s_test;
-static PyObject *__pyx_n_s_x;
-static PyObject *__pyx_n_s_x0;
-static PyObject *__pyx_n_s_x1;
-static PyObject *__pyx_n_s_x2;
-static PyObject *__pyx_n_s_x3;
-static PyObject *__pyx_n_s_y;
-static PyObject *__pyx_n_s_y0;
-static PyObject *__pyx_n_s_y1;
-static PyObject *__pyx_n_s_y2;
-static PyObject *__pyx_n_s_y3;
-static PyObject *__pyx_pf_9fontTools_4pens_10momentsPen_10MomentsPen___init__(CYTHON_UNUSED PyObject *__pyx_self, PyObject *__pyx_v_self, PyObject *__pyx_v_glyphset); /* proto */
-static PyObject *__pyx_pf_9fontTools_4pens_10momentsPen_10MomentsPen_2_moveTo(CYTHON_UNUSED PyObject *__pyx_self, PyObject *__pyx_v_self, PyObject *__pyx_v_p0); /* proto */
-static PyObject *__pyx_pf_9fontTools_4pens_10momentsPen_10MomentsPen_4_closePath(CYTHON_UNUSED PyObject *__pyx_self, PyObject *__pyx_v_self); /* proto */
-static PyObject *__pyx_pf_9fontTools_4pens_10momentsPen_10MomentsPen_6_endPath(CYTHON_UNUSED PyObject *__pyx_self, PyObject *__pyx_v_self); /* proto */
-static PyObject *__pyx_pf_9fontTools_4pens_10momentsPen_10MomentsPen_8_lineTo(CYTHON_UNUSED PyObject *__pyx_self, PyObject *__pyx_v_self, PyObject *__pyx_v_p1); /* proto */
-static PyObject *__pyx_pf_9fontTools_4pens_10momentsPen_10MomentsPen_10_qCurveToOne(CYTHON_UNUSED PyObject *__pyx_self, PyObject *__pyx_v_self, PyObject *__pyx_v_p1, PyObject *__pyx_v_p2); /* proto */
-static PyObject *__pyx_pf_9fontTools_4pens_10momentsPen_10MomentsPen_12_curveToOne(CYTHON_UNUSED PyObject *__pyx_self, PyObject *__pyx_v_self, PyObject *__pyx_v_p1, PyObject *__pyx_v_p2, PyObject *__pyx_v_p3); /* proto */
-static PyObject *__pyx_int_0;
-static PyObject *__pyx_int_1;
-static PyObject *__pyx_int_2;
-static PyObject *__pyx_tuple_;
-static PyObject *__pyx_tuple__3;
-static PyObject *__pyx_tuple__4;
-static PyObject *__pyx_tuple__6;
-static PyObject *__pyx_tuple__8;
-static PyObject *__pyx_tuple__10;
-static PyObject *__pyx_tuple__12;
-static PyObject *__pyx_tuple__14;
-static PyObject *__pyx_tuple__16;
-static PyObject *__pyx_codeobj__2;
-static PyObject *__pyx_codeobj__5;
-static PyObject *__pyx_codeobj__7;
-static PyObject *__pyx_codeobj__9;
-static PyObject *__pyx_codeobj__11;
-static PyObject *__pyx_codeobj__13;
-static PyObject *__pyx_codeobj__15;
-/* Late includes */
-
-/* "fontTools/pens/momentsPen.py":18
- *
- * class MomentsPen(BasePen):
- * def __init__(self, glyphset=None): # <<<<<<<<<<<<<<
- * BasePen.__init__(self, glyphset)
- *
- */
-
-/* Python wrapper */
-static PyObject *__pyx_pw_9fontTools_4pens_10momentsPen_10MomentsPen_1__init__(PyObject *__pyx_self, PyObject *__pyx_args, PyObject *__pyx_kwds); /*proto*/
-static char __pyx_doc_9fontTools_4pens_10momentsPen_10MomentsPen___init__[] = "MomentsPen.__init__(self, glyphset=None)";
-static PyMethodDef __pyx_mdef_9fontTools_4pens_10momentsPen_10MomentsPen_1__init__ = {"__init__", (PyCFunction)(void*)(PyCFunctionWithKeywords)__pyx_pw_9fontTools_4pens_10momentsPen_10MomentsPen_1__init__, METH_VARARGS|METH_KEYWORDS, __pyx_doc_9fontTools_4pens_10momentsPen_10MomentsPen___init__};
-static PyObject *__pyx_pw_9fontTools_4pens_10momentsPen_10MomentsPen_1__init__(PyObject *__pyx_self, PyObject *__pyx_args, PyObject *__pyx_kwds) {
- PyObject *__pyx_v_self = 0;
- PyObject *__pyx_v_glyphset = 0;
- int __pyx_lineno = 0;
- const char *__pyx_filename = NULL;
- int __pyx_clineno = 0;
- PyObject *__pyx_r = 0;
- __Pyx_RefNannyDeclarations
- __Pyx_RefNannySetupContext("__init__ (wrapper)", 0);
- {
- static PyObject **__pyx_pyargnames[] = {&__pyx_n_s_self,&__pyx_n_s_glyphset,0};
- PyObject* values[2] = {0,0};
- values[1] = ((PyObject *)((PyObject *)Py_None));
- if (unlikely(__pyx_kwds)) {
- Py_ssize_t kw_args;
- const Py_ssize_t pos_args = PyTuple_GET_SIZE(__pyx_args);
- switch (pos_args) {
- case 2: values[1] = PyTuple_GET_ITEM(__pyx_args, 1);
- CYTHON_FALLTHROUGH;
- case 1: values[0] = PyTuple_GET_ITEM(__pyx_args, 0);
- CYTHON_FALLTHROUGH;
- case 0: break;
- default: goto __pyx_L5_argtuple_error;
- }
- kw_args = PyDict_Size(__pyx_kwds);
- switch (pos_args) {
- case 0:
- if (likely((values[0] = __Pyx_PyDict_GetItemStr(__pyx_kwds, __pyx_n_s_self)) != 0)) kw_args--;
- else goto __pyx_L5_argtuple_error;
- CYTHON_FALLTHROUGH;
- case 1:
- if (kw_args > 0) {
- PyObject* value = __Pyx_PyDict_GetItemStr(__pyx_kwds, __pyx_n_s_glyphset);
- if (value) { values[1] = value; kw_args--; }
- }
- }
- if (unlikely(kw_args > 0)) {
- if (unlikely(__Pyx_ParseOptionalKeywords(__pyx_kwds, __pyx_pyargnames, 0, values, pos_args, "__init__") < 0)) __PYX_ERR(0, 18, __pyx_L3_error)
- }
- } else {
- switch (PyTuple_GET_SIZE(__pyx_args)) {
- case 2: values[1] = PyTuple_GET_ITEM(__pyx_args, 1);
- CYTHON_FALLTHROUGH;
- case 1: values[0] = PyTuple_GET_ITEM(__pyx_args, 0);
- break;
- default: goto __pyx_L5_argtuple_error;
- }
- }
- __pyx_v_self = values[0];
- __pyx_v_glyphset = values[1];
- }
- goto __pyx_L4_argument_unpacking_done;
- __pyx_L5_argtuple_error:;
- __Pyx_RaiseArgtupleInvalid("__init__", 0, 1, 2, PyTuple_GET_SIZE(__pyx_args)); __PYX_ERR(0, 18, __pyx_L3_error)
- __pyx_L3_error:;
- __Pyx_AddTraceback("fontTools.pens.momentsPen.MomentsPen.__init__", __pyx_clineno, __pyx_lineno, __pyx_filename);
- __Pyx_RefNannyFinishContext();
- return NULL;
- __pyx_L4_argument_unpacking_done:;
- __pyx_r = __pyx_pf_9fontTools_4pens_10momentsPen_10MomentsPen___init__(__pyx_self, __pyx_v_self, __pyx_v_glyphset);
-
- /* function exit code */
- __Pyx_RefNannyFinishContext();
- return __pyx_r;
-}
-
-static PyObject *__pyx_pf_9fontTools_4pens_10momentsPen_10MomentsPen___init__(CYTHON_UNUSED PyObject *__pyx_self, PyObject *__pyx_v_self, PyObject *__pyx_v_glyphset) {
- PyObject *__pyx_r = NULL;
- __Pyx_RefNannyDeclarations
- PyObject *__pyx_t_1 = NULL;
- PyObject *__pyx_t_2 = NULL;
- PyObject *__pyx_t_3 = NULL;
- int __pyx_t_4;
- PyObject *__pyx_t_5 = NULL;
- int __pyx_lineno = 0;
- const char *__pyx_filename = NULL;
- int __pyx_clineno = 0;
- __Pyx_RefNannySetupContext("__init__", 0);
-
- /* "fontTools/pens/momentsPen.py":19
- * class MomentsPen(BasePen):
- * def __init__(self, glyphset=None):
- * BasePen.__init__(self, glyphset) # <<<<<<<<<<<<<<
- *
- * self.area = 0
- */
- __Pyx_GetModuleGlobalName(__pyx_t_2, __pyx_n_s_BasePen); if (unlikely(!__pyx_t_2)) __PYX_ERR(0, 19, __pyx_L1_error)
- __Pyx_GOTREF(__pyx_t_2);
- __pyx_t_3 = __Pyx_PyObject_GetAttrStr(__pyx_t_2, __pyx_n_s_init); if (unlikely(!__pyx_t_3)) __PYX_ERR(0, 19, __pyx_L1_error)
- __Pyx_GOTREF(__pyx_t_3);
- __Pyx_DECREF(__pyx_t_2); __pyx_t_2 = 0;
- __pyx_t_2 = NULL;
- __pyx_t_4 = 0;
- if (CYTHON_UNPACK_METHODS && unlikely(PyMethod_Check(__pyx_t_3))) {
- __pyx_t_2 = PyMethod_GET_SELF(__pyx_t_3);
- if (likely(__pyx_t_2)) {
- PyObject* function = PyMethod_GET_FUNCTION(__pyx_t_3);
- __Pyx_INCREF(__pyx_t_2);
- __Pyx_INCREF(function);
- __Pyx_DECREF_SET(__pyx_t_3, function);
- __pyx_t_4 = 1;
- }
- }
- #if CYTHON_FAST_PYCALL
- if (PyFunction_Check(__pyx_t_3)) {
- PyObject *__pyx_temp[3] = {__pyx_t_2, __pyx_v_self, __pyx_v_glyphset};
- __pyx_t_1 = __Pyx_PyFunction_FastCall(__pyx_t_3, __pyx_temp+1-__pyx_t_4, 2+__pyx_t_4); if (unlikely(!__pyx_t_1)) __PYX_ERR(0, 19, __pyx_L1_error)
- __Pyx_XDECREF(__pyx_t_2); __pyx_t_2 = 0;
- __Pyx_GOTREF(__pyx_t_1);
- } else
- #endif
- #if CYTHON_FAST_PYCCALL
- if (__Pyx_PyFastCFunction_Check(__pyx_t_3)) {
- PyObject *__pyx_temp[3] = {__pyx_t_2, __pyx_v_self, __pyx_v_glyphset};
- __pyx_t_1 = __Pyx_PyCFunction_FastCall(__pyx_t_3, __pyx_temp+1-__pyx_t_4, 2+__pyx_t_4); if (unlikely(!__pyx_t_1)) __PYX_ERR(0, 19, __pyx_L1_error)
- __Pyx_XDECREF(__pyx_t_2); __pyx_t_2 = 0;
- __Pyx_GOTREF(__pyx_t_1);
- } else
- #endif
- {
- __pyx_t_5 = PyTuple_New(2+__pyx_t_4); if (unlikely(!__pyx_t_5)) __PYX_ERR(0, 19, __pyx_L1_error)
- __Pyx_GOTREF(__pyx_t_5);
- if (__pyx_t_2) {
- __Pyx_GIVEREF(__pyx_t_2); PyTuple_SET_ITEM(__pyx_t_5, 0, __pyx_t_2); __pyx_t_2 = NULL;
- }
- __Pyx_INCREF(__pyx_v_self);
- __Pyx_GIVEREF(__pyx_v_self);
- PyTuple_SET_ITEM(__pyx_t_5, 0+__pyx_t_4, __pyx_v_self);
- __Pyx_INCREF(__pyx_v_glyphset);
- __Pyx_GIVEREF(__pyx_v_glyphset);
- PyTuple_SET_ITEM(__pyx_t_5, 1+__pyx_t_4, __pyx_v_glyphset);
- __pyx_t_1 = __Pyx_PyObject_Call(__pyx_t_3, __pyx_t_5, NULL); if (unlikely(!__pyx_t_1)) __PYX_ERR(0, 19, __pyx_L1_error)
- __Pyx_GOTREF(__pyx_t_1);
- __Pyx_DECREF(__pyx_t_5); __pyx_t_5 = 0;
- }
- __Pyx_DECREF(__pyx_t_3); __pyx_t_3 = 0;
- __Pyx_DECREF(__pyx_t_1); __pyx_t_1 = 0;
-
- /* "fontTools/pens/momentsPen.py":21
- * BasePen.__init__(self, glyphset)
- *
- * self.area = 0 # <<<<<<<<<<<<<<
- * self.momentX = 0
- * self.momentY = 0
- */
- if (__Pyx_PyObject_SetAttrStr(__pyx_v_self, __pyx_n_s_area, __pyx_int_0) < 0) __PYX_ERR(0, 21, __pyx_L1_error)
-
- /* "fontTools/pens/momentsPen.py":22
- *
- * self.area = 0
- * self.momentX = 0 # <<<<<<<<<<<<<<
- * self.momentY = 0
- * self.momentXX = 0
- */
- if (__Pyx_PyObject_SetAttrStr(__pyx_v_self, __pyx_n_s_momentX, __pyx_int_0) < 0) __PYX_ERR(0, 22, __pyx_L1_error)
-
- /* "fontTools/pens/momentsPen.py":23
- * self.area = 0
- * self.momentX = 0
- * self.momentY = 0 # <<<<<<<<<<<<<<
- * self.momentXX = 0
- * self.momentXY = 0
- */
- if (__Pyx_PyObject_SetAttrStr(__pyx_v_self, __pyx_n_s_momentY, __pyx_int_0) < 0) __PYX_ERR(0, 23, __pyx_L1_error)
-
- /* "fontTools/pens/momentsPen.py":24
- * self.momentX = 0
- * self.momentY = 0
- * self.momentXX = 0 # <<<<<<<<<<<<<<
- * self.momentXY = 0
- * self.momentYY = 0
- */
- if (__Pyx_PyObject_SetAttrStr(__pyx_v_self, __pyx_n_s_momentXX, __pyx_int_0) < 0) __PYX_ERR(0, 24, __pyx_L1_error)
-
- /* "fontTools/pens/momentsPen.py":25
- * self.momentY = 0
- * self.momentXX = 0
- * self.momentXY = 0 # <<<<<<<<<<<<<<
- * self.momentYY = 0
- *
- */
- if (__Pyx_PyObject_SetAttrStr(__pyx_v_self, __pyx_n_s_momentXY, __pyx_int_0) < 0) __PYX_ERR(0, 25, __pyx_L1_error)
-
- /* "fontTools/pens/momentsPen.py":26
- * self.momentXX = 0
- * self.momentXY = 0
- * self.momentYY = 0 # <<<<<<<<<<<<<<
- *
- * def _moveTo(self, p0):
- */
- if (__Pyx_PyObject_SetAttrStr(__pyx_v_self, __pyx_n_s_momentYY, __pyx_int_0) < 0) __PYX_ERR(0, 26, __pyx_L1_error)
-
- /* "fontTools/pens/momentsPen.py":18
- *
- * class MomentsPen(BasePen):
- * def __init__(self, glyphset=None): # <<<<<<<<<<<<<<
- * BasePen.__init__(self, glyphset)
- *
- */
-
- /* function exit code */
- __pyx_r = Py_None; __Pyx_INCREF(Py_None);
- goto __pyx_L0;
- __pyx_L1_error:;
- __Pyx_XDECREF(__pyx_t_1);
- __Pyx_XDECREF(__pyx_t_2);
- __Pyx_XDECREF(__pyx_t_3);
- __Pyx_XDECREF(__pyx_t_5);
- __Pyx_AddTraceback("fontTools.pens.momentsPen.MomentsPen.__init__", __pyx_clineno, __pyx_lineno, __pyx_filename);
- __pyx_r = NULL;
- __pyx_L0:;
- __Pyx_XGIVEREF(__pyx_r);
- __Pyx_RefNannyFinishContext();
- return __pyx_r;
-}
-
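-/* A minimal illustrative sketch (not part of the generated module) of the
- * state that __init__ zero-initializes above: one double-precision
- * accumulator per Green-theorem integral. The struct and function names
- * are hypothetical, chosen only to mirror the Python attribute names. */
-typedef struct {
-    double area;     /* signed area */
-    double momentX;  /* first moment: integral of x over the area */
-    double momentY;  /* first moment: integral of y over the area */
-    double momentXX; /* second moment: integral of x*x */
-    double momentXY; /* second moment: integral of x*y */
-    double momentYY; /* second moment: integral of y*y */
-} MomentsSketch;
-
-static void moments_sketch_init(MomentsSketch *m) {
-    m->area = m->momentX = m->momentY = 0.0;
-    m->momentXX = m->momentXY = m->momentYY = 0.0;
-}
-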
-/* "fontTools/pens/momentsPen.py":28
- * self.momentYY = 0
- *
- * def _moveTo(self, p0): # <<<<<<<<<<<<<<
- * self.__startPoint = p0
- *
- */
-
-/* Python wrapper */
-static PyObject *__pyx_pw_9fontTools_4pens_10momentsPen_10MomentsPen_3_moveTo(PyObject *__pyx_self, PyObject *__pyx_args, PyObject *__pyx_kwds); /*proto*/
-static char __pyx_doc_9fontTools_4pens_10momentsPen_10MomentsPen_2_moveTo[] = "MomentsPen._moveTo(self, p0)";
-static PyMethodDef __pyx_mdef_9fontTools_4pens_10momentsPen_10MomentsPen_3_moveTo = {"_moveTo", (PyCFunction)(void*)(PyCFunctionWithKeywords)__pyx_pw_9fontTools_4pens_10momentsPen_10MomentsPen_3_moveTo, METH_VARARGS|METH_KEYWORDS, __pyx_doc_9fontTools_4pens_10momentsPen_10MomentsPen_2_moveTo};
-static PyObject *__pyx_pw_9fontTools_4pens_10momentsPen_10MomentsPen_3_moveTo(PyObject *__pyx_self, PyObject *__pyx_args, PyObject *__pyx_kwds) {
- PyObject *__pyx_v_self = 0;
- PyObject *__pyx_v_p0 = 0;
- int __pyx_lineno = 0;
- const char *__pyx_filename = NULL;
- int __pyx_clineno = 0;
- PyObject *__pyx_r = 0;
- __Pyx_RefNannyDeclarations
- __Pyx_RefNannySetupContext("_moveTo (wrapper)", 0);
- {
- static PyObject **__pyx_pyargnames[] = {&__pyx_n_s_self,&__pyx_n_s_p0,0};
- PyObject* values[2] = {0,0};
- if (unlikely(__pyx_kwds)) {
- Py_ssize_t kw_args;
- const Py_ssize_t pos_args = PyTuple_GET_SIZE(__pyx_args);
- switch (pos_args) {
- case 2: values[1] = PyTuple_GET_ITEM(__pyx_args, 1);
- CYTHON_FALLTHROUGH;
- case 1: values[0] = PyTuple_GET_ITEM(__pyx_args, 0);
- CYTHON_FALLTHROUGH;
- case 0: break;
- default: goto __pyx_L5_argtuple_error;
- }
- kw_args = PyDict_Size(__pyx_kwds);
- switch (pos_args) {
- case 0:
- if (likely((values[0] = __Pyx_PyDict_GetItemStr(__pyx_kwds, __pyx_n_s_self)) != 0)) kw_args--;
- else goto __pyx_L5_argtuple_error;
- CYTHON_FALLTHROUGH;
- case 1:
- if (likely((values[1] = __Pyx_PyDict_GetItemStr(__pyx_kwds, __pyx_n_s_p0)) != 0)) kw_args--;
- else {
- __Pyx_RaiseArgtupleInvalid("_moveTo", 1, 2, 2, 1); __PYX_ERR(0, 28, __pyx_L3_error)
- }
- }
- if (unlikely(kw_args > 0)) {
- if (unlikely(__Pyx_ParseOptionalKeywords(__pyx_kwds, __pyx_pyargnames, 0, values, pos_args, "_moveTo") < 0)) __PYX_ERR(0, 28, __pyx_L3_error)
- }
- } else if (PyTuple_GET_SIZE(__pyx_args) != 2) {
- goto __pyx_L5_argtuple_error;
- } else {
- values[0] = PyTuple_GET_ITEM(__pyx_args, 0);
- values[1] = PyTuple_GET_ITEM(__pyx_args, 1);
- }
- __pyx_v_self = values[0];
- __pyx_v_p0 = values[1];
- }
- goto __pyx_L4_argument_unpacking_done;
- __pyx_L5_argtuple_error:;
- __Pyx_RaiseArgtupleInvalid("_moveTo", 1, 2, 2, PyTuple_GET_SIZE(__pyx_args)); __PYX_ERR(0, 28, __pyx_L3_error)
- __pyx_L3_error:;
- __Pyx_AddTraceback("fontTools.pens.momentsPen.MomentsPen._moveTo", __pyx_clineno, __pyx_lineno, __pyx_filename);
- __Pyx_RefNannyFinishContext();
- return NULL;
- __pyx_L4_argument_unpacking_done:;
- __pyx_r = __pyx_pf_9fontTools_4pens_10momentsPen_10MomentsPen_2_moveTo(__pyx_self, __pyx_v_self, __pyx_v_p0);
-
- /* function exit code */
- __Pyx_RefNannyFinishContext();
- return __pyx_r;
-}
-
-static PyObject *__pyx_pf_9fontTools_4pens_10momentsPen_10MomentsPen_2_moveTo(CYTHON_UNUSED PyObject *__pyx_self, PyObject *__pyx_v_self, PyObject *__pyx_v_p0) {
- PyObject *__pyx_r = NULL;
- __Pyx_RefNannyDeclarations
- int __pyx_lineno = 0;
- const char *__pyx_filename = NULL;
- int __pyx_clineno = 0;
- __Pyx_RefNannySetupContext("_moveTo", 0);
-
- /* "fontTools/pens/momentsPen.py":29
- *
- * def _moveTo(self, p0):
- * self.__startPoint = p0 # <<<<<<<<<<<<<<
- *
- * def _closePath(self):
- */
- if (__Pyx_PyObject_SetAttrStr(__pyx_v_self, __pyx_n_s_MomentsPen__startPoint, __pyx_v_p0) < 0) __PYX_ERR(0, 29, __pyx_L1_error)
-
- /* "fontTools/pens/momentsPen.py":28
- * self.momentYY = 0
- *
- * def _moveTo(self, p0): # <<<<<<<<<<<<<<
- * self.__startPoint = p0
- *
- */
-
- /* function exit code */
- __pyx_r = Py_None; __Pyx_INCREF(Py_None);
- goto __pyx_L0;
- __pyx_L1_error:;
- __Pyx_AddTraceback("fontTools.pens.momentsPen.MomentsPen._moveTo", __pyx_clineno, __pyx_lineno, __pyx_filename);
- __pyx_r = NULL;
- __pyx_L0:;
- __Pyx_XGIVEREF(__pyx_r);
- __Pyx_RefNannyFinishContext();
- return __pyx_r;
-}
-
-/* "fontTools/pens/momentsPen.py":31
- * self.__startPoint = p0
- *
- * def _closePath(self): # <<<<<<<<<<<<<<
- * p0 = self._getCurrentPoint()
- * if p0 != self.__startPoint:
- */
-
-/* Python wrapper */
-static PyObject *__pyx_pw_9fontTools_4pens_10momentsPen_10MomentsPen_5_closePath(PyObject *__pyx_self, PyObject *__pyx_v_self); /*proto*/
-static char __pyx_doc_9fontTools_4pens_10momentsPen_10MomentsPen_4_closePath[] = "MomentsPen._closePath(self)";
-static PyMethodDef __pyx_mdef_9fontTools_4pens_10momentsPen_10MomentsPen_5_closePath = {"_closePath", (PyCFunction)__pyx_pw_9fontTools_4pens_10momentsPen_10MomentsPen_5_closePath, METH_O, __pyx_doc_9fontTools_4pens_10momentsPen_10MomentsPen_4_closePath};
-static PyObject *__pyx_pw_9fontTools_4pens_10momentsPen_10MomentsPen_5_closePath(PyObject *__pyx_self, PyObject *__pyx_v_self) {
- PyObject *__pyx_r = 0;
- __Pyx_RefNannyDeclarations
- __Pyx_RefNannySetupContext("_closePath (wrapper)", 0);
- __pyx_r = __pyx_pf_9fontTools_4pens_10momentsPen_10MomentsPen_4_closePath(__pyx_self, ((PyObject *)__pyx_v_self));
-
- /* function exit code */
- __Pyx_RefNannyFinishContext();
- return __pyx_r;
-}
-
-static PyObject *__pyx_pf_9fontTools_4pens_10momentsPen_10MomentsPen_4_closePath(CYTHON_UNUSED PyObject *__pyx_self, PyObject *__pyx_v_self) {
- PyObject *__pyx_v_p0 = NULL;
- PyObject *__pyx_r = NULL;
- __Pyx_RefNannyDeclarations
- PyObject *__pyx_t_1 = NULL;
- PyObject *__pyx_t_2 = NULL;
- PyObject *__pyx_t_3 = NULL;
- int __pyx_t_4;
- PyObject *__pyx_t_5 = NULL;
- int __pyx_lineno = 0;
- const char *__pyx_filename = NULL;
- int __pyx_clineno = 0;
- __Pyx_RefNannySetupContext("_closePath", 0);
-
- /* "fontTools/pens/momentsPen.py":32
- *
- * def _closePath(self):
- * p0 = self._getCurrentPoint() # <<<<<<<<<<<<<<
- * if p0 != self.__startPoint:
- * self._lineTo(self.__startPoint)
- */
- __pyx_t_2 = __Pyx_PyObject_GetAttrStr(__pyx_v_self, __pyx_n_s_getCurrentPoint); if (unlikely(!__pyx_t_2)) __PYX_ERR(0, 32, __pyx_L1_error)
- __Pyx_GOTREF(__pyx_t_2);
- __pyx_t_3 = NULL;
- if (CYTHON_UNPACK_METHODS && likely(PyMethod_Check(__pyx_t_2))) {
- __pyx_t_3 = PyMethod_GET_SELF(__pyx_t_2);
- if (likely(__pyx_t_3)) {
- PyObject* function = PyMethod_GET_FUNCTION(__pyx_t_2);
- __Pyx_INCREF(__pyx_t_3);
- __Pyx_INCREF(function);
- __Pyx_DECREF_SET(__pyx_t_2, function);
- }
- }
- __pyx_t_1 = (__pyx_t_3) ? __Pyx_PyObject_CallOneArg(__pyx_t_2, __pyx_t_3) : __Pyx_PyObject_CallNoArg(__pyx_t_2);
- __Pyx_XDECREF(__pyx_t_3); __pyx_t_3 = 0;
- if (unlikely(!__pyx_t_1)) __PYX_ERR(0, 32, __pyx_L1_error)
- __Pyx_GOTREF(__pyx_t_1);
- __Pyx_DECREF(__pyx_t_2); __pyx_t_2 = 0;
- __pyx_v_p0 = __pyx_t_1;
- __pyx_t_1 = 0;
-
- /* "fontTools/pens/momentsPen.py":33
- * def _closePath(self):
- * p0 = self._getCurrentPoint()
- * if p0 != self.__startPoint: # <<<<<<<<<<<<<<
- * self._lineTo(self.__startPoint)
- *
- */
- __pyx_t_1 = __Pyx_PyObject_GetAttrStr(__pyx_v_self, __pyx_n_s_MomentsPen__startPoint); if (unlikely(!__pyx_t_1)) __PYX_ERR(0, 33, __pyx_L1_error)
- __Pyx_GOTREF(__pyx_t_1);
- __pyx_t_2 = PyObject_RichCompare(__pyx_v_p0, __pyx_t_1, Py_NE); __Pyx_XGOTREF(__pyx_t_2); if (unlikely(!__pyx_t_2)) __PYX_ERR(0, 33, __pyx_L1_error)
- __Pyx_DECREF(__pyx_t_1); __pyx_t_1 = 0;
- __pyx_t_4 = __Pyx_PyObject_IsTrue(__pyx_t_2); if (unlikely(__pyx_t_4 < 0)) __PYX_ERR(0, 33, __pyx_L1_error)
- __Pyx_DECREF(__pyx_t_2); __pyx_t_2 = 0;
- if (__pyx_t_4) {
-
- /* "fontTools/pens/momentsPen.py":34
- * p0 = self._getCurrentPoint()
- * if p0 != self.__startPoint:
- * self._lineTo(self.__startPoint) # <<<<<<<<<<<<<<
- *
- * def _endPath(self):
- */
- __pyx_t_1 = __Pyx_PyObject_GetAttrStr(__pyx_v_self, __pyx_n_s_lineTo); if (unlikely(!__pyx_t_1)) __PYX_ERR(0, 34, __pyx_L1_error)
- __Pyx_GOTREF(__pyx_t_1);
- __pyx_t_3 = __Pyx_PyObject_GetAttrStr(__pyx_v_self, __pyx_n_s_MomentsPen__startPoint); if (unlikely(!__pyx_t_3)) __PYX_ERR(0, 34, __pyx_L1_error)
- __Pyx_GOTREF(__pyx_t_3);
- __pyx_t_5 = NULL;
- if (CYTHON_UNPACK_METHODS && likely(PyMethod_Check(__pyx_t_1))) {
- __pyx_t_5 = PyMethod_GET_SELF(__pyx_t_1);
- if (likely(__pyx_t_5)) {
- PyObject* function = PyMethod_GET_FUNCTION(__pyx_t_1);
- __Pyx_INCREF(__pyx_t_5);
- __Pyx_INCREF(function);
- __Pyx_DECREF_SET(__pyx_t_1, function);
- }
- }
- __pyx_t_2 = (__pyx_t_5) ? __Pyx_PyObject_Call2Args(__pyx_t_1, __pyx_t_5, __pyx_t_3) : __Pyx_PyObject_CallOneArg(__pyx_t_1, __pyx_t_3);
- __Pyx_XDECREF(__pyx_t_5); __pyx_t_5 = 0;
- __Pyx_DECREF(__pyx_t_3); __pyx_t_3 = 0;
- if (unlikely(!__pyx_t_2)) __PYX_ERR(0, 34, __pyx_L1_error)
- __Pyx_GOTREF(__pyx_t_2);
- __Pyx_DECREF(__pyx_t_1); __pyx_t_1 = 0;
- __Pyx_DECREF(__pyx_t_2); __pyx_t_2 = 0;
-
- /* "fontTools/pens/momentsPen.py":33
- * def _closePath(self):
- * p0 = self._getCurrentPoint()
- * if p0 != self.__startPoint: # <<<<<<<<<<<<<<
- * self._lineTo(self.__startPoint)
- *
- */
- }
-
- /* "fontTools/pens/momentsPen.py":31
- * self.__startPoint = p0
- *
- * def _closePath(self): # <<<<<<<<<<<<<<
- * p0 = self._getCurrentPoint()
- * if p0 != self.__startPoint:
- */
-
- /* function exit code */
- __pyx_r = Py_None; __Pyx_INCREF(Py_None);
- goto __pyx_L0;
- __pyx_L1_error:;
- __Pyx_XDECREF(__pyx_t_1);
- __Pyx_XDECREF(__pyx_t_2);
- __Pyx_XDECREF(__pyx_t_3);
- __Pyx_XDECREF(__pyx_t_5);
- __Pyx_AddTraceback("fontTools.pens.momentsPen.MomentsPen._closePath", __pyx_clineno, __pyx_lineno, __pyx_filename);
- __pyx_r = NULL;
- __pyx_L0:;
- __Pyx_XDECREF(__pyx_v_p0);
- __Pyx_XGIVEREF(__pyx_r);
- __Pyx_RefNannyFinishContext();
- return __pyx_r;
-}
-
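-/* Note on _closePath above: if the contour does not already end on its
- * start point, the generated code synthesizes the closing segment with
- * _lineTo(self.__startPoint), so every contour that reaches the moment
- * formulas is effectively closed. */
-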
-/* "fontTools/pens/momentsPen.py":36
- * self._lineTo(self.__startPoint)
- *
- * def _endPath(self): # <<<<<<<<<<<<<<
- * p0 = self._getCurrentPoint()
- * if p0 != self.__startPoint:
- */
-
-/* Python wrapper */
-static PyObject *__pyx_pw_9fontTools_4pens_10momentsPen_10MomentsPen_7_endPath(PyObject *__pyx_self, PyObject *__pyx_v_self); /*proto*/
-static char __pyx_doc_9fontTools_4pens_10momentsPen_10MomentsPen_6_endPath[] = "MomentsPen._endPath(self)";
-static PyMethodDef __pyx_mdef_9fontTools_4pens_10momentsPen_10MomentsPen_7_endPath = {"_endPath", (PyCFunction)__pyx_pw_9fontTools_4pens_10momentsPen_10MomentsPen_7_endPath, METH_O, __pyx_doc_9fontTools_4pens_10momentsPen_10MomentsPen_6_endPath};
-static PyObject *__pyx_pw_9fontTools_4pens_10momentsPen_10MomentsPen_7_endPath(PyObject *__pyx_self, PyObject *__pyx_v_self) {
- PyObject *__pyx_r = 0;
- __Pyx_RefNannyDeclarations
- __Pyx_RefNannySetupContext("_endPath (wrapper)", 0);
- __pyx_r = __pyx_pf_9fontTools_4pens_10momentsPen_10MomentsPen_6_endPath(__pyx_self, ((PyObject *)__pyx_v_self));
-
- /* function exit code */
- __Pyx_RefNannyFinishContext();
- return __pyx_r;
-}
-
-static PyObject *__pyx_pf_9fontTools_4pens_10momentsPen_10MomentsPen_6_endPath(CYTHON_UNUSED PyObject *__pyx_self, PyObject *__pyx_v_self) {
- PyObject *__pyx_v_p0 = NULL;
- PyObject *__pyx_r = NULL;
- __Pyx_RefNannyDeclarations
- PyObject *__pyx_t_1 = NULL;
- PyObject *__pyx_t_2 = NULL;
- PyObject *__pyx_t_3 = NULL;
- int __pyx_t_4;
- int __pyx_lineno = 0;
- const char *__pyx_filename = NULL;
- int __pyx_clineno = 0;
- __Pyx_RefNannySetupContext("_endPath", 0);
-
- /* "fontTools/pens/momentsPen.py":37
- *
- * def _endPath(self):
- * p0 = self._getCurrentPoint() # <<<<<<<<<<<<<<
- * if p0 != self.__startPoint:
- * # Green theorem is not defined on open contours.
- */
- __pyx_t_2 = __Pyx_PyObject_GetAttrStr(__pyx_v_self, __pyx_n_s_getCurrentPoint); if (unlikely(!__pyx_t_2)) __PYX_ERR(0, 37, __pyx_L1_error)
- __Pyx_GOTREF(__pyx_t_2);
- __pyx_t_3 = NULL;
- if (CYTHON_UNPACK_METHODS && likely(PyMethod_Check(__pyx_t_2))) {
- __pyx_t_3 = PyMethod_GET_SELF(__pyx_t_2);
- if (likely(__pyx_t_3)) {
- PyObject* function = PyMethod_GET_FUNCTION(__pyx_t_2);
- __Pyx_INCREF(__pyx_t_3);
- __Pyx_INCREF(function);
- __Pyx_DECREF_SET(__pyx_t_2, function);
- }
- }
- __pyx_t_1 = (__pyx_t_3) ? __Pyx_PyObject_CallOneArg(__pyx_t_2, __pyx_t_3) : __Pyx_PyObject_CallNoArg(__pyx_t_2);
- __Pyx_XDECREF(__pyx_t_3); __pyx_t_3 = 0;
- if (unlikely(!__pyx_t_1)) __PYX_ERR(0, 37, __pyx_L1_error)
- __Pyx_GOTREF(__pyx_t_1);
- __Pyx_DECREF(__pyx_t_2); __pyx_t_2 = 0;
- __pyx_v_p0 = __pyx_t_1;
- __pyx_t_1 = 0;
-
- /* "fontTools/pens/momentsPen.py":38
- * def _endPath(self):
- * p0 = self._getCurrentPoint()
- * if p0 != self.__startPoint: # <<<<<<<<<<<<<<
- * # Green theorem is not defined on open contours.
- * raise OpenContourError("Green theorem is not defined on open contours.")
- */
- __pyx_t_1 = __Pyx_PyObject_GetAttrStr(__pyx_v_self, __pyx_n_s_MomentsPen__startPoint); if (unlikely(!__pyx_t_1)) __PYX_ERR(0, 38, __pyx_L1_error)
- __Pyx_GOTREF(__pyx_t_1);
- __pyx_t_2 = PyObject_RichCompare(__pyx_v_p0, __pyx_t_1, Py_NE); __Pyx_XGOTREF(__pyx_t_2); if (unlikely(!__pyx_t_2)) __PYX_ERR(0, 38, __pyx_L1_error)
- __Pyx_DECREF(__pyx_t_1); __pyx_t_1 = 0;
- __pyx_t_4 = __Pyx_PyObject_IsTrue(__pyx_t_2); if (unlikely(__pyx_t_4 < 0)) __PYX_ERR(0, 38, __pyx_L1_error)
- __Pyx_DECREF(__pyx_t_2); __pyx_t_2 = 0;
- if (unlikely(__pyx_t_4)) {
-
- /* "fontTools/pens/momentsPen.py":40
- * if p0 != self.__startPoint:
- * # Green theorem is not defined on open contours.
- * raise OpenContourError("Green theorem is not defined on open contours.") # <<<<<<<<<<<<<<
- *
- * @cython.locals(r0=cython.double)
- */
- __Pyx_GetModuleGlobalName(__pyx_t_1, __pyx_n_s_OpenContourError); if (unlikely(!__pyx_t_1)) __PYX_ERR(0, 40, __pyx_L1_error)
- __Pyx_GOTREF(__pyx_t_1);
- __pyx_t_3 = NULL;
- if (CYTHON_UNPACK_METHODS && unlikely(PyMethod_Check(__pyx_t_1))) {
- __pyx_t_3 = PyMethod_GET_SELF(__pyx_t_1);
- if (likely(__pyx_t_3)) {
- PyObject* function = PyMethod_GET_FUNCTION(__pyx_t_1);
- __Pyx_INCREF(__pyx_t_3);
- __Pyx_INCREF(function);
- __Pyx_DECREF_SET(__pyx_t_1, function);
- }
- }
- __pyx_t_2 = (__pyx_t_3) ? __Pyx_PyObject_Call2Args(__pyx_t_1, __pyx_t_3, __pyx_kp_u_Green_theorem_is_not_defined_on) : __Pyx_PyObject_CallOneArg(__pyx_t_1, __pyx_kp_u_Green_theorem_is_not_defined_on);
- __Pyx_XDECREF(__pyx_t_3); __pyx_t_3 = 0;
- if (unlikely(!__pyx_t_2)) __PYX_ERR(0, 40, __pyx_L1_error)
- __Pyx_GOTREF(__pyx_t_2);
- __Pyx_DECREF(__pyx_t_1); __pyx_t_1 = 0;
- __Pyx_Raise(__pyx_t_2, 0, 0, 0);
- __Pyx_DECREF(__pyx_t_2); __pyx_t_2 = 0;
- __PYX_ERR(0, 40, __pyx_L1_error)
-
- /* "fontTools/pens/momentsPen.py":38
- * def _endPath(self):
- * p0 = self._getCurrentPoint()
- * if p0 != self.__startPoint: # <<<<<<<<<<<<<<
- * # Green theorem is not defined on open contours.
- * raise OpenContourError("Green theorem is not defined on open contours.")
- */
- }
-
- /* "fontTools/pens/momentsPen.py":36
- * self._lineTo(self.__startPoint)
- *
- * def _endPath(self): # <<<<<<<<<<<<<<
- * p0 = self._getCurrentPoint()
- * if p0 != self.__startPoint:
- */
-
- /* function exit code */
- __pyx_r = Py_None; __Pyx_INCREF(Py_None);
- goto __pyx_L0;
- __pyx_L1_error:;
- __Pyx_XDECREF(__pyx_t_1);
- __Pyx_XDECREF(__pyx_t_2);
- __Pyx_XDECREF(__pyx_t_3);
- __Pyx_AddTraceback("fontTools.pens.momentsPen.MomentsPen._endPath", __pyx_clineno, __pyx_lineno, __pyx_filename);
- __pyx_r = NULL;
- __pyx_L0:;
- __Pyx_XDECREF(__pyx_v_p0);
- __Pyx_XGIVEREF(__pyx_r);
- __Pyx_RefNannyFinishContext();
- return __pyx_r;
-}
-
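-/* Note on _endPath above: the pen rests on Green's theorem, which trades
- * an integral over a region for an integral around its *closed* boundary
- * (e.g. area = 1/2 * the closed-path integral of (x dy - y dx)). An open
- * contour bounds no region, hence the OpenContourError. */
-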
-/* "fontTools/pens/momentsPen.py":57
- * @cython.locals(x0=cython.double, y0=cython.double)
- * @cython.locals(x1=cython.double, y1=cython.double)
- * def _lineTo(self, p1): # <<<<<<<<<<<<<<
- * x0, y0 = self._getCurrentPoint()
- * x1, y1 = p1
- */
-
-/* Python wrapper */
-static PyObject *__pyx_pw_9fontTools_4pens_10momentsPen_10MomentsPen_9_lineTo(PyObject *__pyx_self, PyObject *__pyx_args, PyObject *__pyx_kwds); /*proto*/
-static char __pyx_doc_9fontTools_4pens_10momentsPen_10MomentsPen_8_lineTo[] = "MomentsPen._lineTo(self, p1)";
-static PyMethodDef __pyx_mdef_9fontTools_4pens_10momentsPen_10MomentsPen_9_lineTo = {"_lineTo", (PyCFunction)(void*)(PyCFunctionWithKeywords)__pyx_pw_9fontTools_4pens_10momentsPen_10MomentsPen_9_lineTo, METH_VARARGS|METH_KEYWORDS, __pyx_doc_9fontTools_4pens_10momentsPen_10MomentsPen_8_lineTo};
-static PyObject *__pyx_pw_9fontTools_4pens_10momentsPen_10MomentsPen_9_lineTo(PyObject *__pyx_self, PyObject *__pyx_args, PyObject *__pyx_kwds) {
- PyObject *__pyx_v_self = 0;
- PyObject *__pyx_v_p1 = 0;
- int __pyx_lineno = 0;
- const char *__pyx_filename = NULL;
- int __pyx_clineno = 0;
- PyObject *__pyx_r = 0;
- __Pyx_RefNannyDeclarations
- __Pyx_RefNannySetupContext("_lineTo (wrapper)", 0);
- {
- static PyObject **__pyx_pyargnames[] = {&__pyx_n_s_self,&__pyx_n_s_p1,0};
- PyObject* values[2] = {0,0};
- if (unlikely(__pyx_kwds)) {
- Py_ssize_t kw_args;
- const Py_ssize_t pos_args = PyTuple_GET_SIZE(__pyx_args);
- switch (pos_args) {
- case 2: values[1] = PyTuple_GET_ITEM(__pyx_args, 1);
- CYTHON_FALLTHROUGH;
- case 1: values[0] = PyTuple_GET_ITEM(__pyx_args, 0);
- CYTHON_FALLTHROUGH;
- case 0: break;
- default: goto __pyx_L5_argtuple_error;
- }
- kw_args = PyDict_Size(__pyx_kwds);
- switch (pos_args) {
- case 0:
- if (likely((values[0] = __Pyx_PyDict_GetItemStr(__pyx_kwds, __pyx_n_s_self)) != 0)) kw_args--;
- else goto __pyx_L5_argtuple_error;
- CYTHON_FALLTHROUGH;
- case 1:
- if (likely((values[1] = __Pyx_PyDict_GetItemStr(__pyx_kwds, __pyx_n_s_p1)) != 0)) kw_args--;
- else {
- __Pyx_RaiseArgtupleInvalid("_lineTo", 1, 2, 2, 1); __PYX_ERR(0, 57, __pyx_L3_error)
- }
- }
- if (unlikely(kw_args > 0)) {
- if (unlikely(__Pyx_ParseOptionalKeywords(__pyx_kwds, __pyx_pyargnames, 0, values, pos_args, "_lineTo") < 0)) __PYX_ERR(0, 57, __pyx_L3_error)
- }
- } else if (PyTuple_GET_SIZE(__pyx_args) != 2) {
- goto __pyx_L5_argtuple_error;
- } else {
- values[0] = PyTuple_GET_ITEM(__pyx_args, 0);
- values[1] = PyTuple_GET_ITEM(__pyx_args, 1);
- }
- __pyx_v_self = values[0];
- __pyx_v_p1 = values[1];
- }
- goto __pyx_L4_argument_unpacking_done;
- __pyx_L5_argtuple_error:;
- __Pyx_RaiseArgtupleInvalid("_lineTo", 1, 2, 2, PyTuple_GET_SIZE(__pyx_args)); __PYX_ERR(0, 57, __pyx_L3_error)
- __pyx_L3_error:;
- __Pyx_AddTraceback("fontTools.pens.momentsPen.MomentsPen._lineTo", __pyx_clineno, __pyx_lineno, __pyx_filename);
- __Pyx_RefNannyFinishContext();
- return NULL;
- __pyx_L4_argument_unpacking_done:;
- __pyx_r = __pyx_pf_9fontTools_4pens_10momentsPen_10MomentsPen_8_lineTo(__pyx_self, __pyx_v_self, __pyx_v_p1);
-
- /* function exit code */
- __Pyx_RefNannyFinishContext();
- return __pyx_r;
-}
-
-static PyObject *__pyx_pf_9fontTools_4pens_10momentsPen_10MomentsPen_8_lineTo(CYTHON_UNUSED PyObject *__pyx_self, PyObject *__pyx_v_self, PyObject *__pyx_v_p1) {
- double __pyx_v_x1;
- double __pyx_v_y1;
- double __pyx_v_x0;
- double __pyx_v_y0;
- double __pyx_v_r12;
- double __pyx_v_r11;
- double __pyx_v_r10;
- double __pyx_v_r9;
- double __pyx_v_r8;
- double __pyx_v_r7;
- double __pyx_v_r6;
- double __pyx_v_r5;
- double __pyx_v_r4;
- double __pyx_v_r3;
- double __pyx_v_r2;
- double __pyx_v_r1;
- double __pyx_v_r0;
- PyObject *__pyx_r = NULL;
- __Pyx_RefNannyDeclarations
- PyObject *__pyx_t_1 = NULL;
- PyObject *__pyx_t_2 = NULL;
- PyObject *__pyx_t_3 = NULL;
- PyObject *__pyx_t_4 = NULL;
- PyObject *(*__pyx_t_5)(PyObject *);
- double __pyx_t_6;
- double __pyx_t_7;
- int __pyx_lineno = 0;
- const char *__pyx_filename = NULL;
- int __pyx_clineno = 0;
- __Pyx_RefNannySetupContext("_lineTo", 0);
-
- /* "fontTools/pens/momentsPen.py":58
- * @cython.locals(x1=cython.double, y1=cython.double)
- * def _lineTo(self, p1):
- * x0, y0 = self._getCurrentPoint() # <<<<<<<<<<<<<<
- * x1, y1 = p1
- *
- */
- __pyx_t_2 = __Pyx_PyObject_GetAttrStr(__pyx_v_self, __pyx_n_s_getCurrentPoint); if (unlikely(!__pyx_t_2)) __PYX_ERR(0, 58, __pyx_L1_error)
- __Pyx_GOTREF(__pyx_t_2);
- __pyx_t_3 = NULL;
- if (CYTHON_UNPACK_METHODS && likely(PyMethod_Check(__pyx_t_2))) {
- __pyx_t_3 = PyMethod_GET_SELF(__pyx_t_2);
- if (likely(__pyx_t_3)) {
- PyObject* function = PyMethod_GET_FUNCTION(__pyx_t_2);
- __Pyx_INCREF(__pyx_t_3);
- __Pyx_INCREF(function);
- __Pyx_DECREF_SET(__pyx_t_2, function);
- }
- }
- __pyx_t_1 = (__pyx_t_3) ? __Pyx_PyObject_CallOneArg(__pyx_t_2, __pyx_t_3) : __Pyx_PyObject_CallNoArg(__pyx_t_2);
- __Pyx_XDECREF(__pyx_t_3); __pyx_t_3 = 0;
- if (unlikely(!__pyx_t_1)) __PYX_ERR(0, 58, __pyx_L1_error)
- __Pyx_GOTREF(__pyx_t_1);
- __Pyx_DECREF(__pyx_t_2); __pyx_t_2 = 0;
- if ((likely(PyTuple_CheckExact(__pyx_t_1))) || (PyList_CheckExact(__pyx_t_1))) {
- PyObject* sequence = __pyx_t_1;
- Py_ssize_t size = __Pyx_PySequence_SIZE(sequence);
- if (unlikely(size != 2)) {
- if (size > 2) __Pyx_RaiseTooManyValuesError(2);
- else if (size >= 0) __Pyx_RaiseNeedMoreValuesError(size);
- __PYX_ERR(0, 58, __pyx_L1_error)
- }
- #if CYTHON_ASSUME_SAFE_MACROS && !CYTHON_AVOID_BORROWED_REFS
- if (likely(PyTuple_CheckExact(sequence))) {
- __pyx_t_2 = PyTuple_GET_ITEM(sequence, 0);
- __pyx_t_3 = PyTuple_GET_ITEM(sequence, 1);
- } else {
- __pyx_t_2 = PyList_GET_ITEM(sequence, 0);
- __pyx_t_3 = PyList_GET_ITEM(sequence, 1);
- }
- __Pyx_INCREF(__pyx_t_2);
- __Pyx_INCREF(__pyx_t_3);
- #else
- __pyx_t_2 = PySequence_ITEM(sequence, 0); if (unlikely(!__pyx_t_2)) __PYX_ERR(0, 58, __pyx_L1_error)
- __Pyx_GOTREF(__pyx_t_2);
- __pyx_t_3 = PySequence_ITEM(sequence, 1); if (unlikely(!__pyx_t_3)) __PYX_ERR(0, 58, __pyx_L1_error)
- __Pyx_GOTREF(__pyx_t_3);
- #endif
- __Pyx_DECREF(__pyx_t_1); __pyx_t_1 = 0;
- } else {
- Py_ssize_t index = -1;
- __pyx_t_4 = PyObject_GetIter(__pyx_t_1); if (unlikely(!__pyx_t_4)) __PYX_ERR(0, 58, __pyx_L1_error)
- __Pyx_GOTREF(__pyx_t_4);
- __Pyx_DECREF(__pyx_t_1); __pyx_t_1 = 0;
- __pyx_t_5 = Py_TYPE(__pyx_t_4)->tp_iternext;
- index = 0; __pyx_t_2 = __pyx_t_5(__pyx_t_4); if (unlikely(!__pyx_t_2)) goto __pyx_L3_unpacking_failed;
- __Pyx_GOTREF(__pyx_t_2);
- index = 1; __pyx_t_3 = __pyx_t_5(__pyx_t_4); if (unlikely(!__pyx_t_3)) goto __pyx_L3_unpacking_failed;
- __Pyx_GOTREF(__pyx_t_3);
- if (__Pyx_IternextUnpackEndCheck(__pyx_t_5(__pyx_t_4), 2) < 0) __PYX_ERR(0, 58, __pyx_L1_error)
- __pyx_t_5 = NULL;
- __Pyx_DECREF(__pyx_t_4); __pyx_t_4 = 0;
- goto __pyx_L4_unpacking_done;
- __pyx_L3_unpacking_failed:;
- __Pyx_DECREF(__pyx_t_4); __pyx_t_4 = 0;
- __pyx_t_5 = NULL;
- if (__Pyx_IterFinish() == 0) __Pyx_RaiseNeedMoreValuesError(index);
- __PYX_ERR(0, 58, __pyx_L1_error)
- __pyx_L4_unpacking_done:;
- }
- __pyx_t_6 = __pyx_PyFloat_AsDouble(__pyx_t_2); if (unlikely((__pyx_t_6 == (double)-1) && PyErr_Occurred())) __PYX_ERR(0, 58, __pyx_L1_error)
- __Pyx_DECREF(__pyx_t_2); __pyx_t_2 = 0;
- __pyx_t_7 = __pyx_PyFloat_AsDouble(__pyx_t_3); if (unlikely((__pyx_t_7 == (double)-1) && PyErr_Occurred())) __PYX_ERR(0, 58, __pyx_L1_error)
- __Pyx_DECREF(__pyx_t_3); __pyx_t_3 = 0;
- __pyx_v_x0 = __pyx_t_6;
- __pyx_v_y0 = __pyx_t_7;
-
- /* "fontTools/pens/momentsPen.py":59
- * def _lineTo(self, p1):
- * x0, y0 = self._getCurrentPoint()
- * x1, y1 = p1 # <<<<<<<<<<<<<<
- *
- * r0 = x1 * y0
- */
- if ((likely(PyTuple_CheckExact(__pyx_v_p1))) || (PyList_CheckExact(__pyx_v_p1))) {
- PyObject* sequence = __pyx_v_p1;
- Py_ssize_t size = __Pyx_PySequence_SIZE(sequence);
- if (unlikely(size != 2)) {
- if (size > 2) __Pyx_RaiseTooManyValuesError(2);
- else if (size >= 0) __Pyx_RaiseNeedMoreValuesError(size);
- __PYX_ERR(0, 59, __pyx_L1_error)
- }
- #if CYTHON_ASSUME_SAFE_MACROS && !CYTHON_AVOID_BORROWED_REFS
- if (likely(PyTuple_CheckExact(sequence))) {
- __pyx_t_1 = PyTuple_GET_ITEM(sequence, 0);
- __pyx_t_3 = PyTuple_GET_ITEM(sequence, 1);
- } else {
- __pyx_t_1 = PyList_GET_ITEM(sequence, 0);
- __pyx_t_3 = PyList_GET_ITEM(sequence, 1);
- }
- __Pyx_INCREF(__pyx_t_1);
- __Pyx_INCREF(__pyx_t_3);
- #else
- __pyx_t_1 = PySequence_ITEM(sequence, 0); if (unlikely(!__pyx_t_1)) __PYX_ERR(0, 59, __pyx_L1_error)
- __Pyx_GOTREF(__pyx_t_1);
- __pyx_t_3 = PySequence_ITEM(sequence, 1); if (unlikely(!__pyx_t_3)) __PYX_ERR(0, 59, __pyx_L1_error)
- __Pyx_GOTREF(__pyx_t_3);
- #endif
- } else {
- Py_ssize_t index = -1;
- __pyx_t_2 = PyObject_GetIter(__pyx_v_p1); if (unlikely(!__pyx_t_2)) __PYX_ERR(0, 59, __pyx_L1_error)
- __Pyx_GOTREF(__pyx_t_2);
- __pyx_t_5 = Py_TYPE(__pyx_t_2)->tp_iternext;
- index = 0; __pyx_t_1 = __pyx_t_5(__pyx_t_2); if (unlikely(!__pyx_t_1)) goto __pyx_L5_unpacking_failed;
- __Pyx_GOTREF(__pyx_t_1);
- index = 1; __pyx_t_3 = __pyx_t_5(__pyx_t_2); if (unlikely(!__pyx_t_3)) goto __pyx_L5_unpacking_failed;
- __Pyx_GOTREF(__pyx_t_3);
- if (__Pyx_IternextUnpackEndCheck(__pyx_t_5(__pyx_t_2), 2) < 0) __PYX_ERR(0, 59, __pyx_L1_error)
- __pyx_t_5 = NULL;
- __Pyx_DECREF(__pyx_t_2); __pyx_t_2 = 0;
- goto __pyx_L6_unpacking_done;
- __pyx_L5_unpacking_failed:;
- __Pyx_DECREF(__pyx_t_2); __pyx_t_2 = 0;
- __pyx_t_5 = NULL;
- if (__Pyx_IterFinish() == 0) __Pyx_RaiseNeedMoreValuesError(index);
- __PYX_ERR(0, 59, __pyx_L1_error)
- __pyx_L6_unpacking_done:;
- }
- __pyx_t_7 = __pyx_PyFloat_AsDouble(__pyx_t_1); if (unlikely((__pyx_t_7 == (double)-1) && PyErr_Occurred())) __PYX_ERR(0, 59, __pyx_L1_error)
- __Pyx_DECREF(__pyx_t_1); __pyx_t_1 = 0;
- __pyx_t_6 = __pyx_PyFloat_AsDouble(__pyx_t_3); if (unlikely((__pyx_t_6 == (double)-1) && PyErr_Occurred())) __PYX_ERR(0, 59, __pyx_L1_error)
- __Pyx_DECREF(__pyx_t_3); __pyx_t_3 = 0;
- __pyx_v_x1 = __pyx_t_7;
- __pyx_v_y1 = __pyx_t_6;
-
- /* "fontTools/pens/momentsPen.py":61
- * x1, y1 = p1
- *
- * r0 = x1 * y0 # <<<<<<<<<<<<<<
- * r1 = x1 * y1
- * r2 = x1**2
- */
- __pyx_v_r0 = (__pyx_v_x1 * __pyx_v_y0);
-
- /* "fontTools/pens/momentsPen.py":62
- *
- * r0 = x1 * y0
- * r1 = x1 * y1 # <<<<<<<<<<<<<<
- * r2 = x1**2
- * r3 = r2 * y1
- */
- __pyx_v_r1 = (__pyx_v_x1 * __pyx_v_y1);
-
- /* "fontTools/pens/momentsPen.py":63
- * r0 = x1 * y0
- * r1 = x1 * y1
- * r2 = x1**2 # <<<<<<<<<<<<<<
- * r3 = r2 * y1
- * r4 = y0 - y1
- */
- __pyx_v_r2 = pow(__pyx_v_x1, 2.0);
-
- /* "fontTools/pens/momentsPen.py":64
- * r1 = x1 * y1
- * r2 = x1**2
- * r3 = r2 * y1 # <<<<<<<<<<<<<<
- * r4 = y0 - y1
- * r5 = r4 * x0
- */
- __pyx_v_r3 = (__pyx_v_r2 * __pyx_v_y1);
-
- /* "fontTools/pens/momentsPen.py":65
- * r2 = x1**2
- * r3 = r2 * y1
- * r4 = y0 - y1 # <<<<<<<<<<<<<<
- * r5 = r4 * x0
- * r6 = x0**2
- */
- __pyx_v_r4 = (__pyx_v_y0 - __pyx_v_y1);
-
- /* "fontTools/pens/momentsPen.py":66
- * r3 = r2 * y1
- * r4 = y0 - y1
- * r5 = r4 * x0 # <<<<<<<<<<<<<<
- * r6 = x0**2
- * r7 = 2 * y0
- */
- __pyx_v_r5 = (__pyx_v_r4 * __pyx_v_x0);
-
- /* "fontTools/pens/momentsPen.py":67
- * r4 = y0 - y1
- * r5 = r4 * x0
- * r6 = x0**2 # <<<<<<<<<<<<<<
- * r7 = 2 * y0
- * r8 = y0**2
- */
- __pyx_v_r6 = pow(__pyx_v_x0, 2.0);
-
- /* "fontTools/pens/momentsPen.py":68
- * r5 = r4 * x0
- * r6 = x0**2
- * r7 = 2 * y0 # <<<<<<<<<<<<<<
- * r8 = y0**2
- * r9 = y1**2
- */
- __pyx_v_r7 = (2.0 * __pyx_v_y0);
-
- /* "fontTools/pens/momentsPen.py":69
- * r6 = x0**2
- * r7 = 2 * y0
- * r8 = y0**2 # <<<<<<<<<<<<<<
- * r9 = y1**2
- * r10 = x1**3
- */
- __pyx_v_r8 = pow(__pyx_v_y0, 2.0);
-
- /* "fontTools/pens/momentsPen.py":70
- * r7 = 2 * y0
- * r8 = y0**2
- * r9 = y1**2 # <<<<<<<<<<<<<<
- * r10 = x1**3
- * r11 = y0**3
- */
- __pyx_v_r9 = pow(__pyx_v_y1, 2.0);
-
- /* "fontTools/pens/momentsPen.py":71
- * r8 = y0**2
- * r9 = y1**2
- * r10 = x1**3 # <<<<<<<<<<<<<<
- * r11 = y0**3
- * r12 = y1**3
- */
- __pyx_v_r10 = pow(__pyx_v_x1, 3.0);
-
- /* "fontTools/pens/momentsPen.py":72
- * r9 = y1**2
- * r10 = x1**3
- * r11 = y0**3 # <<<<<<<<<<<<<<
- * r12 = y1**3
- *
- */
- __pyx_v_r11 = pow(__pyx_v_y0, 3.0);
-
- /* "fontTools/pens/momentsPen.py":73
- * r10 = x1**3
- * r11 = y0**3
- * r12 = y1**3 # <<<<<<<<<<<<<<
- *
- * self.area += -r0 / 2 - r1 / 2 + x0 * (y0 + y1) / 2
- */
- __pyx_v_r12 = pow(__pyx_v_y1, 3.0);
-
- /* "fontTools/pens/momentsPen.py":75
- * r12 = y1**3
- *
- * self.area += -r0 / 2 - r1 / 2 + x0 * (y0 + y1) / 2 # <<<<<<<<<<<<<<
- * self.momentX += -r2 * y0 / 6 - r3 / 3 - r5 * x1 / 6 + r6 * (r7 + y1) / 6
- * self.momentY += (
- */
- __pyx_t_3 = __Pyx_PyObject_GetAttrStr(__pyx_v_self, __pyx_n_s_area); if (unlikely(!__pyx_t_3)) __PYX_ERR(0, 75, __pyx_L1_error)
- __Pyx_GOTREF(__pyx_t_3);
- __pyx_t_1 = PyFloat_FromDouble(((((-__pyx_v_r0) / 2.0) - (__pyx_v_r1 / 2.0)) + ((__pyx_v_x0 * (__pyx_v_y0 + __pyx_v_y1)) / 2.0))); if (unlikely(!__pyx_t_1)) __PYX_ERR(0, 75, __pyx_L1_error)
- __Pyx_GOTREF(__pyx_t_1);
- __pyx_t_2 = PyNumber_InPlaceAdd(__pyx_t_3, __pyx_t_1); if (unlikely(!__pyx_t_2)) __PYX_ERR(0, 75, __pyx_L1_error)
- __Pyx_GOTREF(__pyx_t_2);
- __Pyx_DECREF(__pyx_t_3); __pyx_t_3 = 0;
- __Pyx_DECREF(__pyx_t_1); __pyx_t_1 = 0;
- if (__Pyx_PyObject_SetAttrStr(__pyx_v_self, __pyx_n_s_area, __pyx_t_2) < 0) __PYX_ERR(0, 75, __pyx_L1_error)
- __Pyx_DECREF(__pyx_t_2); __pyx_t_2 = 0;
-
- /* "fontTools/pens/momentsPen.py":76
- *
- * self.area += -r0 / 2 - r1 / 2 + x0 * (y0 + y1) / 2
- * self.momentX += -r2 * y0 / 6 - r3 / 3 - r5 * x1 / 6 + r6 * (r7 + y1) / 6 # <<<<<<<<<<<<<<
- * self.momentY += (
- * -r0 * y1 / 6 - r8 * x1 / 6 - r9 * x1 / 6 + x0 * (r8 + r9 + y0 * y1) / 6
- */
- __pyx_t_2 = __Pyx_PyObject_GetAttrStr(__pyx_v_self, __pyx_n_s_momentX); if (unlikely(!__pyx_t_2)) __PYX_ERR(0, 76, __pyx_L1_error)
- __Pyx_GOTREF(__pyx_t_2);
- __pyx_t_1 = PyFloat_FromDouble(((((((-__pyx_v_r2) * __pyx_v_y0) / 6.0) - (__pyx_v_r3 / 3.0)) - ((__pyx_v_r5 * __pyx_v_x1) / 6.0)) + ((__pyx_v_r6 * (__pyx_v_r7 + __pyx_v_y1)) / 6.0))); if (unlikely(!__pyx_t_1)) __PYX_ERR(0, 76, __pyx_L1_error)
- __Pyx_GOTREF(__pyx_t_1);
- __pyx_t_3 = PyNumber_InPlaceAdd(__pyx_t_2, __pyx_t_1); if (unlikely(!__pyx_t_3)) __PYX_ERR(0, 76, __pyx_L1_error)
- __Pyx_GOTREF(__pyx_t_3);
- __Pyx_DECREF(__pyx_t_2); __pyx_t_2 = 0;
- __Pyx_DECREF(__pyx_t_1); __pyx_t_1 = 0;
- if (__Pyx_PyObject_SetAttrStr(__pyx_v_self, __pyx_n_s_momentX, __pyx_t_3) < 0) __PYX_ERR(0, 76, __pyx_L1_error)
- __Pyx_DECREF(__pyx_t_3); __pyx_t_3 = 0;
-
- /* "fontTools/pens/momentsPen.py":77
- * self.area += -r0 / 2 - r1 / 2 + x0 * (y0 + y1) / 2
- * self.momentX += -r2 * y0 / 6 - r3 / 3 - r5 * x1 / 6 + r6 * (r7 + y1) / 6
- * self.momentY += ( # <<<<<<<<<<<<<<
- * -r0 * y1 / 6 - r8 * x1 / 6 - r9 * x1 / 6 + x0 * (r8 + r9 + y0 * y1) / 6
- * )
- */
- __pyx_t_3 = __Pyx_PyObject_GetAttrStr(__pyx_v_self, __pyx_n_s_momentY); if (unlikely(!__pyx_t_3)) __PYX_ERR(0, 77, __pyx_L1_error)
- __Pyx_GOTREF(__pyx_t_3);
-
- /* "fontTools/pens/momentsPen.py":78
- * self.momentX += -r2 * y0 / 6 - r3 / 3 - r5 * x1 / 6 + r6 * (r7 + y1) / 6
- * self.momentY += (
- * -r0 * y1 / 6 - r8 * x1 / 6 - r9 * x1 / 6 + x0 * (r8 + r9 + y0 * y1) / 6 # <<<<<<<<<<<<<<
- * )
- * self.momentXX += (
- */
- __pyx_t_1 = PyFloat_FromDouble(((((((-__pyx_v_r0) * __pyx_v_y1) / 6.0) - ((__pyx_v_r8 * __pyx_v_x1) / 6.0)) - ((__pyx_v_r9 * __pyx_v_x1) / 6.0)) + ((__pyx_v_x0 * ((__pyx_v_r8 + __pyx_v_r9) + (__pyx_v_y0 * __pyx_v_y1))) / 6.0))); if (unlikely(!__pyx_t_1)) __PYX_ERR(0, 78, __pyx_L1_error)
- __Pyx_GOTREF(__pyx_t_1);
-
- /* "fontTools/pens/momentsPen.py":77
- * self.area += -r0 / 2 - r1 / 2 + x0 * (y0 + y1) / 2
- * self.momentX += -r2 * y0 / 6 - r3 / 3 - r5 * x1 / 6 + r6 * (r7 + y1) / 6
- * self.momentY += ( # <<<<<<<<<<<<<<
- * -r0 * y1 / 6 - r8 * x1 / 6 - r9 * x1 / 6 + x0 * (r8 + r9 + y0 * y1) / 6
- * )
- */
- __pyx_t_2 = PyNumber_InPlaceAdd(__pyx_t_3, __pyx_t_1); if (unlikely(!__pyx_t_2)) __PYX_ERR(0, 77, __pyx_L1_error)
- __Pyx_GOTREF(__pyx_t_2);
- __Pyx_DECREF(__pyx_t_3); __pyx_t_3 = 0;
- __Pyx_DECREF(__pyx_t_1); __pyx_t_1 = 0;
- if (__Pyx_PyObject_SetAttrStr(__pyx_v_self, __pyx_n_s_momentY, __pyx_t_2) < 0) __PYX_ERR(0, 77, __pyx_L1_error)
- __Pyx_DECREF(__pyx_t_2); __pyx_t_2 = 0;
-
- /* "fontTools/pens/momentsPen.py":80
- * -r0 * y1 / 6 - r8 * x1 / 6 - r9 * x1 / 6 + x0 * (r8 + r9 + y0 * y1) / 6
- * )
- * self.momentXX += ( # <<<<<<<<<<<<<<
- * -r10 * y0 / 12
- * - r10 * y1 / 4
- */
- __pyx_t_2 = __Pyx_PyObject_GetAttrStr(__pyx_v_self, __pyx_n_s_momentXX); if (unlikely(!__pyx_t_2)) __PYX_ERR(0, 80, __pyx_L1_error)
- __Pyx_GOTREF(__pyx_t_2);
-
- /* "fontTools/pens/momentsPen.py":85
- * - r2 * r5 / 12
- * - r4 * r6 * x1 / 12
- * + x0**3 * (3 * y0 + y1) / 12 # <<<<<<<<<<<<<<
- * )
- * self.momentXY += (
- */
- __pyx_t_1 = PyFloat_FromDouble((((((((-__pyx_v_r10) * __pyx_v_y0) / 12.0) - ((__pyx_v_r10 * __pyx_v_y1) / 4.0)) - ((__pyx_v_r2 * __pyx_v_r5) / 12.0)) - (((__pyx_v_r4 * __pyx_v_r6) * __pyx_v_x1) / 12.0)) + ((pow(__pyx_v_x0, 3.0) * ((3.0 * __pyx_v_y0) + __pyx_v_y1)) / 12.0))); if (unlikely(!__pyx_t_1)) __PYX_ERR(0, 85, __pyx_L1_error)
- __Pyx_GOTREF(__pyx_t_1);
-
- /* "fontTools/pens/momentsPen.py":80
- * -r0 * y1 / 6 - r8 * x1 / 6 - r9 * x1 / 6 + x0 * (r8 + r9 + y0 * y1) / 6
- * )
- * self.momentXX += ( # <<<<<<<<<<<<<<
- * -r10 * y0 / 12
- * - r10 * y1 / 4
- */
- __pyx_t_3 = PyNumber_InPlaceAdd(__pyx_t_2, __pyx_t_1); if (unlikely(!__pyx_t_3)) __PYX_ERR(0, 80, __pyx_L1_error)
- __Pyx_GOTREF(__pyx_t_3);
- __Pyx_DECREF(__pyx_t_2); __pyx_t_2 = 0;
- __Pyx_DECREF(__pyx_t_1); __pyx_t_1 = 0;
- if (__Pyx_PyObject_SetAttrStr(__pyx_v_self, __pyx_n_s_momentXX, __pyx_t_3) < 0) __PYX_ERR(0, 80, __pyx_L1_error)
- __Pyx_DECREF(__pyx_t_3); __pyx_t_3 = 0;
-
- /* "fontTools/pens/momentsPen.py":87
- * + x0**3 * (3 * y0 + y1) / 12
- * )
- * self.momentXY += ( # <<<<<<<<<<<<<<
- * -r2 * r8 / 24
- * - r2 * r9 / 8
- */
- __pyx_t_3 = __Pyx_PyObject_GetAttrStr(__pyx_v_self, __pyx_n_s_momentXY); if (unlikely(!__pyx_t_3)) __PYX_ERR(0, 87, __pyx_L1_error)
- __Pyx_GOTREF(__pyx_t_3);
-
- /* "fontTools/pens/momentsPen.py":92
- * - r3 * r7 / 24
- * + r6 * (r7 * y1 + 3 * r8 + r9) / 24
- * - x0 * x1 * (r8 - r9) / 12 # <<<<<<<<<<<<<<
- * )
- * self.momentYY += (
- */
- __pyx_t_1 = PyFloat_FromDouble((((((((-__pyx_v_r2) * __pyx_v_r8) / 24.0) - ((__pyx_v_r2 * __pyx_v_r9) / 8.0)) - ((__pyx_v_r3 * __pyx_v_r7) / 24.0)) + ((__pyx_v_r6 * (((__pyx_v_r7 * __pyx_v_y1) + (3.0 * __pyx_v_r8)) + __pyx_v_r9)) / 24.0)) - (((__pyx_v_x0 * __pyx_v_x1) * (__pyx_v_r8 - __pyx_v_r9)) / 12.0))); if (unlikely(!__pyx_t_1)) __PYX_ERR(0, 92, __pyx_L1_error)
- __Pyx_GOTREF(__pyx_t_1);
-
- /* "fontTools/pens/momentsPen.py":87
- * + x0**3 * (3 * y0 + y1) / 12
- * )
- * self.momentXY += ( # <<<<<<<<<<<<<<
- * -r2 * r8 / 24
- * - r2 * r9 / 8
- */
- __pyx_t_2 = PyNumber_InPlaceAdd(__pyx_t_3, __pyx_t_1); if (unlikely(!__pyx_t_2)) __PYX_ERR(0, 87, __pyx_L1_error)
- __Pyx_GOTREF(__pyx_t_2);
- __Pyx_DECREF(__pyx_t_3); __pyx_t_3 = 0;
- __Pyx_DECREF(__pyx_t_1); __pyx_t_1 = 0;
- if (__Pyx_PyObject_SetAttrStr(__pyx_v_self, __pyx_n_s_momentXY, __pyx_t_2) < 0) __PYX_ERR(0, 87, __pyx_L1_error)
- __Pyx_DECREF(__pyx_t_2); __pyx_t_2 = 0;
-
- /* "fontTools/pens/momentsPen.py":94
- * - x0 * x1 * (r8 - r9) / 12
- * )
- * self.momentYY += ( # <<<<<<<<<<<<<<
- * -r0 * r9 / 12
- * - r1 * r8 / 12
- */
- __pyx_t_2 = __Pyx_PyObject_GetAttrStr(__pyx_v_self, __pyx_n_s_momentYY); if (unlikely(!__pyx_t_2)) __PYX_ERR(0, 94, __pyx_L1_error)
- __Pyx_GOTREF(__pyx_t_2);
-
- /* "fontTools/pens/momentsPen.py":99
- * - r11 * x1 / 12
- * - r12 * x1 / 12
- * + x0 * (r11 + r12 + r8 * y1 + r9 * y0) / 12 # <<<<<<<<<<<<<<
- * )
- *
- */
- __pyx_t_1 = PyFloat_FromDouble((((((((-__pyx_v_r0) * __pyx_v_r9) / 12.0) - ((__pyx_v_r1 * __pyx_v_r8) / 12.0)) - ((__pyx_v_r11 * __pyx_v_x1) / 12.0)) - ((__pyx_v_r12 * __pyx_v_x1) / 12.0)) + ((__pyx_v_x0 * (((__pyx_v_r11 + __pyx_v_r12) + (__pyx_v_r8 * __pyx_v_y1)) + (__pyx_v_r9 * __pyx_v_y0))) / 12.0))); if (unlikely(!__pyx_t_1)) __PYX_ERR(0, 99, __pyx_L1_error)
- __Pyx_GOTREF(__pyx_t_1);
-
- /* "fontTools/pens/momentsPen.py":94
- * - x0 * x1 * (r8 - r9) / 12
- * )
- * self.momentYY += ( # <<<<<<<<<<<<<<
- * -r0 * r9 / 12
- * - r1 * r8 / 12
- */
- __pyx_t_3 = PyNumber_InPlaceAdd(__pyx_t_2, __pyx_t_1); if (unlikely(!__pyx_t_3)) __PYX_ERR(0, 94, __pyx_L1_error)
- __Pyx_GOTREF(__pyx_t_3);
- __Pyx_DECREF(__pyx_t_2); __pyx_t_2 = 0;
- __Pyx_DECREF(__pyx_t_1); __pyx_t_1 = 0;
- if (__Pyx_PyObject_SetAttrStr(__pyx_v_self, __pyx_n_s_momentYY, __pyx_t_3) < 0) __PYX_ERR(0, 94, __pyx_L1_error)
- __Pyx_DECREF(__pyx_t_3); __pyx_t_3 = 0;
-
- /* "fontTools/pens/momentsPen.py":57
- * @cython.locals(x0=cython.double, y0=cython.double)
- * @cython.locals(x1=cython.double, y1=cython.double)
- * def _lineTo(self, p1): # <<<<<<<<<<<<<<
- * x0, y0 = self._getCurrentPoint()
- * x1, y1 = p1
- */
-
- /* function exit code */
- __pyx_r = Py_None; __Pyx_INCREF(Py_None);
- goto __pyx_L0;
- __pyx_L1_error:;
- __Pyx_XDECREF(__pyx_t_1);
- __Pyx_XDECREF(__pyx_t_2);
- __Pyx_XDECREF(__pyx_t_3);
- __Pyx_XDECREF(__pyx_t_4);
- __Pyx_AddTraceback("fontTools.pens.momentsPen.MomentsPen._lineTo", __pyx_clineno, __pyx_lineno, __pyx_filename);
- __pyx_r = NULL;
- __pyx_L0:;
- __Pyx_XGIVEREF(__pyx_r);
- __Pyx_RefNannyFinishContext();
- return __pyx_r;
-}
-
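-/* A self-contained sketch of what the generated _lineTo body above
- * computes, assuming the hypothetical MomentsSketch struct introduced
- * after __init__; the function name is likewise hypothetical. The r0..r12
- * temporaries and the six update formulas are transcribed directly from
- * the Python source quoted in the comments above. */
-static void moments_sketch_line_to(MomentsSketch *m,
-                                   double x0, double y0,   /* current point */
-                                   double x1, double y1) { /* segment end   */
-    const double r0  = x1 * y0;
-    const double r1  = x1 * y1;
-    const double r2  = x1 * x1;
-    const double r3  = r2 * y1;
-    const double r4  = y0 - y1;
-    const double r5  = r4 * x0;
-    const double r6  = x0 * x0;
-    const double r7  = 2.0 * y0;
-    const double r8  = y0 * y0;
-    const double r9  = y1 * y1;
-    const double r10 = x1 * x1 * x1;
-    const double r11 = y0 * y0 * y0;
-    const double r12 = y1 * y1 * y1;
-
-    m->area     += -r0 / 2.0 - r1 / 2.0 + x0 * (y0 + y1) / 2.0;
-    m->momentX  += -r2 * y0 / 6.0 - r3 / 3.0 - r5 * x1 / 6.0
-                   + r6 * (r7 + y1) / 6.0;
-    m->momentY  += -r0 * y1 / 6.0 - r8 * x1 / 6.0 - r9 * x1 / 6.0
-                   + x0 * (r8 + r9 + y0 * y1) / 6.0;
-    m->momentXX += -r10 * y0 / 12.0 - r10 * y1 / 4.0 - r2 * r5 / 12.0
-                   - r4 * r6 * x1 / 12.0
-                   + x0 * x0 * x0 * (3.0 * y0 + y1) / 12.0;
-    m->momentXY += -r2 * r8 / 24.0 - r2 * r9 / 8.0 - r3 * r7 / 24.0
-                   + r6 * (r7 * y1 + 3.0 * r8 + r9) / 24.0
-                   - x0 * x1 * (r8 - r9) / 12.0;
-    m->momentYY += -r0 * r9 / 12.0 - r1 * r8 / 12.0 - r11 * x1 / 12.0
-                   - r12 * x1 / 12.0
-                   + x0 * (r11 + r12 + r8 * y1 + r9 * y0) / 12.0;
-}
-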
-/* "fontTools/pens/momentsPen.py":159
- * @cython.locals(x1=cython.double, y1=cython.double)
- * @cython.locals(x2=cython.double, y2=cython.double)
- * def _qCurveToOne(self, p1, p2): # <<<<<<<<<<<<<<
- * x0, y0 = self._getCurrentPoint()
- * x1, y1 = p1
- */
-
-/* Python wrapper */
-static PyObject *__pyx_pw_9fontTools_4pens_10momentsPen_10MomentsPen_11_qCurveToOne(PyObject *__pyx_self, PyObject *__pyx_args, PyObject *__pyx_kwds); /*proto*/
-static char __pyx_doc_9fontTools_4pens_10momentsPen_10MomentsPen_10_qCurveToOne[] = "MomentsPen._qCurveToOne(self, p1, p2)";
-static PyMethodDef __pyx_mdef_9fontTools_4pens_10momentsPen_10MomentsPen_11_qCurveToOne = {"_qCurveToOne", (PyCFunction)(void*)(PyCFunctionWithKeywords)__pyx_pw_9fontTools_4pens_10momentsPen_10MomentsPen_11_qCurveToOne, METH_VARARGS|METH_KEYWORDS, __pyx_doc_9fontTools_4pens_10momentsPen_10MomentsPen_10_qCurveToOne};
-static PyObject *__pyx_pw_9fontTools_4pens_10momentsPen_10MomentsPen_11_qCurveToOne(PyObject *__pyx_self, PyObject *__pyx_args, PyObject *__pyx_kwds) {
- PyObject *__pyx_v_self = 0;
- PyObject *__pyx_v_p1 = 0;
- PyObject *__pyx_v_p2 = 0;
- int __pyx_lineno = 0;
- const char *__pyx_filename = NULL;
- int __pyx_clineno = 0;
- PyObject *__pyx_r = 0;
- __Pyx_RefNannyDeclarations
- __Pyx_RefNannySetupContext("_qCurveToOne (wrapper)", 0);
- {
- static PyObject **__pyx_pyargnames[] = {&__pyx_n_s_self,&__pyx_n_s_p1,&__pyx_n_s_p2,0};
- PyObject* values[3] = {0,0,0};
- if (unlikely(__pyx_kwds)) {
- Py_ssize_t kw_args;
- const Py_ssize_t pos_args = PyTuple_GET_SIZE(__pyx_args);
- switch (pos_args) {
- case 3: values[2] = PyTuple_GET_ITEM(__pyx_args, 2);
- CYTHON_FALLTHROUGH;
- case 2: values[1] = PyTuple_GET_ITEM(__pyx_args, 1);
- CYTHON_FALLTHROUGH;
- case 1: values[0] = PyTuple_GET_ITEM(__pyx_args, 0);
- CYTHON_FALLTHROUGH;
- case 0: break;
- default: goto __pyx_L5_argtuple_error;
- }
- kw_args = PyDict_Size(__pyx_kwds);
- switch (pos_args) {
- case 0:
- if (likely((values[0] = __Pyx_PyDict_GetItemStr(__pyx_kwds, __pyx_n_s_self)) != 0)) kw_args--;
- else goto __pyx_L5_argtuple_error;
- CYTHON_FALLTHROUGH;
- case 1:
- if (likely((values[1] = __Pyx_PyDict_GetItemStr(__pyx_kwds, __pyx_n_s_p1)) != 0)) kw_args--;
- else {
- __Pyx_RaiseArgtupleInvalid("_qCurveToOne", 1, 3, 3, 1); __PYX_ERR(0, 159, __pyx_L3_error)
- }
- CYTHON_FALLTHROUGH;
- case 2:
- if (likely((values[2] = __Pyx_PyDict_GetItemStr(__pyx_kwds, __pyx_n_s_p2)) != 0)) kw_args--;
- else {
- __Pyx_RaiseArgtupleInvalid("_qCurveToOne", 1, 3, 3, 2); __PYX_ERR(0, 159, __pyx_L3_error)
- }
- }
- if (unlikely(kw_args > 0)) {
- if (unlikely(__Pyx_ParseOptionalKeywords(__pyx_kwds, __pyx_pyargnames, 0, values, pos_args, "_qCurveToOne") < 0)) __PYX_ERR(0, 159, __pyx_L3_error)
- }
- } else if (PyTuple_GET_SIZE(__pyx_args) != 3) {
- goto __pyx_L5_argtuple_error;
- } else {
- values[0] = PyTuple_GET_ITEM(__pyx_args, 0);
- values[1] = PyTuple_GET_ITEM(__pyx_args, 1);
- values[2] = PyTuple_GET_ITEM(__pyx_args, 2);
- }
- __pyx_v_self = values[0];
- __pyx_v_p1 = values[1];
- __pyx_v_p2 = values[2];
- }
- goto __pyx_L4_argument_unpacking_done;
- __pyx_L5_argtuple_error:;
- __Pyx_RaiseArgtupleInvalid("_qCurveToOne", 1, 3, 3, PyTuple_GET_SIZE(__pyx_args)); __PYX_ERR(0, 159, __pyx_L3_error)
- __pyx_L3_error:;
- __Pyx_AddTraceback("fontTools.pens.momentsPen.MomentsPen._qCurveToOne", __pyx_clineno, __pyx_lineno, __pyx_filename);
- __Pyx_RefNannyFinishContext();
- return NULL;
- __pyx_L4_argument_unpacking_done:;
- __pyx_r = __pyx_pf_9fontTools_4pens_10momentsPen_10MomentsPen_10_qCurveToOne(__pyx_self, __pyx_v_self, __pyx_v_p1, __pyx_v_p2);
-
- /* function exit code */
- __Pyx_RefNannyFinishContext();
- return __pyx_r;
-}
-
-static PyObject *__pyx_pf_9fontTools_4pens_10momentsPen_10MomentsPen_10_qCurveToOne(CYTHON_UNUSED PyObject *__pyx_self, PyObject *__pyx_v_self, PyObject *__pyx_v_p1, PyObject *__pyx_v_p2) {
- double __pyx_v_x2;
- double __pyx_v_y2;
- double __pyx_v_x1;
- double __pyx_v_y1;
- double __pyx_v_x0;
- double __pyx_v_y0;
- double __pyx_v_r53;
- double __pyx_v_r52;
- double __pyx_v_r51;
- double __pyx_v_r50;
- double __pyx_v_r49;
- double __pyx_v_r48;
- double __pyx_v_r47;
- double __pyx_v_r46;
- double __pyx_v_r45;
- double __pyx_v_r44;
- double __pyx_v_r43;
- double __pyx_v_r42;
- double __pyx_v_r41;
- double __pyx_v_r40;
- double __pyx_v_r39;
- double __pyx_v_r38;
- double __pyx_v_r37;
- double __pyx_v_r36;
- double __pyx_v_r35;
- double __pyx_v_r34;
- double __pyx_v_r33;
- double __pyx_v_r32;
- double __pyx_v_r31;
- double __pyx_v_r30;
- double __pyx_v_r29;
- double __pyx_v_r28;
- double __pyx_v_r27;
- double __pyx_v_r26;
- double __pyx_v_r25;
- double __pyx_v_r24;
- double __pyx_v_r23;
- double __pyx_v_r22;
- double __pyx_v_r21;
- double __pyx_v_r20;
- double __pyx_v_r19;
- double __pyx_v_r18;
- double __pyx_v_r17;
- double __pyx_v_r16;
- double __pyx_v_r15;
- double __pyx_v_r14;
- double __pyx_v_r13;
- double __pyx_v_r12;
- double __pyx_v_r11;
- double __pyx_v_r10;
- double __pyx_v_r9;
- double __pyx_v_r8;
- double __pyx_v_r7;
- double __pyx_v_r6;
- double __pyx_v_r5;
- double __pyx_v_r4;
- double __pyx_v_r3;
- double __pyx_v_r2;
- double __pyx_v_r1;
- double __pyx_v_r0;
- PyObject *__pyx_r = NULL;
- __Pyx_RefNannyDeclarations
- PyObject *__pyx_t_1 = NULL;
- PyObject *__pyx_t_2 = NULL;
- PyObject *__pyx_t_3 = NULL;
- PyObject *__pyx_t_4 = NULL;
- PyObject *(*__pyx_t_5)(PyObject *);
- double __pyx_t_6;
- double __pyx_t_7;
- int __pyx_lineno = 0;
- const char *__pyx_filename = NULL;
- int __pyx_clineno = 0;
- __Pyx_RefNannySetupContext("_qCurveToOne", 0);
-
- /* "fontTools/pens/momentsPen.py":160
- * @cython.locals(x2=cython.double, y2=cython.double)
- * def _qCurveToOne(self, p1, p2):
- * x0, y0 = self._getCurrentPoint() # <<<<<<<<<<<<<<
- * x1, y1 = p1
- * x2, y2 = p2
- */
- __pyx_t_2 = __Pyx_PyObject_GetAttrStr(__pyx_v_self, __pyx_n_s_getCurrentPoint); if (unlikely(!__pyx_t_2)) __PYX_ERR(0, 160, __pyx_L1_error)
- __Pyx_GOTREF(__pyx_t_2);
- __pyx_t_3 = NULL;
- if (CYTHON_UNPACK_METHODS && likely(PyMethod_Check(__pyx_t_2))) {
- __pyx_t_3 = PyMethod_GET_SELF(__pyx_t_2);
- if (likely(__pyx_t_3)) {
- PyObject* function = PyMethod_GET_FUNCTION(__pyx_t_2);
- __Pyx_INCREF(__pyx_t_3);
- __Pyx_INCREF(function);
- __Pyx_DECREF_SET(__pyx_t_2, function);
- }
- }
- __pyx_t_1 = (__pyx_t_3) ? __Pyx_PyObject_CallOneArg(__pyx_t_2, __pyx_t_3) : __Pyx_PyObject_CallNoArg(__pyx_t_2);
- __Pyx_XDECREF(__pyx_t_3); __pyx_t_3 = 0;
- if (unlikely(!__pyx_t_1)) __PYX_ERR(0, 160, __pyx_L1_error)
- __Pyx_GOTREF(__pyx_t_1);
- __Pyx_DECREF(__pyx_t_2); __pyx_t_2 = 0;
- if ((likely(PyTuple_CheckExact(__pyx_t_1))) || (PyList_CheckExact(__pyx_t_1))) {
- PyObject* sequence = __pyx_t_1;
- Py_ssize_t size = __Pyx_PySequence_SIZE(sequence);
- if (unlikely(size != 2)) {
- if (size > 2) __Pyx_RaiseTooManyValuesError(2);
- else if (size >= 0) __Pyx_RaiseNeedMoreValuesError(size);
- __PYX_ERR(0, 160, __pyx_L1_error)
- }
- #if CYTHON_ASSUME_SAFE_MACROS && !CYTHON_AVOID_BORROWED_REFS
- if (likely(PyTuple_CheckExact(sequence))) {
- __pyx_t_2 = PyTuple_GET_ITEM(sequence, 0);
- __pyx_t_3 = PyTuple_GET_ITEM(sequence, 1);
- } else {
- __pyx_t_2 = PyList_GET_ITEM(sequence, 0);
- __pyx_t_3 = PyList_GET_ITEM(sequence, 1);
- }
- __Pyx_INCREF(__pyx_t_2);
- __Pyx_INCREF(__pyx_t_3);
- #else
- __pyx_t_2 = PySequence_ITEM(sequence, 0); if (unlikely(!__pyx_t_2)) __PYX_ERR(0, 160, __pyx_L1_error)
- __Pyx_GOTREF(__pyx_t_2);
- __pyx_t_3 = PySequence_ITEM(sequence, 1); if (unlikely(!__pyx_t_3)) __PYX_ERR(0, 160, __pyx_L1_error)
- __Pyx_GOTREF(__pyx_t_3);
- #endif
- __Pyx_DECREF(__pyx_t_1); __pyx_t_1 = 0;
- } else {
- Py_ssize_t index = -1;
- __pyx_t_4 = PyObject_GetIter(__pyx_t_1); if (unlikely(!__pyx_t_4)) __PYX_ERR(0, 160, __pyx_L1_error)
- __Pyx_GOTREF(__pyx_t_4);
- __Pyx_DECREF(__pyx_t_1); __pyx_t_1 = 0;
- __pyx_t_5 = Py_TYPE(__pyx_t_4)->tp_iternext;
- index = 0; __pyx_t_2 = __pyx_t_5(__pyx_t_4); if (unlikely(!__pyx_t_2)) goto __pyx_L3_unpacking_failed;
- __Pyx_GOTREF(__pyx_t_2);
- index = 1; __pyx_t_3 = __pyx_t_5(__pyx_t_4); if (unlikely(!__pyx_t_3)) goto __pyx_L3_unpacking_failed;
- __Pyx_GOTREF(__pyx_t_3);
- if (__Pyx_IternextUnpackEndCheck(__pyx_t_5(__pyx_t_4), 2) < 0) __PYX_ERR(0, 160, __pyx_L1_error)
- __pyx_t_5 = NULL;
- __Pyx_DECREF(__pyx_t_4); __pyx_t_4 = 0;
- goto __pyx_L4_unpacking_done;
- __pyx_L3_unpacking_failed:;
- __Pyx_DECREF(__pyx_t_4); __pyx_t_4 = 0;
- __pyx_t_5 = NULL;
- if (__Pyx_IterFinish() == 0) __Pyx_RaiseNeedMoreValuesError(index);
- __PYX_ERR(0, 160, __pyx_L1_error)
- __pyx_L4_unpacking_done:;
- }
- __pyx_t_6 = __pyx_PyFloat_AsDouble(__pyx_t_2); if (unlikely((__pyx_t_6 == (double)-1) && PyErr_Occurred())) __PYX_ERR(0, 160, __pyx_L1_error)
- __Pyx_DECREF(__pyx_t_2); __pyx_t_2 = 0;
- __pyx_t_7 = __pyx_PyFloat_AsDouble(__pyx_t_3); if (unlikely((__pyx_t_7 == (double)-1) && PyErr_Occurred())) __PYX_ERR(0, 160, __pyx_L1_error)
- __Pyx_DECREF(__pyx_t_3); __pyx_t_3 = 0;
- __pyx_v_x0 = __pyx_t_6;
- __pyx_v_y0 = __pyx_t_7;
-
- /* "fontTools/pens/momentsPen.py":161
- * def _qCurveToOne(self, p1, p2):
- * x0, y0 = self._getCurrentPoint()
- * x1, y1 = p1 # <<<<<<<<<<<<<<
- * x2, y2 = p2
- *
- */
- if ((likely(PyTuple_CheckExact(__pyx_v_p1))) || (PyList_CheckExact(__pyx_v_p1))) {
- PyObject* sequence = __pyx_v_p1;
- Py_ssize_t size = __Pyx_PySequence_SIZE(sequence);
- if (unlikely(size != 2)) {
- if (size > 2) __Pyx_RaiseTooManyValuesError(2);
- else if (size >= 0) __Pyx_RaiseNeedMoreValuesError(size);
- __PYX_ERR(0, 161, __pyx_L1_error)
- }
- #if CYTHON_ASSUME_SAFE_MACROS && !CYTHON_AVOID_BORROWED_REFS
- if (likely(PyTuple_CheckExact(sequence))) {
- __pyx_t_1 = PyTuple_GET_ITEM(sequence, 0);
- __pyx_t_3 = PyTuple_GET_ITEM(sequence, 1);
- } else {
- __pyx_t_1 = PyList_GET_ITEM(sequence, 0);
- __pyx_t_3 = PyList_GET_ITEM(sequence, 1);
- }
- __Pyx_INCREF(__pyx_t_1);
- __Pyx_INCREF(__pyx_t_3);
- #else
- __pyx_t_1 = PySequence_ITEM(sequence, 0); if (unlikely(!__pyx_t_1)) __PYX_ERR(0, 161, __pyx_L1_error)
- __Pyx_GOTREF(__pyx_t_1);
- __pyx_t_3 = PySequence_ITEM(sequence, 1); if (unlikely(!__pyx_t_3)) __PYX_ERR(0, 161, __pyx_L1_error)
- __Pyx_GOTREF(__pyx_t_3);
- #endif
- } else {
- Py_ssize_t index = -1;
- __pyx_t_2 = PyObject_GetIter(__pyx_v_p1); if (unlikely(!__pyx_t_2)) __PYX_ERR(0, 161, __pyx_L1_error)
- __Pyx_GOTREF(__pyx_t_2);
- __pyx_t_5 = Py_TYPE(__pyx_t_2)->tp_iternext;
- index = 0; __pyx_t_1 = __pyx_t_5(__pyx_t_2); if (unlikely(!__pyx_t_1)) goto __pyx_L5_unpacking_failed;
- __Pyx_GOTREF(__pyx_t_1);
- index = 1; __pyx_t_3 = __pyx_t_5(__pyx_t_2); if (unlikely(!__pyx_t_3)) goto __pyx_L5_unpacking_failed;
- __Pyx_GOTREF(__pyx_t_3);
- if (__Pyx_IternextUnpackEndCheck(__pyx_t_5(__pyx_t_2), 2) < 0) __PYX_ERR(0, 161, __pyx_L1_error)
- __pyx_t_5 = NULL;
- __Pyx_DECREF(__pyx_t_2); __pyx_t_2 = 0;
- goto __pyx_L6_unpacking_done;
- __pyx_L5_unpacking_failed:;
- __Pyx_DECREF(__pyx_t_2); __pyx_t_2 = 0;
- __pyx_t_5 = NULL;
- if (__Pyx_IterFinish() == 0) __Pyx_RaiseNeedMoreValuesError(index);
- __PYX_ERR(0, 161, __pyx_L1_error)
- __pyx_L6_unpacking_done:;
- }
- __pyx_t_7 = __pyx_PyFloat_AsDouble(__pyx_t_1); if (unlikely((__pyx_t_7 == (double)-1) && PyErr_Occurred())) __PYX_ERR(0, 161, __pyx_L1_error)
- __Pyx_DECREF(__pyx_t_1); __pyx_t_1 = 0;
- __pyx_t_6 = __pyx_PyFloat_AsDouble(__pyx_t_3); if (unlikely((__pyx_t_6 == (double)-1) && PyErr_Occurred())) __PYX_ERR(0, 161, __pyx_L1_error)
- __Pyx_DECREF(__pyx_t_3); __pyx_t_3 = 0;
- __pyx_v_x1 = __pyx_t_7;
- __pyx_v_y1 = __pyx_t_6;
-
- /* "fontTools/pens/momentsPen.py":162
- * x0, y0 = self._getCurrentPoint()
- * x1, y1 = p1
- * x2, y2 = p2 # <<<<<<<<<<<<<<
- *
- * r0 = 2 * y1
- */
- if ((likely(PyTuple_CheckExact(__pyx_v_p2))) || (PyList_CheckExact(__pyx_v_p2))) {
- PyObject* sequence = __pyx_v_p2;
- Py_ssize_t size = __Pyx_PySequence_SIZE(sequence);
- if (unlikely(size != 2)) {
- if (size > 2) __Pyx_RaiseTooManyValuesError(2);
- else if (size >= 0) __Pyx_RaiseNeedMoreValuesError(size);
- __PYX_ERR(0, 162, __pyx_L1_error)
- }
- #if CYTHON_ASSUME_SAFE_MACROS && !CYTHON_AVOID_BORROWED_REFS
- if (likely(PyTuple_CheckExact(sequence))) {
- __pyx_t_3 = PyTuple_GET_ITEM(sequence, 0);
- __pyx_t_1 = PyTuple_GET_ITEM(sequence, 1);
- } else {
- __pyx_t_3 = PyList_GET_ITEM(sequence, 0);
- __pyx_t_1 = PyList_GET_ITEM(sequence, 1);
- }
- __Pyx_INCREF(__pyx_t_3);
- __Pyx_INCREF(__pyx_t_1);
- #else
- __pyx_t_3 = PySequence_ITEM(sequence, 0); if (unlikely(!__pyx_t_3)) __PYX_ERR(0, 162, __pyx_L1_error)
- __Pyx_GOTREF(__pyx_t_3);
- __pyx_t_1 = PySequence_ITEM(sequence, 1); if (unlikely(!__pyx_t_1)) __PYX_ERR(0, 162, __pyx_L1_error)
- __Pyx_GOTREF(__pyx_t_1);
- #endif
- } else {
- Py_ssize_t index = -1;
- __pyx_t_2 = PyObject_GetIter(__pyx_v_p2); if (unlikely(!__pyx_t_2)) __PYX_ERR(0, 162, __pyx_L1_error)
- __Pyx_GOTREF(__pyx_t_2);
- __pyx_t_5 = Py_TYPE(__pyx_t_2)->tp_iternext;
- index = 0; __pyx_t_3 = __pyx_t_5(__pyx_t_2); if (unlikely(!__pyx_t_3)) goto __pyx_L7_unpacking_failed;
- __Pyx_GOTREF(__pyx_t_3);
- index = 1; __pyx_t_1 = __pyx_t_5(__pyx_t_2); if (unlikely(!__pyx_t_1)) goto __pyx_L7_unpacking_failed;
- __Pyx_GOTREF(__pyx_t_1);
- if (__Pyx_IternextUnpackEndCheck(__pyx_t_5(__pyx_t_2), 2) < 0) __PYX_ERR(0, 162, __pyx_L1_error)
- __pyx_t_5 = NULL;
- __Pyx_DECREF(__pyx_t_2); __pyx_t_2 = 0;
- goto __pyx_L8_unpacking_done;
- __pyx_L7_unpacking_failed:;
- __Pyx_DECREF(__pyx_t_2); __pyx_t_2 = 0;
- __pyx_t_5 = NULL;
- if (__Pyx_IterFinish() == 0) __Pyx_RaiseNeedMoreValuesError(index);
- __PYX_ERR(0, 162, __pyx_L1_error)
- __pyx_L8_unpacking_done:;
- }
- __pyx_t_6 = __pyx_PyFloat_AsDouble(__pyx_t_3); if (unlikely((__pyx_t_6 == (double)-1) && PyErr_Occurred())) __PYX_ERR(0, 162, __pyx_L1_error)
- __Pyx_DECREF(__pyx_t_3); __pyx_t_3 = 0;
- __pyx_t_7 = __pyx_PyFloat_AsDouble(__pyx_t_1); if (unlikely((__pyx_t_7 == (double)-1) && PyErr_Occurred())) __PYX_ERR(0, 162, __pyx_L1_error)
- __Pyx_DECREF(__pyx_t_1); __pyx_t_1 = 0;
- __pyx_v_x2 = __pyx_t_6;
- __pyx_v_y2 = __pyx_t_7;
-
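- /* The long r0..r53 cascade that follows is common-subexpression
-  * elimination: the quadratic-curve moment integrals appear to have been
-  * integrated symbolically once, with shared sub-terms hoisted into
-  * numbered temporaries, so the names carry no meaning beyond their
-  * defining line. */
-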
- /* "fontTools/pens/momentsPen.py":164
- * x2, y2 = p2
- *
- * r0 = 2 * y1 # <<<<<<<<<<<<<<
- * r1 = r0 * x2
- * r2 = x2 * y2
- */
- __pyx_v_r0 = (2.0 * __pyx_v_y1);
-
- /* "fontTools/pens/momentsPen.py":165
- *
- * r0 = 2 * y1
- * r1 = r0 * x2 # <<<<<<<<<<<<<<
- * r2 = x2 * y2
- * r3 = 3 * r2
- */
- __pyx_v_r1 = (__pyx_v_r0 * __pyx_v_x2);
-
- /* "fontTools/pens/momentsPen.py":166
- * r0 = 2 * y1
- * r1 = r0 * x2
- * r2 = x2 * y2 # <<<<<<<<<<<<<<
- * r3 = 3 * r2
- * r4 = 2 * x1
- */
- __pyx_v_r2 = (__pyx_v_x2 * __pyx_v_y2);
-
- /* "fontTools/pens/momentsPen.py":167
- * r1 = r0 * x2
- * r2 = x2 * y2
- * r3 = 3 * r2 # <<<<<<<<<<<<<<
- * r4 = 2 * x1
- * r5 = 3 * y0
- */
- __pyx_v_r3 = (3.0 * __pyx_v_r2);
-
- /* "fontTools/pens/momentsPen.py":168
- * r2 = x2 * y2
- * r3 = 3 * r2
- * r4 = 2 * x1 # <<<<<<<<<<<<<<
- * r5 = 3 * y0
- * r6 = x1**2
- */
- __pyx_v_r4 = (2.0 * __pyx_v_x1);
-
- /* "fontTools/pens/momentsPen.py":169
- * r3 = 3 * r2
- * r4 = 2 * x1
- * r5 = 3 * y0 # <<<<<<<<<<<<<<
- * r6 = x1**2
- * r7 = x2**2
- */
- __pyx_v_r5 = (3.0 * __pyx_v_y0);
-
- /* "fontTools/pens/momentsPen.py":170
- * r4 = 2 * x1
- * r5 = 3 * y0
- * r6 = x1**2 # <<<<<<<<<<<<<<
- * r7 = x2**2
- * r8 = 4 * y1
- */
- __pyx_v_r6 = pow(__pyx_v_x1, 2.0);
-
- /* "fontTools/pens/momentsPen.py":171
- * r5 = 3 * y0
- * r6 = x1**2
- * r7 = x2**2 # <<<<<<<<<<<<<<
- * r8 = 4 * y1
- * r9 = 10 * y2
- */
- __pyx_v_r7 = pow(__pyx_v_x2, 2.0);
-
- /* "fontTools/pens/momentsPen.py":172
- * r6 = x1**2
- * r7 = x2**2
- * r8 = 4 * y1 # <<<<<<<<<<<<<<
- * r9 = 10 * y2
- * r10 = 2 * y2
- */
- __pyx_v_r8 = (4.0 * __pyx_v_y1);
-
- /* "fontTools/pens/momentsPen.py":173
- * r7 = x2**2
- * r8 = 4 * y1
- * r9 = 10 * y2 # <<<<<<<<<<<<<<
- * r10 = 2 * y2
- * r11 = r4 * x2
- */
- __pyx_v_r9 = (10.0 * __pyx_v_y2);
-
- /* "fontTools/pens/momentsPen.py":174
- * r8 = 4 * y1
- * r9 = 10 * y2
- * r10 = 2 * y2 # <<<<<<<<<<<<<<
- * r11 = r4 * x2
- * r12 = x0**2
- */
- __pyx_v_r10 = (2.0 * __pyx_v_y2);
-
- /* "fontTools/pens/momentsPen.py":175
- * r9 = 10 * y2
- * r10 = 2 * y2
- * r11 = r4 * x2 # <<<<<<<<<<<<<<
- * r12 = x0**2
- * r13 = 10 * y0
- */
- __pyx_v_r11 = (__pyx_v_r4 * __pyx_v_x2);
-
- /* "fontTools/pens/momentsPen.py":176
- * r10 = 2 * y2
- * r11 = r4 * x2
- * r12 = x0**2 # <<<<<<<<<<<<<<
- * r13 = 10 * y0
- * r14 = r4 * y2
- */
- __pyx_v_r12 = pow(__pyx_v_x0, 2.0);
-
- /* "fontTools/pens/momentsPen.py":177
- * r11 = r4 * x2
- * r12 = x0**2
- * r13 = 10 * y0 # <<<<<<<<<<<<<<
- * r14 = r4 * y2
- * r15 = x2 * y0
- */
- __pyx_v_r13 = (10.0 * __pyx_v_y0);
-
- /* "fontTools/pens/momentsPen.py":178
- * r12 = x0**2
- * r13 = 10 * y0
- * r14 = r4 * y2 # <<<<<<<<<<<<<<
- * r15 = x2 * y0
- * r16 = 4 * x1
- */
- __pyx_v_r14 = (__pyx_v_r4 * __pyx_v_y2);
-
- /* "fontTools/pens/momentsPen.py":179
- * r13 = 10 * y0
- * r14 = r4 * y2
- * r15 = x2 * y0 # <<<<<<<<<<<<<<
- * r16 = 4 * x1
- * r17 = r0 * x1 + r2
- */
- __pyx_v_r15 = (__pyx_v_x2 * __pyx_v_y0);
-
- /* "fontTools/pens/momentsPen.py":180
- * r14 = r4 * y2
- * r15 = x2 * y0
- * r16 = 4 * x1 # <<<<<<<<<<<<<<
- * r17 = r0 * x1 + r2
- * r18 = r2 * r8
- */
- __pyx_v_r16 = (4.0 * __pyx_v_x1);
-
- /* "fontTools/pens/momentsPen.py":181
- * r15 = x2 * y0
- * r16 = 4 * x1
- * r17 = r0 * x1 + r2 # <<<<<<<<<<<<<<
- * r18 = r2 * r8
- * r19 = y1**2
- */
- __pyx_v_r17 = ((__pyx_v_r0 * __pyx_v_x1) + __pyx_v_r2);
-
- /* "fontTools/pens/momentsPen.py":182
- * r16 = 4 * x1
- * r17 = r0 * x1 + r2
- * r18 = r2 * r8 # <<<<<<<<<<<<<<
- * r19 = y1**2
- * r20 = 2 * r19
- */
- __pyx_v_r18 = (__pyx_v_r2 * __pyx_v_r8);
-
- /* "fontTools/pens/momentsPen.py":183
- * r17 = r0 * x1 + r2
- * r18 = r2 * r8
- * r19 = y1**2 # <<<<<<<<<<<<<<
- * r20 = 2 * r19
- * r21 = y2**2
- */
- __pyx_v_r19 = pow(__pyx_v_y1, 2.0);
-
- /* "fontTools/pens/momentsPen.py":184
- * r18 = r2 * r8
- * r19 = y1**2
- * r20 = 2 * r19 # <<<<<<<<<<<<<<
- * r21 = y2**2
- * r22 = r21 * x2
- */
- __pyx_v_r20 = (2.0 * __pyx_v_r19);
-
- /* "fontTools/pens/momentsPen.py":185
- * r19 = y1**2
- * r20 = 2 * r19
- * r21 = y2**2 # <<<<<<<<<<<<<<
- * r22 = r21 * x2
- * r23 = 5 * r22
- */
- __pyx_v_r21 = pow(__pyx_v_y2, 2.0);
-
- /* "fontTools/pens/momentsPen.py":186
- * r20 = 2 * r19
- * r21 = y2**2
- * r22 = r21 * x2 # <<<<<<<<<<<<<<
- * r23 = 5 * r22
- * r24 = y0**2
- */
- __pyx_v_r22 = (__pyx_v_r21 * __pyx_v_x2);
-
- /* "fontTools/pens/momentsPen.py":187
- * r21 = y2**2
- * r22 = r21 * x2
- * r23 = 5 * r22 # <<<<<<<<<<<<<<
- * r24 = y0**2
- * r25 = y0 * y2
- */
- __pyx_v_r23 = (5.0 * __pyx_v_r22);
-
- /* "fontTools/pens/momentsPen.py":188
- * r22 = r21 * x2
- * r23 = 5 * r22
- * r24 = y0**2 # <<<<<<<<<<<<<<
- * r25 = y0 * y2
- * r26 = 5 * r24
- */
- __pyx_v_r24 = pow(__pyx_v_y0, 2.0);
-
- /* "fontTools/pens/momentsPen.py":189
- * r23 = 5 * r22
- * r24 = y0**2
- * r25 = y0 * y2 # <<<<<<<<<<<<<<
- * r26 = 5 * r24
- * r27 = x1**3
- */
- __pyx_v_r25 = (__pyx_v_y0 * __pyx_v_y2);
-
- /* "fontTools/pens/momentsPen.py":190
- * r24 = y0**2
- * r25 = y0 * y2
- * r26 = 5 * r24 # <<<<<<<<<<<<<<
- * r27 = x1**3
- * r28 = x2**3
- */
- __pyx_v_r26 = (5.0 * __pyx_v_r24);
-
- /* "fontTools/pens/momentsPen.py":191
- * r25 = y0 * y2
- * r26 = 5 * r24
- * r27 = x1**3 # <<<<<<<<<<<<<<
- * r28 = x2**3
- * r29 = 30 * y1
- */
- __pyx_v_r27 = pow(__pyx_v_x1, 3.0);
-
- /* "fontTools/pens/momentsPen.py":192
- * r26 = 5 * r24
- * r27 = x1**3
- * r28 = x2**3 # <<<<<<<<<<<<<<
- * r29 = 30 * y1
- * r30 = 6 * y1
- */
- __pyx_v_r28 = pow(__pyx_v_x2, 3.0);
-
- /* "fontTools/pens/momentsPen.py":193
- * r27 = x1**3
- * r28 = x2**3
- * r29 = 30 * y1 # <<<<<<<<<<<<<<
- * r30 = 6 * y1
- * r31 = 10 * r7 * x1
- */
- __pyx_v_r29 = (30.0 * __pyx_v_y1);
-
- /* "fontTools/pens/momentsPen.py":194
- * r28 = x2**3
- * r29 = 30 * y1
- * r30 = 6 * y1 # <<<<<<<<<<<<<<
- * r31 = 10 * r7 * x1
- * r32 = 5 * y2
- */
- __pyx_v_r30 = (6.0 * __pyx_v_y1);
-
- /* "fontTools/pens/momentsPen.py":195
- * r29 = 30 * y1
- * r30 = 6 * y1
- * r31 = 10 * r7 * x1 # <<<<<<<<<<<<<<
- * r32 = 5 * y2
- * r33 = 12 * r6
- */
- __pyx_v_r31 = ((10.0 * __pyx_v_r7) * __pyx_v_x1);
-
- /* "fontTools/pens/momentsPen.py":196
- * r30 = 6 * y1
- * r31 = 10 * r7 * x1
- * r32 = 5 * y2 # <<<<<<<<<<<<<<
- * r33 = 12 * r6
- * r34 = 30 * x1
- */
- __pyx_v_r32 = (5.0 * __pyx_v_y2);
-
- /* "fontTools/pens/momentsPen.py":197
- * r31 = 10 * r7 * x1
- * r32 = 5 * y2
- * r33 = 12 * r6 # <<<<<<<<<<<<<<
- * r34 = 30 * x1
- * r35 = x1 * y1
- */
- __pyx_v_r33 = (12.0 * __pyx_v_r6);
-
- /* "fontTools/pens/momentsPen.py":198
- * r32 = 5 * y2
- * r33 = 12 * r6
- * r34 = 30 * x1 # <<<<<<<<<<<<<<
- * r35 = x1 * y1
- * r36 = r3 + 20 * r35
- */
- __pyx_v_r34 = (30.0 * __pyx_v_x1);
-
- /* "fontTools/pens/momentsPen.py":199
- * r33 = 12 * r6
- * r34 = 30 * x1
- * r35 = x1 * y1 # <<<<<<<<<<<<<<
- * r36 = r3 + 20 * r35
- * r37 = 12 * x1
- */
- __pyx_v_r35 = (__pyx_v_x1 * __pyx_v_y1);
-
- /* "fontTools/pens/momentsPen.py":200
- * r34 = 30 * x1
- * r35 = x1 * y1
- * r36 = r3 + 20 * r35 # <<<<<<<<<<<<<<
- * r37 = 12 * x1
- * r38 = 20 * r6
- */
- __pyx_v_r36 = (__pyx_v_r3 + (20.0 * __pyx_v_r35));
-
- /* "fontTools/pens/momentsPen.py":201
- * r35 = x1 * y1
- * r36 = r3 + 20 * r35
- * r37 = 12 * x1 # <<<<<<<<<<<<<<
- * r38 = 20 * r6
- * r39 = 8 * r6 * y1
- */
- __pyx_v_r37 = (12.0 * __pyx_v_x1);
-
- /* "fontTools/pens/momentsPen.py":202
- * r36 = r3 + 20 * r35
- * r37 = 12 * x1
- * r38 = 20 * r6 # <<<<<<<<<<<<<<
- * r39 = 8 * r6 * y1
- * r40 = r32 * r7
- */
- __pyx_v_r38 = (20.0 * __pyx_v_r6);
-
- /* "fontTools/pens/momentsPen.py":203
- * r37 = 12 * x1
- * r38 = 20 * r6
- * r39 = 8 * r6 * y1 # <<<<<<<<<<<<<<
- * r40 = r32 * r7
- * r41 = 60 * y1
- */
- __pyx_v_r39 = ((8.0 * __pyx_v_r6) * __pyx_v_y1);
-
- /* "fontTools/pens/momentsPen.py":204
- * r38 = 20 * r6
- * r39 = 8 * r6 * y1
- * r40 = r32 * r7 # <<<<<<<<<<<<<<
- * r41 = 60 * y1
- * r42 = 20 * r19
- */
- __pyx_v_r40 = (__pyx_v_r32 * __pyx_v_r7);
-
- /* "fontTools/pens/momentsPen.py":205
- * r39 = 8 * r6 * y1
- * r40 = r32 * r7
- * r41 = 60 * y1 # <<<<<<<<<<<<<<
- * r42 = 20 * r19
- * r43 = 4 * r19
- */
- __pyx_v_r41 = (60.0 * __pyx_v_y1);
-
- /* "fontTools/pens/momentsPen.py":206
- * r40 = r32 * r7
- * r41 = 60 * y1
- * r42 = 20 * r19 # <<<<<<<<<<<<<<
- * r43 = 4 * r19
- * r44 = 15 * r21
- */
- __pyx_v_r42 = (20.0 * __pyx_v_r19);
-
- /* "fontTools/pens/momentsPen.py":207
- * r41 = 60 * y1
- * r42 = 20 * r19
- * r43 = 4 * r19 # <<<<<<<<<<<<<<
- * r44 = 15 * r21
- * r45 = 12 * x2
- */
- __pyx_v_r43 = (4.0 * __pyx_v_r19);
-
- /* "fontTools/pens/momentsPen.py":208
- * r42 = 20 * r19
- * r43 = 4 * r19
- * r44 = 15 * r21 # <<<<<<<<<<<<<<
- * r45 = 12 * x2
- * r46 = 12 * y2
- */
- __pyx_v_r44 = (15.0 * __pyx_v_r21);
-
- /* "fontTools/pens/momentsPen.py":209
- * r43 = 4 * r19
- * r44 = 15 * r21
- * r45 = 12 * x2 # <<<<<<<<<<<<<<
- * r46 = 12 * y2
- * r47 = 6 * x1
- */
- __pyx_v_r45 = (12.0 * __pyx_v_x2);
-
- /* "fontTools/pens/momentsPen.py":210
- * r44 = 15 * r21
- * r45 = 12 * x2
- * r46 = 12 * y2 # <<<<<<<<<<<<<<
- * r47 = 6 * x1
- * r48 = 8 * r19 * x1 + r23
- */
- __pyx_v_r46 = (12.0 * __pyx_v_y2);
-
- /* "fontTools/pens/momentsPen.py":211
- * r45 = 12 * x2
- * r46 = 12 * y2
- * r47 = 6 * x1 # <<<<<<<<<<<<<<
- * r48 = 8 * r19 * x1 + r23
- * r49 = 8 * y1**3
- */
- __pyx_v_r47 = (6.0 * __pyx_v_x1);
-
- /* "fontTools/pens/momentsPen.py":212
- * r46 = 12 * y2
- * r47 = 6 * x1
- * r48 = 8 * r19 * x1 + r23 # <<<<<<<<<<<<<<
- * r49 = 8 * y1**3
- * r50 = y2**3
- */
- __pyx_v_r48 = (((8.0 * __pyx_v_r19) * __pyx_v_x1) + __pyx_v_r23);
-
- /* "fontTools/pens/momentsPen.py":213
- * r47 = 6 * x1
- * r48 = 8 * r19 * x1 + r23
- * r49 = 8 * y1**3 # <<<<<<<<<<<<<<
- * r50 = y2**3
- * r51 = y0**3
- */
- __pyx_v_r49 = (8.0 * pow(__pyx_v_y1, 3.0));
-
- /* "fontTools/pens/momentsPen.py":214
- * r48 = 8 * r19 * x1 + r23
- * r49 = 8 * y1**3
- * r50 = y2**3 # <<<<<<<<<<<<<<
- * r51 = y0**3
- * r52 = 10 * y1
- */
- __pyx_v_r50 = pow(__pyx_v_y2, 3.0);
-
- /* "fontTools/pens/momentsPen.py":215
- * r49 = 8 * y1**3
- * r50 = y2**3
- * r51 = y0**3 # <<<<<<<<<<<<<<
- * r52 = 10 * y1
- * r53 = 12 * y1
- */
- __pyx_v_r51 = pow(__pyx_v_y0, 3.0);
-
- /* "fontTools/pens/momentsPen.py":216
- * r50 = y2**3
- * r51 = y0**3
- * r52 = 10 * y1 # <<<<<<<<<<<<<<
- * r53 = 12 * y1
- *
- */
- __pyx_v_r52 = (10.0 * __pyx_v_y1);
-
- /* "fontTools/pens/momentsPen.py":217
- * r51 = y0**3
- * r52 = 10 * y1
- * r53 = 12 * y1 # <<<<<<<<<<<<<<
- *
- * self.area += (
- */
- __pyx_v_r53 = (12.0 * __pyx_v_y1);
-
- /* "fontTools/pens/momentsPen.py":219
- * r53 = 12 * y1
- *
- * self.area += ( # <<<<<<<<<<<<<<
- * -r1 / 6
- * - r3 / 6
- */
- __pyx_t_1 = __Pyx_PyObject_GetAttrStr(__pyx_v_self, __pyx_n_s_area); if (unlikely(!__pyx_t_1)) __PYX_ERR(0, 219, __pyx_L1_error)
- __Pyx_GOTREF(__pyx_t_1);
-
- /* "fontTools/pens/momentsPen.py":224
- * + x0 * (r0 + r5 + y2) / 6
- * + x1 * y2 / 3
- * - y0 * (r4 + x2) / 6 # <<<<<<<<<<<<<<
- * )
- * self.momentX += (
- */
- __pyx_t_3 = PyFloat_FromDouble(((((((-__pyx_v_r1) / 6.0) - (__pyx_v_r3 / 6.0)) + ((__pyx_v_x0 * ((__pyx_v_r0 + __pyx_v_r5) + __pyx_v_y2)) / 6.0)) + ((__pyx_v_x1 * __pyx_v_y2) / 3.0)) - ((__pyx_v_y0 * (__pyx_v_r4 + __pyx_v_x2)) / 6.0))); if (unlikely(!__pyx_t_3)) __PYX_ERR(0, 224, __pyx_L1_error)
- __Pyx_GOTREF(__pyx_t_3);
-
- /* "fontTools/pens/momentsPen.py":219
- * r53 = 12 * y1
- *
- * self.area += ( # <<<<<<<<<<<<<<
- * -r1 / 6
- * - r3 / 6
- */
- __pyx_t_2 = PyNumber_InPlaceAdd(__pyx_t_1, __pyx_t_3); if (unlikely(!__pyx_t_2)) __PYX_ERR(0, 219, __pyx_L1_error)
- __Pyx_GOTREF(__pyx_t_2);
- __Pyx_DECREF(__pyx_t_1); __pyx_t_1 = 0;
- __Pyx_DECREF(__pyx_t_3); __pyx_t_3 = 0;
- if (__Pyx_PyObject_SetAttrStr(__pyx_v_self, __pyx_n_s_area, __pyx_t_2) < 0) __PYX_ERR(0, 219, __pyx_L1_error)
- __Pyx_DECREF(__pyx_t_2); __pyx_t_2 = 0;
-
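Each `self.<attr> += (...)` in the quoted source expands into the same three-step C pattern seen just above: fetch the attribute with __Pyx_PyObject_GetAttrStr, fold the entire right-hand side into one double and box it with PyFloat_FromDouble, then combine via PyNumber_InPlaceAdd and store the result back with __Pyx_PyObject_SetAttrStr. A minimal Python-level sketch of what that triple implements (`value` stands for the folded right-hand expression):

# The generated GetAttr / InPlaceAdd / SetAttr triple is the ordinary
# in-place add protocol applied to an instance attribute:
self.area += value

The momentX, momentY, momentXX, momentXY, and momentYY updates below repeat this exact pattern with different folded expressions.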
- /* "fontTools/pens/momentsPen.py":226
- * - y0 * (r4 + x2) / 6
- * )
- * self.momentX += ( # <<<<<<<<<<<<<<
- * -r11 * (-r10 + y1) / 30
- * + r12 * (r13 + r8 + y2) / 30
- */
- __pyx_t_2 = __Pyx_PyObject_GetAttrStr(__pyx_v_self, __pyx_n_s_momentX); if (unlikely(!__pyx_t_2)) __PYX_ERR(0, 226, __pyx_L1_error)
- __Pyx_GOTREF(__pyx_t_2);
-
- /* "fontTools/pens/momentsPen.py":233
- * - r7 * r9 / 30
- * + x0 * (r14 - r15 - r16 * y0 + r17) / 30
- * - y0 * (r11 + 2 * r6 + r7) / 30 # <<<<<<<<<<<<<<
- * )
- * self.momentY += (
- */
- __pyx_t_3 = PyFloat_FromDouble((((((((((-__pyx_v_r11) * ((-__pyx_v_r10) + __pyx_v_y1)) / 30.0) + ((__pyx_v_r12 * ((__pyx_v_r13 + __pyx_v_r8) + __pyx_v_y2)) / 30.0)) + ((__pyx_v_r6 * __pyx_v_y2) / 15.0)) - ((__pyx_v_r7 * __pyx_v_r8) / 30.0)) - ((__pyx_v_r7 * __pyx_v_r9) / 30.0)) + ((__pyx_v_x0 * (((__pyx_v_r14 - __pyx_v_r15) - (__pyx_v_r16 * __pyx_v_y0)) + __pyx_v_r17)) / 30.0)) - ((__pyx_v_y0 * ((__pyx_v_r11 + (2.0 * __pyx_v_r6)) + __pyx_v_r7)) / 30.0))); if (unlikely(!__pyx_t_3)) __PYX_ERR(0, 233, __pyx_L1_error)
- __Pyx_GOTREF(__pyx_t_3);
-
- /* "fontTools/pens/momentsPen.py":226
- * - y0 * (r4 + x2) / 6
- * )
- * self.momentX += ( # <<<<<<<<<<<<<<
- * -r11 * (-r10 + y1) / 30
- * + r12 * (r13 + r8 + y2) / 30
- */
- __pyx_t_1 = PyNumber_InPlaceAdd(__pyx_t_2, __pyx_t_3); if (unlikely(!__pyx_t_1)) __PYX_ERR(0, 226, __pyx_L1_error)
- __Pyx_GOTREF(__pyx_t_1);
- __Pyx_DECREF(__pyx_t_2); __pyx_t_2 = 0;
- __Pyx_DECREF(__pyx_t_3); __pyx_t_3 = 0;
- if (__Pyx_PyObject_SetAttrStr(__pyx_v_self, __pyx_n_s_momentX, __pyx_t_1) < 0) __PYX_ERR(0, 226, __pyx_L1_error)
- __Pyx_DECREF(__pyx_t_1); __pyx_t_1 = 0;
-
- /* "fontTools/pens/momentsPen.py":235
- * - y0 * (r11 + 2 * r6 + r7) / 30
- * )
- * self.momentY += ( # <<<<<<<<<<<<<<
- * -r18 / 30
- * - r20 * x2 / 30
- */
- __pyx_t_1 = __Pyx_PyObject_GetAttrStr(__pyx_v_self, __pyx_n_s_momentY); if (unlikely(!__pyx_t_1)) __PYX_ERR(0, 235, __pyx_L1_error)
- __Pyx_GOTREF(__pyx_t_1);
-
- /* "fontTools/pens/momentsPen.py":242
- * + x0 * (r0 * y2 + r20 + r21 + r25 + r26 + r8 * y0) / 30
- * + x1 * y2 * (r10 + y1) / 15
- * - y0 * (r1 + r17) / 30 # <<<<<<<<<<<<<<
- * )
- * self.momentXX += (
- */
- __pyx_t_3 = PyFloat_FromDouble(((((((((-__pyx_v_r18) / 30.0) - ((__pyx_v_r20 * __pyx_v_x2) / 30.0)) - (__pyx_v_r23 / 30.0)) - ((__pyx_v_r24 * (__pyx_v_r16 + __pyx_v_x2)) / 30.0)) + ((__pyx_v_x0 * ((((((__pyx_v_r0 * __pyx_v_y2) + __pyx_v_r20) + __pyx_v_r21) + __pyx_v_r25) + __pyx_v_r26) + (__pyx_v_r8 * __pyx_v_y0))) / 30.0)) + (((__pyx_v_x1 * __pyx_v_y2) * (__pyx_v_r10 + __pyx_v_y1)) / 15.0)) - ((__pyx_v_y0 * (__pyx_v_r1 + __pyx_v_r17)) / 30.0))); if (unlikely(!__pyx_t_3)) __PYX_ERR(0, 242, __pyx_L1_error)
- __Pyx_GOTREF(__pyx_t_3);
-
- /* "fontTools/pens/momentsPen.py":235
- * - y0 * (r11 + 2 * r6 + r7) / 30
- * )
- * self.momentY += ( # <<<<<<<<<<<<<<
- * -r18 / 30
- * - r20 * x2 / 30
- */
- __pyx_t_2 = PyNumber_InPlaceAdd(__pyx_t_1, __pyx_t_3); if (unlikely(!__pyx_t_2)) __PYX_ERR(0, 235, __pyx_L1_error)
- __Pyx_GOTREF(__pyx_t_2);
- __Pyx_DECREF(__pyx_t_1); __pyx_t_1 = 0;
- __Pyx_DECREF(__pyx_t_3); __pyx_t_3 = 0;
- if (__Pyx_PyObject_SetAttrStr(__pyx_v_self, __pyx_n_s_momentY, __pyx_t_2) < 0) __PYX_ERR(0, 235, __pyx_L1_error)
- __Pyx_DECREF(__pyx_t_2); __pyx_t_2 = 0;
-
- /* "fontTools/pens/momentsPen.py":244
- * - y0 * (r1 + r17) / 30
- * )
- * self.momentXX += ( # <<<<<<<<<<<<<<
- * r12 * (r1 - 5 * r15 - r34 * y0 + r36 + r9 * x1) / 420
- * + 2 * r27 * y2 / 105
- */
- __pyx_t_2 = __Pyx_PyObject_GetAttrStr(__pyx_v_self, __pyx_n_s_momentXX); if (unlikely(!__pyx_t_2)) __PYX_ERR(0, 244, __pyx_L1_error)
- __Pyx_GOTREF(__pyx_t_2);
-
- /* "fontTools/pens/momentsPen.py":264
- * )
- * / 420
- * - y0 * (8 * r27 + 5 * r28 + r31 + r33 * x2) / 420 # <<<<<<<<<<<<<<
- * )
- * self.momentXY += (
- */
- __pyx_t_3 = PyFloat_FromDouble(((((((((((__pyx_v_r12 * ((((__pyx_v_r1 - (5.0 * __pyx_v_r15)) - (__pyx_v_r34 * __pyx_v_y0)) + __pyx_v_r36) + (__pyx_v_r9 * __pyx_v_x1))) / 420.0) + (((2.0 * __pyx_v_r27) * __pyx_v_y2) / 105.0)) - ((__pyx_v_r28 * __pyx_v_r29) / 420.0)) - ((__pyx_v_r28 * __pyx_v_y2) / 4.0)) - ((__pyx_v_r31 * (__pyx_v_r0 - (3.0 * __pyx_v_y2))) / 420.0)) - (((__pyx_v_r6 * __pyx_v_x2) * (__pyx_v_r0 - __pyx_v_r32)) / 105.0)) + ((pow(__pyx_v_x0, 3.0) * ((__pyx_v_r30 + (21.0 * __pyx_v_y0)) + __pyx_v_y2)) / 84.0)) - ((__pyx_v_x0 * ((((((((__pyx_v_r0 * __pyx_v_r7) + (__pyx_v_r15 * __pyx_v_r37)) - (__pyx_v_r2 * __pyx_v_r37)) - (__pyx_v_r33 * __pyx_v_y2)) + (__pyx_v_r38 * __pyx_v_y0)) - __pyx_v_r39) - __pyx_v_r40) + (__pyx_v_r5 * __pyx_v_r7))) / 420.0)) - ((__pyx_v_y0 * ((((8.0 * __pyx_v_r27) + (5.0 * __pyx_v_r28)) + __pyx_v_r31) + (__pyx_v_r33 * __pyx_v_x2))) / 420.0))); if (unlikely(!__pyx_t_3)) __PYX_ERR(0, 264, __pyx_L1_error)
- __Pyx_GOTREF(__pyx_t_3);
-
- /* "fontTools/pens/momentsPen.py":244
- * - y0 * (r1 + r17) / 30
- * )
- * self.momentXX += ( # <<<<<<<<<<<<<<
- * r12 * (r1 - 5 * r15 - r34 * y0 + r36 + r9 * x1) / 420
- * + 2 * r27 * y2 / 105
- */
- __pyx_t_1 = PyNumber_InPlaceAdd(__pyx_t_2, __pyx_t_3); if (unlikely(!__pyx_t_1)) __PYX_ERR(0, 244, __pyx_L1_error)
- __Pyx_GOTREF(__pyx_t_1);
- __Pyx_DECREF(__pyx_t_2); __pyx_t_2 = 0;
- __Pyx_DECREF(__pyx_t_3); __pyx_t_3 = 0;
- if (__Pyx_PyObject_SetAttrStr(__pyx_v_self, __pyx_n_s_momentXX, __pyx_t_1) < 0) __PYX_ERR(0, 244, __pyx_L1_error)
- __Pyx_DECREF(__pyx_t_1); __pyx_t_1 = 0;
-
- /* "fontTools/pens/momentsPen.py":266
- * - y0 * (8 * r27 + 5 * r28 + r31 + r33 * x2) / 420
- * )
- * self.momentXY += ( # <<<<<<<<<<<<<<
- * r12 * (r13 * y2 + 3 * r21 + 105 * r24 + r41 * y0 + r42 + r46 * y1) / 840
- * - r16 * x2 * (r43 - r44) / 840
- */
- __pyx_t_1 = __Pyx_PyObject_GetAttrStr(__pyx_v_self, __pyx_n_s_momentXY); if (unlikely(!__pyx_t_1)) __PYX_ERR(0, 266, __pyx_L1_error)
- __Pyx_GOTREF(__pyx_t_1);
-
- /* "fontTools/pens/momentsPen.py":286
- * )
- * / 420
- * - y0 * (r16 * r2 + r30 * r7 + r35 * r45 + r39 + r40) / 420 # <<<<<<<<<<<<<<
- * )
- * self.momentYY += (
- */
- __pyx_t_3 = PyFloat_FromDouble(((((((((((__pyx_v_r12 * ((((((__pyx_v_r13 * __pyx_v_y2) + (3.0 * __pyx_v_r21)) + (105.0 * __pyx_v_r24)) + (__pyx_v_r41 * __pyx_v_y0)) + __pyx_v_r42) + (__pyx_v_r46 * __pyx_v_y1))) / 840.0) - (((__pyx_v_r16 * __pyx_v_x2) * (__pyx_v_r43 - __pyx_v_r44)) / 840.0)) - ((__pyx_v_r21 * __pyx_v_r7) / 8.0)) - ((__pyx_v_r24 * ((__pyx_v_r38 + (__pyx_v_r45 * __pyx_v_x1)) + (3.0 * __pyx_v_r7))) / 840.0)) - (((__pyx_v_r41 * __pyx_v_r7) * __pyx_v_y2) / 840.0)) - ((__pyx_v_r42 * __pyx_v_r7) / 840.0)) + (((__pyx_v_r6 * __pyx_v_y2) * (__pyx_v_r32 + __pyx_v_r8)) / 210.0)) + ((__pyx_v_x0 * (((((((((-__pyx_v_r15) * __pyx_v_r8) + (__pyx_v_r16 * __pyx_v_r25)) + __pyx_v_r18) + (__pyx_v_r21 * __pyx_v_r47)) - (__pyx_v_r24 * __pyx_v_r34)) - (__pyx_v_r26 * __pyx_v_x2)) + (__pyx_v_r35 * __pyx_v_r46)) + __pyx_v_r48)) / 420.0)) - ((__pyx_v_y0 * (((((__pyx_v_r16 * __pyx_v_r2) + (__pyx_v_r30 * __pyx_v_r7)) + (__pyx_v_r35 * __pyx_v_r45)) + __pyx_v_r39) + __pyx_v_r40)) / 420.0))); if (unlikely(!__pyx_t_3)) __PYX_ERR(0, 286, __pyx_L1_error)
- __Pyx_GOTREF(__pyx_t_3);
-
- /* "fontTools/pens/momentsPen.py":266
- * - y0 * (8 * r27 + 5 * r28 + r31 + r33 * x2) / 420
- * )
- * self.momentXY += ( # <<<<<<<<<<<<<<
- * r12 * (r13 * y2 + 3 * r21 + 105 * r24 + r41 * y0 + r42 + r46 * y1) / 840
- * - r16 * x2 * (r43 - r44) / 840
- */
- __pyx_t_2 = PyNumber_InPlaceAdd(__pyx_t_1, __pyx_t_3); if (unlikely(!__pyx_t_2)) __PYX_ERR(0, 266, __pyx_L1_error)
- __Pyx_GOTREF(__pyx_t_2);
- __Pyx_DECREF(__pyx_t_1); __pyx_t_1 = 0;
- __Pyx_DECREF(__pyx_t_3); __pyx_t_3 = 0;
- if (__Pyx_PyObject_SetAttrStr(__pyx_v_self, __pyx_n_s_momentXY, __pyx_t_2) < 0) __PYX_ERR(0, 266, __pyx_L1_error)
- __Pyx_DECREF(__pyx_t_2); __pyx_t_2 = 0;
-
- /* "fontTools/pens/momentsPen.py":288
- * - y0 * (r16 * r2 + r30 * r7 + r35 * r45 + r39 + r40) / 420
- * )
- * self.momentYY += ( # <<<<<<<<<<<<<<
- * -r2 * r42 / 420
- * - r22 * r29 / 420
- */
- __pyx_t_2 = __Pyx_PyObject_GetAttrStr(__pyx_v_self, __pyx_n_s_momentYY); if (unlikely(!__pyx_t_2)) __PYX_ERR(0, 288, __pyx_L1_error)
- __Pyx_GOTREF(__pyx_t_2);
-
- /* "fontTools/pens/momentsPen.py":310
- * / 420
- * + x1 * y2 * (r43 + r44 + r9 * y1) / 210
- * - y0 * (r19 * r45 + r2 * r53 - r21 * r4 + r48) / 420 # <<<<<<<<<<<<<<
- * )
- *
- */
- __pyx_t_3 = PyFloat_FromDouble((((((((((((-__pyx_v_r2) * __pyx_v_r42) / 420.0) - ((__pyx_v_r22 * __pyx_v_r29) / 420.0)) - ((__pyx_v_r24 * ((__pyx_v_r14 + __pyx_v_r36) + (__pyx_v_r52 * __pyx_v_x2))) / 420.0)) - ((__pyx_v_r49 * __pyx_v_x2) / 420.0)) - ((__pyx_v_r50 * __pyx_v_x2) / 12.0)) - ((__pyx_v_r51 * (__pyx_v_r47 + __pyx_v_x2)) / 84.0)) + ((__pyx_v_x0 * ((((((((((__pyx_v_r19 * __pyx_v_r46) + (__pyx_v_r21 * __pyx_v_r5)) + (__pyx_v_r21 * __pyx_v_r52)) + (__pyx_v_r24 * __pyx_v_r29)) + (__pyx_v_r25 * __pyx_v_r53)) + (__pyx_v_r26 * __pyx_v_y2)) + (__pyx_v_r42 * __pyx_v_y0)) + __pyx_v_r49) + (5.0 * __pyx_v_r50)) + (35.0 * __pyx_v_r51))) / 420.0)) + (((__pyx_v_x1 * __pyx_v_y2) * ((__pyx_v_r43 + __pyx_v_r44) + (__pyx_v_r9 * __pyx_v_y1))) / 210.0)) - ((__pyx_v_y0 * ((((__pyx_v_r19 * __pyx_v_r45) + (__pyx_v_r2 * __pyx_v_r53)) - (__pyx_v_r21 * __pyx_v_r4)) + __pyx_v_r48)) / 420.0))); if (unlikely(!__pyx_t_3)) __PYX_ERR(0, 310, __pyx_L1_error)
- __Pyx_GOTREF(__pyx_t_3);
-
- /* "fontTools/pens/momentsPen.py":288
- * - y0 * (r16 * r2 + r30 * r7 + r35 * r45 + r39 + r40) / 420
- * )
- * self.momentYY += ( # <<<<<<<<<<<<<<
- * -r2 * r42 / 420
- * - r22 * r29 / 420
- */
- __pyx_t_1 = PyNumber_InPlaceAdd(__pyx_t_2, __pyx_t_3); if (unlikely(!__pyx_t_1)) __PYX_ERR(0, 288, __pyx_L1_error)
- __Pyx_GOTREF(__pyx_t_1);
- __Pyx_DECREF(__pyx_t_2); __pyx_t_2 = 0;
- __Pyx_DECREF(__pyx_t_3); __pyx_t_3 = 0;
- if (__Pyx_PyObject_SetAttrStr(__pyx_v_self, __pyx_n_s_momentYY, __pyx_t_1) < 0) __PYX_ERR(0, 288, __pyx_L1_error)
- __Pyx_DECREF(__pyx_t_1); __pyx_t_1 = 0;
-
- /* "fontTools/pens/momentsPen.py":159
- * @cython.locals(x1=cython.double, y1=cython.double)
- * @cython.locals(x2=cython.double, y2=cython.double)
- * def _qCurveToOne(self, p1, p2): # <<<<<<<<<<<<<<
- * x0, y0 = self._getCurrentPoint()
- * x1, y1 = p1
- */
-
- /* function exit code */
- __pyx_r = Py_None; __Pyx_INCREF(Py_None);
- goto __pyx_L0;
- __pyx_L1_error:;
- __Pyx_XDECREF(__pyx_t_1);
- __Pyx_XDECREF(__pyx_t_2);
- __Pyx_XDECREF(__pyx_t_3);
- __Pyx_XDECREF(__pyx_t_4);
- __Pyx_AddTraceback("fontTools.pens.momentsPen.MomentsPen._qCurveToOne", __pyx_clineno, __pyx_lineno, __pyx_filename);
- __pyx_r = NULL;
- __pyx_L0:;
- __Pyx_XGIVEREF(__pyx_r);
- __Pyx_RefNannyFinishContext();
- return __pyx_r;
-}
-
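For reference: the block deleted above is Cython's expansion of MomentsPen._qCurveToOne, and the original Python is fully recoverable from the source lines quoted in the `/* ... */` comments. Below is a condensed sketch reconstructed from those comments; _getCurrentPoint() and the accumulator attributes are assumed to be provided by the surrounding MomentsPen class, and only the first two of the six accumulators are shown (momentY, momentXX, momentXY, and momentYY follow the same pattern, drawing on the temporaries up to r53).

def _qCurveToOne(self, p1, p2):
    # On-curve start point, quadratic control point, and end point.
    x0, y0 = self._getCurrentPoint()
    x1, y1 = p1
    x2, y2 = p2

    # Shared subexpressions, verbatim from the quoted source (r0..r17).
    r0 = 2 * y1
    r1 = r0 * x2
    r2 = x2 * y2
    r3 = 3 * r2
    r4 = 2 * x1
    r5 = 3 * y0
    r6 = x1**2
    r7 = x2**2
    r8 = 4 * y1
    r9 = 10 * y2
    r10 = 2 * y2
    r11 = r4 * x2
    r12 = x0**2
    r13 = 10 * y0
    r14 = r4 * y2
    r15 = x2 * y0
    r16 = 4 * x1
    r17 = r0 * x1 + r2

    # Signed area swept by the segment (a Green's-theorem-style integral).
    self.area += (
        -r1 / 6
        - r3 / 6
        + x0 * (r0 + r5 + y2) / 6
        + x1 * y2 / 3
        - y0 * (r4 + x2) / 6
    )
    # First moment in x; the remaining moments accumulate the same way.
    self.momentX += (
        -r11 * (-r10 + y1) / 30
        + r12 * (r13 + r8 + y2) / 30
        + r6 * y2 / 15
        - r7 * r8 / 30
        - r7 * r9 / 30
        + x0 * (r14 - r15 - r16 * y0 + r17) / 30
        - y0 * (r11 + 2 * r6 + r7) / 30
    )

The same subexpression-elimination style reappears, at much larger scale, in the cubic _curveToOne that follows.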
-/* "fontTools/pens/momentsPen.py":450
- * @cython.locals(x2=cython.double, y2=cython.double)
- * @cython.locals(x3=cython.double, y3=cython.double)
- * def _curveToOne(self, p1, p2, p3): # <<<<<<<<<<<<<<
- * x0, y0 = self._getCurrentPoint()
- * x1, y1 = p1
- */
-
-/* Python wrapper */
-static PyObject *__pyx_pw_9fontTools_4pens_10momentsPen_10MomentsPen_13_curveToOne(PyObject *__pyx_self, PyObject *__pyx_args, PyObject *__pyx_kwds); /*proto*/
-static char __pyx_doc_9fontTools_4pens_10momentsPen_10MomentsPen_12_curveToOne[] = "MomentsPen._curveToOne(self, p1, p2, p3)";
-static PyMethodDef __pyx_mdef_9fontTools_4pens_10momentsPen_10MomentsPen_13_curveToOne = {"_curveToOne", (PyCFunction)(void*)(PyCFunctionWithKeywords)__pyx_pw_9fontTools_4pens_10momentsPen_10MomentsPen_13_curveToOne, METH_VARARGS|METH_KEYWORDS, __pyx_doc_9fontTools_4pens_10momentsPen_10MomentsPen_12_curveToOne};
-static PyObject *__pyx_pw_9fontTools_4pens_10momentsPen_10MomentsPen_13_curveToOne(PyObject *__pyx_self, PyObject *__pyx_args, PyObject *__pyx_kwds) {
- PyObject *__pyx_v_self = 0;
- PyObject *__pyx_v_p1 = 0;
- PyObject *__pyx_v_p2 = 0;
- PyObject *__pyx_v_p3 = 0;
- int __pyx_lineno = 0;
- const char *__pyx_filename = NULL;
- int __pyx_clineno = 0;
- PyObject *__pyx_r = 0;
- __Pyx_RefNannyDeclarations
- __Pyx_RefNannySetupContext("_curveToOne (wrapper)", 0);
- {
- static PyObject **__pyx_pyargnames[] = {&__pyx_n_s_self,&__pyx_n_s_p1,&__pyx_n_s_p2,&__pyx_n_s_p3,0};
- PyObject* values[4] = {0,0,0,0};
- if (unlikely(__pyx_kwds)) {
- Py_ssize_t kw_args;
- const Py_ssize_t pos_args = PyTuple_GET_SIZE(__pyx_args);
- switch (pos_args) {
- case 4: values[3] = PyTuple_GET_ITEM(__pyx_args, 3);
- CYTHON_FALLTHROUGH;
- case 3: values[2] = PyTuple_GET_ITEM(__pyx_args, 2);
- CYTHON_FALLTHROUGH;
- case 2: values[1] = PyTuple_GET_ITEM(__pyx_args, 1);
- CYTHON_FALLTHROUGH;
- case 1: values[0] = PyTuple_GET_ITEM(__pyx_args, 0);
- CYTHON_FALLTHROUGH;
- case 0: break;
- default: goto __pyx_L5_argtuple_error;
- }
- kw_args = PyDict_Size(__pyx_kwds);
- switch (pos_args) {
- case 0:
- if (likely((values[0] = __Pyx_PyDict_GetItemStr(__pyx_kwds, __pyx_n_s_self)) != 0)) kw_args--;
- else goto __pyx_L5_argtuple_error;
- CYTHON_FALLTHROUGH;
- case 1:
- if (likely((values[1] = __Pyx_PyDict_GetItemStr(__pyx_kwds, __pyx_n_s_p1)) != 0)) kw_args--;
- else {
- __Pyx_RaiseArgtupleInvalid("_curveToOne", 1, 4, 4, 1); __PYX_ERR(0, 450, __pyx_L3_error)
- }
- CYTHON_FALLTHROUGH;
- case 2:
- if (likely((values[2] = __Pyx_PyDict_GetItemStr(__pyx_kwds, __pyx_n_s_p2)) != 0)) kw_args--;
- else {
- __Pyx_RaiseArgtupleInvalid("_curveToOne", 1, 4, 4, 2); __PYX_ERR(0, 450, __pyx_L3_error)
- }
- CYTHON_FALLTHROUGH;
- case 3:
- if (likely((values[3] = __Pyx_PyDict_GetItemStr(__pyx_kwds, __pyx_n_s_p3)) != 0)) kw_args--;
- else {
- __Pyx_RaiseArgtupleInvalid("_curveToOne", 1, 4, 4, 3); __PYX_ERR(0, 450, __pyx_L3_error)
- }
- }
- if (unlikely(kw_args > 0)) {
- if (unlikely(__Pyx_ParseOptionalKeywords(__pyx_kwds, __pyx_pyargnames, 0, values, pos_args, "_curveToOne") < 0)) __PYX_ERR(0, 450, __pyx_L3_error)
- }
- } else if (PyTuple_GET_SIZE(__pyx_args) != 4) {
- goto __pyx_L5_argtuple_error;
- } else {
- values[0] = PyTuple_GET_ITEM(__pyx_args, 0);
- values[1] = PyTuple_GET_ITEM(__pyx_args, 1);
- values[2] = PyTuple_GET_ITEM(__pyx_args, 2);
- values[3] = PyTuple_GET_ITEM(__pyx_args, 3);
- }
- __pyx_v_self = values[0];
- __pyx_v_p1 = values[1];
- __pyx_v_p2 = values[2];
- __pyx_v_p3 = values[3];
- }
- goto __pyx_L4_argument_unpacking_done;
- __pyx_L5_argtuple_error:;
- __Pyx_RaiseArgtupleInvalid("_curveToOne", 1, 4, 4, PyTuple_GET_SIZE(__pyx_args)); __PYX_ERR(0, 450, __pyx_L3_error)
- __pyx_L3_error:;
- __Pyx_AddTraceback("fontTools.pens.momentsPen.MomentsPen._curveToOne", __pyx_clineno, __pyx_lineno, __pyx_filename);
- __Pyx_RefNannyFinishContext();
- return NULL;
- __pyx_L4_argument_unpacking_done:;
- __pyx_r = __pyx_pf_9fontTools_4pens_10momentsPen_10MomentsPen_12_curveToOne(__pyx_self, __pyx_v_self, __pyx_v_p1, __pyx_v_p2, __pyx_v_p3);
-
- /* function exit code */
- __Pyx_RefNannyFinishContext();
- return __pyx_r;
-}
-
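The wrapper deleted above is Cython's standard METH_VARARGS|METH_KEYWORDS shim for _curveToOne: it accepts self, p1, p2, and p3 positionally or by keyword, raises the usual arity errors through __Pyx_RaiseArgtupleInvalid, and forwards to the typed implementation below. At the Python level the whole block corresponds to nothing more than the signature quoted in the comments; a sketch (the pen instance and coordinate names are hypothetical):

# The generated parsing reduces to this four-argument method signature;
# both call styles below take the same path through the wrapper.
def _curveToOne(self, p1, p2, p3): ...

pen._curveToOne((x1, y1), (x2, y2), (x3, y3))
pen._curveToOne(p1=(x1, y1), p2=(x2, y2), p3=(x3, y3))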
-static PyObject *__pyx_pf_9fontTools_4pens_10momentsPen_10MomentsPen_12_curveToOne(CYTHON_UNUSED PyObject *__pyx_self, PyObject *__pyx_v_self, PyObject *__pyx_v_p1, PyObject *__pyx_v_p2, PyObject *__pyx_v_p3) {
- double __pyx_v_x3;
- double __pyx_v_y3;
- double __pyx_v_x2;
- double __pyx_v_y2;
- double __pyx_v_x1;
- double __pyx_v_y1;
- double __pyx_v_x0;
- double __pyx_v_y0;
- double __pyx_v_r132;
- double __pyx_v_r131;
- double __pyx_v_r130;
- double __pyx_v_r129;
- double __pyx_v_r128;
- double __pyx_v_r127;
- double __pyx_v_r126;
- double __pyx_v_r125;
- double __pyx_v_r124;
- double __pyx_v_r123;
- double __pyx_v_r122;
- double __pyx_v_r121;
- double __pyx_v_r120;
- double __pyx_v_r119;
- double __pyx_v_r118;
- double __pyx_v_r117;
- double __pyx_v_r116;
- double __pyx_v_r115;
- double __pyx_v_r114;
- double __pyx_v_r113;
- double __pyx_v_r112;
- double __pyx_v_r111;
- double __pyx_v_r110;
- double __pyx_v_r109;
- double __pyx_v_r108;
- double __pyx_v_r107;
- double __pyx_v_r106;
- double __pyx_v_r105;
- double __pyx_v_r104;
- double __pyx_v_r103;
- double __pyx_v_r102;
- double __pyx_v_r101;
- double __pyx_v_r100;
- double __pyx_v_r99;
- double __pyx_v_r98;
- double __pyx_v_r97;
- double __pyx_v_r96;
- double __pyx_v_r95;
- double __pyx_v_r94;
- double __pyx_v_r93;
- double __pyx_v_r92;
- double __pyx_v_r91;
- double __pyx_v_r90;
- double __pyx_v_r89;
- double __pyx_v_r88;
- double __pyx_v_r87;
- double __pyx_v_r86;
- double __pyx_v_r85;
- double __pyx_v_r84;
- double __pyx_v_r83;
- double __pyx_v_r82;
- double __pyx_v_r81;
- double __pyx_v_r80;
- double __pyx_v_r79;
- double __pyx_v_r78;
- double __pyx_v_r77;
- double __pyx_v_r76;
- double __pyx_v_r75;
- double __pyx_v_r74;
- double __pyx_v_r73;
- double __pyx_v_r72;
- double __pyx_v_r71;
- double __pyx_v_r70;
- double __pyx_v_r69;
- double __pyx_v_r68;
- double __pyx_v_r67;
- double __pyx_v_r66;
- double __pyx_v_r65;
- double __pyx_v_r64;
- double __pyx_v_r63;
- double __pyx_v_r62;
- double __pyx_v_r61;
- double __pyx_v_r60;
- double __pyx_v_r59;
- double __pyx_v_r58;
- double __pyx_v_r57;
- double __pyx_v_r56;
- double __pyx_v_r55;
- double __pyx_v_r54;
- double __pyx_v_r53;
- double __pyx_v_r52;
- double __pyx_v_r51;
- double __pyx_v_r50;
- double __pyx_v_r49;
- double __pyx_v_r48;
- double __pyx_v_r47;
- double __pyx_v_r46;
- double __pyx_v_r45;
- double __pyx_v_r44;
- double __pyx_v_r43;
- double __pyx_v_r42;
- double __pyx_v_r41;
- double __pyx_v_r40;
- double __pyx_v_r39;
- double __pyx_v_r38;
- double __pyx_v_r37;
- double __pyx_v_r36;
- double __pyx_v_r35;
- double __pyx_v_r34;
- double __pyx_v_r33;
- double __pyx_v_r32;
- double __pyx_v_r31;
- double __pyx_v_r30;
- double __pyx_v_r29;
- double __pyx_v_r28;
- double __pyx_v_r27;
- double __pyx_v_r26;
- double __pyx_v_r25;
- double __pyx_v_r24;
- double __pyx_v_r23;
- double __pyx_v_r22;
- double __pyx_v_r21;
- double __pyx_v_r20;
- double __pyx_v_r19;
- double __pyx_v_r18;
- double __pyx_v_r17;
- double __pyx_v_r16;
- double __pyx_v_r15;
- double __pyx_v_r14;
- double __pyx_v_r13;
- double __pyx_v_r12;
- double __pyx_v_r11;
- double __pyx_v_r10;
- double __pyx_v_r9;
- double __pyx_v_r8;
- double __pyx_v_r7;
- double __pyx_v_r6;
- double __pyx_v_r5;
- double __pyx_v_r4;
- double __pyx_v_r3;
- double __pyx_v_r2;
- double __pyx_v_r1;
- double __pyx_v_r0;
- PyObject *__pyx_r = NULL;
- __Pyx_RefNannyDeclarations
- PyObject *__pyx_t_1 = NULL;
- PyObject *__pyx_t_2 = NULL;
- PyObject *__pyx_t_3 = NULL;
- PyObject *__pyx_t_4 = NULL;
- PyObject *(*__pyx_t_5)(PyObject *);
- double __pyx_t_6;
- double __pyx_t_7;
- int __pyx_lineno = 0;
- const char *__pyx_filename = NULL;
- int __pyx_clineno = 0;
- __Pyx_RefNannySetupContext("_curveToOne", 0);
-
- /* "fontTools/pens/momentsPen.py":451
- * @cython.locals(x3=cython.double, y3=cython.double)
- * def _curveToOne(self, p1, p2, p3):
- * x0, y0 = self._getCurrentPoint() # <<<<<<<<<<<<<<
- * x1, y1 = p1
- * x2, y2 = p2
- */
- __pyx_t_2 = __Pyx_PyObject_GetAttrStr(__pyx_v_self, __pyx_n_s_getCurrentPoint); if (unlikely(!__pyx_t_2)) __PYX_ERR(0, 451, __pyx_L1_error)
- __Pyx_GOTREF(__pyx_t_2);
- __pyx_t_3 = NULL;
- if (CYTHON_UNPACK_METHODS && likely(PyMethod_Check(__pyx_t_2))) {
- __pyx_t_3 = PyMethod_GET_SELF(__pyx_t_2);
- if (likely(__pyx_t_3)) {
- PyObject* function = PyMethod_GET_FUNCTION(__pyx_t_2);
- __Pyx_INCREF(__pyx_t_3);
- __Pyx_INCREF(function);
- __Pyx_DECREF_SET(__pyx_t_2, function);
- }
- }
- __pyx_t_1 = (__pyx_t_3) ? __Pyx_PyObject_CallOneArg(__pyx_t_2, __pyx_t_3) : __Pyx_PyObject_CallNoArg(__pyx_t_2);
- __Pyx_XDECREF(__pyx_t_3); __pyx_t_3 = 0;
- if (unlikely(!__pyx_t_1)) __PYX_ERR(0, 451, __pyx_L1_error)
- __Pyx_GOTREF(__pyx_t_1);
- __Pyx_DECREF(__pyx_t_2); __pyx_t_2 = 0;
- if ((likely(PyTuple_CheckExact(__pyx_t_1))) || (PyList_CheckExact(__pyx_t_1))) {
- PyObject* sequence = __pyx_t_1;
- Py_ssize_t size = __Pyx_PySequence_SIZE(sequence);
- if (unlikely(size != 2)) {
- if (size > 2) __Pyx_RaiseTooManyValuesError(2);
- else if (size >= 0) __Pyx_RaiseNeedMoreValuesError(size);
- __PYX_ERR(0, 451, __pyx_L1_error)
- }
- #if CYTHON_ASSUME_SAFE_MACROS && !CYTHON_AVOID_BORROWED_REFS
- if (likely(PyTuple_CheckExact(sequence))) {
- __pyx_t_2 = PyTuple_GET_ITEM(sequence, 0);
- __pyx_t_3 = PyTuple_GET_ITEM(sequence, 1);
- } else {
- __pyx_t_2 = PyList_GET_ITEM(sequence, 0);
- __pyx_t_3 = PyList_GET_ITEM(sequence, 1);
- }
- __Pyx_INCREF(__pyx_t_2);
- __Pyx_INCREF(__pyx_t_3);
- #else
- __pyx_t_2 = PySequence_ITEM(sequence, 0); if (unlikely(!__pyx_t_2)) __PYX_ERR(0, 451, __pyx_L1_error)
- __Pyx_GOTREF(__pyx_t_2);
- __pyx_t_3 = PySequence_ITEM(sequence, 1); if (unlikely(!__pyx_t_3)) __PYX_ERR(0, 451, __pyx_L1_error)
- __Pyx_GOTREF(__pyx_t_3);
- #endif
- __Pyx_DECREF(__pyx_t_1); __pyx_t_1 = 0;
- } else {
- Py_ssize_t index = -1;
- __pyx_t_4 = PyObject_GetIter(__pyx_t_1); if (unlikely(!__pyx_t_4)) __PYX_ERR(0, 451, __pyx_L1_error)
- __Pyx_GOTREF(__pyx_t_4);
- __Pyx_DECREF(__pyx_t_1); __pyx_t_1 = 0;
- __pyx_t_5 = Py_TYPE(__pyx_t_4)->tp_iternext;
- index = 0; __pyx_t_2 = __pyx_t_5(__pyx_t_4); if (unlikely(!__pyx_t_2)) goto __pyx_L3_unpacking_failed;
- __Pyx_GOTREF(__pyx_t_2);
- index = 1; __pyx_t_3 = __pyx_t_5(__pyx_t_4); if (unlikely(!__pyx_t_3)) goto __pyx_L3_unpacking_failed;
- __Pyx_GOTREF(__pyx_t_3);
- if (__Pyx_IternextUnpackEndCheck(__pyx_t_5(__pyx_t_4), 2) < 0) __PYX_ERR(0, 451, __pyx_L1_error)
- __pyx_t_5 = NULL;
- __Pyx_DECREF(__pyx_t_4); __pyx_t_4 = 0;
- goto __pyx_L4_unpacking_done;
- __pyx_L3_unpacking_failed:;
- __Pyx_DECREF(__pyx_t_4); __pyx_t_4 = 0;
- __pyx_t_5 = NULL;
- if (__Pyx_IterFinish() == 0) __Pyx_RaiseNeedMoreValuesError(index);
- __PYX_ERR(0, 451, __pyx_L1_error)
- __pyx_L4_unpacking_done:;
- }
- __pyx_t_6 = __pyx_PyFloat_AsDouble(__pyx_t_2); if (unlikely((__pyx_t_6 == (double)-1) && PyErr_Occurred())) __PYX_ERR(0, 451, __pyx_L1_error)
- __Pyx_DECREF(__pyx_t_2); __pyx_t_2 = 0;
- __pyx_t_7 = __pyx_PyFloat_AsDouble(__pyx_t_3); if (unlikely((__pyx_t_7 == (double)-1) && PyErr_Occurred())) __PYX_ERR(0, 451, __pyx_L1_error)
- __Pyx_DECREF(__pyx_t_3); __pyx_t_3 = 0;
- __pyx_v_x0 = __pyx_t_6;
- __pyx_v_y0 = __pyx_t_7;
-
- /* "fontTools/pens/momentsPen.py":452
- * def _curveToOne(self, p1, p2, p3):
- * x0, y0 = self._getCurrentPoint()
- * x1, y1 = p1 # <<<<<<<<<<<<<<
- * x2, y2 = p2
- * x3, y3 = p3
- */
- if ((likely(PyTuple_CheckExact(__pyx_v_p1))) || (PyList_CheckExact(__pyx_v_p1))) {
- PyObject* sequence = __pyx_v_p1;
- Py_ssize_t size = __Pyx_PySequence_SIZE(sequence);
- if (unlikely(size != 2)) {
- if (size > 2) __Pyx_RaiseTooManyValuesError(2);
- else if (size >= 0) __Pyx_RaiseNeedMoreValuesError(size);
- __PYX_ERR(0, 452, __pyx_L1_error)
- }
- #if CYTHON_ASSUME_SAFE_MACROS && !CYTHON_AVOID_BORROWED_REFS
- if (likely(PyTuple_CheckExact(sequence))) {
- __pyx_t_1 = PyTuple_GET_ITEM(sequence, 0);
- __pyx_t_3 = PyTuple_GET_ITEM(sequence, 1);
- } else {
- __pyx_t_1 = PyList_GET_ITEM(sequence, 0);
- __pyx_t_3 = PyList_GET_ITEM(sequence, 1);
- }
- __Pyx_INCREF(__pyx_t_1);
- __Pyx_INCREF(__pyx_t_3);
- #else
- __pyx_t_1 = PySequence_ITEM(sequence, 0); if (unlikely(!__pyx_t_1)) __PYX_ERR(0, 452, __pyx_L1_error)
- __Pyx_GOTREF(__pyx_t_1);
- __pyx_t_3 = PySequence_ITEM(sequence, 1); if (unlikely(!__pyx_t_3)) __PYX_ERR(0, 452, __pyx_L1_error)
- __Pyx_GOTREF(__pyx_t_3);
- #endif
- } else {
- Py_ssize_t index = -1;
- __pyx_t_2 = PyObject_GetIter(__pyx_v_p1); if (unlikely(!__pyx_t_2)) __PYX_ERR(0, 452, __pyx_L1_error)
- __Pyx_GOTREF(__pyx_t_2);
- __pyx_t_5 = Py_TYPE(__pyx_t_2)->tp_iternext;
- index = 0; __pyx_t_1 = __pyx_t_5(__pyx_t_2); if (unlikely(!__pyx_t_1)) goto __pyx_L5_unpacking_failed;
- __Pyx_GOTREF(__pyx_t_1);
- index = 1; __pyx_t_3 = __pyx_t_5(__pyx_t_2); if (unlikely(!__pyx_t_3)) goto __pyx_L5_unpacking_failed;
- __Pyx_GOTREF(__pyx_t_3);
- if (__Pyx_IternextUnpackEndCheck(__pyx_t_5(__pyx_t_2), 2) < 0) __PYX_ERR(0, 452, __pyx_L1_error)
- __pyx_t_5 = NULL;
- __Pyx_DECREF(__pyx_t_2); __pyx_t_2 = 0;
- goto __pyx_L6_unpacking_done;
- __pyx_L5_unpacking_failed:;
- __Pyx_DECREF(__pyx_t_2); __pyx_t_2 = 0;
- __pyx_t_5 = NULL;
- if (__Pyx_IterFinish() == 0) __Pyx_RaiseNeedMoreValuesError(index);
- __PYX_ERR(0, 452, __pyx_L1_error)
- __pyx_L6_unpacking_done:;
- }
- __pyx_t_7 = __pyx_PyFloat_AsDouble(__pyx_t_1); if (unlikely((__pyx_t_7 == (double)-1) && PyErr_Occurred())) __PYX_ERR(0, 452, __pyx_L1_error)
- __Pyx_DECREF(__pyx_t_1); __pyx_t_1 = 0;
- __pyx_t_6 = __pyx_PyFloat_AsDouble(__pyx_t_3); if (unlikely((__pyx_t_6 == (double)-1) && PyErr_Occurred())) __PYX_ERR(0, 452, __pyx_L1_error)
- __Pyx_DECREF(__pyx_t_3); __pyx_t_3 = 0;
- __pyx_v_x1 = __pyx_t_7;
- __pyx_v_y1 = __pyx_t_6;
-
- /* "fontTools/pens/momentsPen.py":453
- * x0, y0 = self._getCurrentPoint()
- * x1, y1 = p1
- * x2, y2 = p2 # <<<<<<<<<<<<<<
- * x3, y3 = p3
- *
- */
- if ((likely(PyTuple_CheckExact(__pyx_v_p2))) || (PyList_CheckExact(__pyx_v_p2))) {
- PyObject* sequence = __pyx_v_p2;
- Py_ssize_t size = __Pyx_PySequence_SIZE(sequence);
- if (unlikely(size != 2)) {
- if (size > 2) __Pyx_RaiseTooManyValuesError(2);
- else if (size >= 0) __Pyx_RaiseNeedMoreValuesError(size);
- __PYX_ERR(0, 453, __pyx_L1_error)
- }
- #if CYTHON_ASSUME_SAFE_MACROS && !CYTHON_AVOID_BORROWED_REFS
- if (likely(PyTuple_CheckExact(sequence))) {
- __pyx_t_3 = PyTuple_GET_ITEM(sequence, 0);
- __pyx_t_1 = PyTuple_GET_ITEM(sequence, 1);
- } else {
- __pyx_t_3 = PyList_GET_ITEM(sequence, 0);
- __pyx_t_1 = PyList_GET_ITEM(sequence, 1);
- }
- __Pyx_INCREF(__pyx_t_3);
- __Pyx_INCREF(__pyx_t_1);
- #else
- __pyx_t_3 = PySequence_ITEM(sequence, 0); if (unlikely(!__pyx_t_3)) __PYX_ERR(0, 453, __pyx_L1_error)
- __Pyx_GOTREF(__pyx_t_3);
- __pyx_t_1 = PySequence_ITEM(sequence, 1); if (unlikely(!__pyx_t_1)) __PYX_ERR(0, 453, __pyx_L1_error)
- __Pyx_GOTREF(__pyx_t_1);
- #endif
- } else {
- Py_ssize_t index = -1;
- __pyx_t_2 = PyObject_GetIter(__pyx_v_p2); if (unlikely(!__pyx_t_2)) __PYX_ERR(0, 453, __pyx_L1_error)
- __Pyx_GOTREF(__pyx_t_2);
- __pyx_t_5 = Py_TYPE(__pyx_t_2)->tp_iternext;
- index = 0; __pyx_t_3 = __pyx_t_5(__pyx_t_2); if (unlikely(!__pyx_t_3)) goto __pyx_L7_unpacking_failed;
- __Pyx_GOTREF(__pyx_t_3);
- index = 1; __pyx_t_1 = __pyx_t_5(__pyx_t_2); if (unlikely(!__pyx_t_1)) goto __pyx_L7_unpacking_failed;
- __Pyx_GOTREF(__pyx_t_1);
- if (__Pyx_IternextUnpackEndCheck(__pyx_t_5(__pyx_t_2), 2) < 0) __PYX_ERR(0, 453, __pyx_L1_error)
- __pyx_t_5 = NULL;
- __Pyx_DECREF(__pyx_t_2); __pyx_t_2 = 0;
- goto __pyx_L8_unpacking_done;
- __pyx_L7_unpacking_failed:;
- __Pyx_DECREF(__pyx_t_2); __pyx_t_2 = 0;
- __pyx_t_5 = NULL;
- if (__Pyx_IterFinish() == 0) __Pyx_RaiseNeedMoreValuesError(index);
- __PYX_ERR(0, 453, __pyx_L1_error)
- __pyx_L8_unpacking_done:;
- }
- __pyx_t_6 = __pyx_PyFloat_AsDouble(__pyx_t_3); if (unlikely((__pyx_t_6 == (double)-1) && PyErr_Occurred())) __PYX_ERR(0, 453, __pyx_L1_error)
- __Pyx_DECREF(__pyx_t_3); __pyx_t_3 = 0;
- __pyx_t_7 = __pyx_PyFloat_AsDouble(__pyx_t_1); if (unlikely((__pyx_t_7 == (double)-1) && PyErr_Occurred())) __PYX_ERR(0, 453, __pyx_L1_error)
- __Pyx_DECREF(__pyx_t_1); __pyx_t_1 = 0;
- __pyx_v_x2 = __pyx_t_6;
- __pyx_v_y2 = __pyx_t_7;
-
- /* "fontTools/pens/momentsPen.py":454
- * x1, y1 = p1
- * x2, y2 = p2
- * x3, y3 = p3 # <<<<<<<<<<<<<<
- *
- * r0 = 6 * y2
- */
- if ((likely(PyTuple_CheckExact(__pyx_v_p3))) || (PyList_CheckExact(__pyx_v_p3))) {
- PyObject* sequence = __pyx_v_p3;
- Py_ssize_t size = __Pyx_PySequence_SIZE(sequence);
- if (unlikely(size != 2)) {
- if (size > 2) __Pyx_RaiseTooManyValuesError(2);
- else if (size >= 0) __Pyx_RaiseNeedMoreValuesError(size);
- __PYX_ERR(0, 454, __pyx_L1_error)
- }
- #if CYTHON_ASSUME_SAFE_MACROS && !CYTHON_AVOID_BORROWED_REFS
- if (likely(PyTuple_CheckExact(sequence))) {
- __pyx_t_1 = PyTuple_GET_ITEM(sequence, 0);
- __pyx_t_3 = PyTuple_GET_ITEM(sequence, 1);
- } else {
- __pyx_t_1 = PyList_GET_ITEM(sequence, 0);
- __pyx_t_3 = PyList_GET_ITEM(sequence, 1);
- }
- __Pyx_INCREF(__pyx_t_1);
- __Pyx_INCREF(__pyx_t_3);
- #else
- __pyx_t_1 = PySequence_ITEM(sequence, 0); if (unlikely(!__pyx_t_1)) __PYX_ERR(0, 454, __pyx_L1_error)
- __Pyx_GOTREF(__pyx_t_1);
- __pyx_t_3 = PySequence_ITEM(sequence, 1); if (unlikely(!__pyx_t_3)) __PYX_ERR(0, 454, __pyx_L1_error)
- __Pyx_GOTREF(__pyx_t_3);
- #endif
- } else {
- Py_ssize_t index = -1;
- __pyx_t_2 = PyObject_GetIter(__pyx_v_p3); if (unlikely(!__pyx_t_2)) __PYX_ERR(0, 454, __pyx_L1_error)
- __Pyx_GOTREF(__pyx_t_2);
- __pyx_t_5 = Py_TYPE(__pyx_t_2)->tp_iternext;
- index = 0; __pyx_t_1 = __pyx_t_5(__pyx_t_2); if (unlikely(!__pyx_t_1)) goto __pyx_L9_unpacking_failed;
- __Pyx_GOTREF(__pyx_t_1);
- index = 1; __pyx_t_3 = __pyx_t_5(__pyx_t_2); if (unlikely(!__pyx_t_3)) goto __pyx_L9_unpacking_failed;
- __Pyx_GOTREF(__pyx_t_3);
- if (__Pyx_IternextUnpackEndCheck(__pyx_t_5(__pyx_t_2), 2) < 0) __PYX_ERR(0, 454, __pyx_L1_error)
- __pyx_t_5 = NULL;
- __Pyx_DECREF(__pyx_t_2); __pyx_t_2 = 0;
- goto __pyx_L10_unpacking_done;
- __pyx_L9_unpacking_failed:;
- __Pyx_DECREF(__pyx_t_2); __pyx_t_2 = 0;
- __pyx_t_5 = NULL;
- if (__Pyx_IterFinish() == 0) __Pyx_RaiseNeedMoreValuesError(index);
- __PYX_ERR(0, 454, __pyx_L1_error)
- __pyx_L10_unpacking_done:;
- }
- __pyx_t_7 = __pyx_PyFloat_AsDouble(__pyx_t_1); if (unlikely((__pyx_t_7 == (double)-1) && PyErr_Occurred())) __PYX_ERR(0, 454, __pyx_L1_error)
- __Pyx_DECREF(__pyx_t_1); __pyx_t_1 = 0;
- __pyx_t_6 = __pyx_PyFloat_AsDouble(__pyx_t_3); if (unlikely((__pyx_t_6 == (double)-1) && PyErr_Occurred())) __PYX_ERR(0, 454, __pyx_L1_error)
- __Pyx_DECREF(__pyx_t_3); __pyx_t_3 = 0;
- __pyx_v_x3 = __pyx_t_7;
- __pyx_v_y3 = __pyx_t_6;
-
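After the four tuple unpackings (the current point plus p1, p2, and p3), the function body emits one C assignment per temporary in the cubic source — r0 through r132, matching the declarations at the top of the function — with each later temporary reusing earlier ones so that no product is recomputed. The first few, in the Python form quoted by the comments that follow:

# Leading temporaries of the cubic case, verbatim from the quoted source.
r0 = 6 * y2
r1 = r0 * x3
r2 = 10 * y3
r3 = r2 * x3
r4 = 3 * y1
r5 = 6 * x1
r6 = 3 * x2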
- /* "fontTools/pens/momentsPen.py":456
- * x3, y3 = p3
- *
- * r0 = 6 * y2 # <<<<<<<<<<<<<<
- * r1 = r0 * x3
- * r2 = 10 * y3
- */
- __pyx_v_r0 = (6.0 * __pyx_v_y2);
-
- /* "fontTools/pens/momentsPen.py":457
- *
- * r0 = 6 * y2
- * r1 = r0 * x3 # <<<<<<<<<<<<<<
- * r2 = 10 * y3
- * r3 = r2 * x3
- */
- __pyx_v_r1 = (__pyx_v_r0 * __pyx_v_x3);
-
- /* "fontTools/pens/momentsPen.py":458
- * r0 = 6 * y2
- * r1 = r0 * x3
- * r2 = 10 * y3 # <<<<<<<<<<<<<<
- * r3 = r2 * x3
- * r4 = 3 * y1
- */
- __pyx_v_r2 = (10.0 * __pyx_v_y3);
-
- /* "fontTools/pens/momentsPen.py":459
- * r1 = r0 * x3
- * r2 = 10 * y3
- * r3 = r2 * x3 # <<<<<<<<<<<<<<
- * r4 = 3 * y1
- * r5 = 6 * x1
- */
- __pyx_v_r3 = (__pyx_v_r2 * __pyx_v_x3);
-
- /* "fontTools/pens/momentsPen.py":460
- * r2 = 10 * y3
- * r3 = r2 * x3
- * r4 = 3 * y1 # <<<<<<<<<<<<<<
- * r5 = 6 * x1
- * r6 = 3 * x2
- */
- __pyx_v_r4 = (3.0 * __pyx_v_y1);
-
- /* "fontTools/pens/momentsPen.py":461
- * r3 = r2 * x3
- * r4 = 3 * y1
- * r5 = 6 * x1 # <<<<<<<<<<<<<<
- * r6 = 3 * x2
- * r7 = 6 * y1
- */
- __pyx_v_r5 = (6.0 * __pyx_v_x1);
-
- /* "fontTools/pens/momentsPen.py":462
- * r4 = 3 * y1
- * r5 = 6 * x1
- * r6 = 3 * x2 # <<<<<<<<<<<<<<
- * r7 = 6 * y1
- * r8 = 3 * y2
- */
- __pyx_v_r6 = (3.0 * __pyx_v_x2);
-
- /* "fontTools/pens/momentsPen.py":463
- * r5 = 6 * x1
- * r6 = 3 * x2
- * r7 = 6 * y1 # <<<<<<<<<<<<<<
- * r8 = 3 * y2
- * r9 = x2**2
- */
- __pyx_v_r7 = (6.0 * __pyx_v_y1);
-
- /* "fontTools/pens/momentsPen.py":464
- * r6 = 3 * x2
- * r7 = 6 * y1
- * r8 = 3 * y2 # <<<<<<<<<<<<<<
- * r9 = x2**2
- * r10 = 45 * r9
- */
- __pyx_v_r8 = (3.0 * __pyx_v_y2);
-
- /* "fontTools/pens/momentsPen.py":465
- * r7 = 6 * y1
- * r8 = 3 * y2
- * r9 = x2**2 # <<<<<<<<<<<<<<
- * r10 = 45 * r9
- * r11 = r10 * y3
- */
- __pyx_v_r9 = pow(__pyx_v_x2, 2.0);
-
- /* "fontTools/pens/momentsPen.py":466
- * r8 = 3 * y2
- * r9 = x2**2
- * r10 = 45 * r9 # <<<<<<<<<<<<<<
- * r11 = r10 * y3
- * r12 = x3**2
- */
- __pyx_v_r10 = (45.0 * __pyx_v_r9);
-
- /* "fontTools/pens/momentsPen.py":467
- * r9 = x2**2
- * r10 = 45 * r9
- * r11 = r10 * y3 # <<<<<<<<<<<<<<
- * r12 = x3**2
- * r13 = r12 * y2
- */
- __pyx_v_r11 = (__pyx_v_r10 * __pyx_v_y3);
-
- /* "fontTools/pens/momentsPen.py":468
- * r10 = 45 * r9
- * r11 = r10 * y3
- * r12 = x3**2 # <<<<<<<<<<<<<<
- * r13 = r12 * y2
- * r14 = r12 * y3
- */
- __pyx_v_r12 = pow(__pyx_v_x3, 2.0);
-
- /* "fontTools/pens/momentsPen.py":469
- * r11 = r10 * y3
- * r12 = x3**2
- * r13 = r12 * y2 # <<<<<<<<<<<<<<
- * r14 = r12 * y3
- * r15 = 7 * y3
- */
- __pyx_v_r13 = (__pyx_v_r12 * __pyx_v_y2);
-
- /* "fontTools/pens/momentsPen.py":470
- * r12 = x3**2
- * r13 = r12 * y2
- * r14 = r12 * y3 # <<<<<<<<<<<<<<
- * r15 = 7 * y3
- * r16 = 15 * x3
- */
- __pyx_v_r14 = (__pyx_v_r12 * __pyx_v_y3);
-
- /* "fontTools/pens/momentsPen.py":471
- * r13 = r12 * y2
- * r14 = r12 * y3
- * r15 = 7 * y3 # <<<<<<<<<<<<<<
- * r16 = 15 * x3
- * r17 = r16 * x2
- */
- __pyx_v_r15 = (7.0 * __pyx_v_y3);
-
- /* "fontTools/pens/momentsPen.py":472
- * r14 = r12 * y3
- * r15 = 7 * y3
- * r16 = 15 * x3 # <<<<<<<<<<<<<<
- * r17 = r16 * x2
- * r18 = x1**2
- */
- __pyx_v_r16 = (15.0 * __pyx_v_x3);
-
- /* "fontTools/pens/momentsPen.py":473
- * r15 = 7 * y3
- * r16 = 15 * x3
- * r17 = r16 * x2 # <<<<<<<<<<<<<<
- * r18 = x1**2
- * r19 = 9 * r18
- */
- __pyx_v_r17 = (__pyx_v_r16 * __pyx_v_x2);
-
- /* "fontTools/pens/momentsPen.py":474
- * r16 = 15 * x3
- * r17 = r16 * x2
- * r18 = x1**2 # <<<<<<<<<<<<<<
- * r19 = 9 * r18
- * r20 = x0**2
- */
- __pyx_v_r18 = pow(__pyx_v_x1, 2.0);
-
- /* "fontTools/pens/momentsPen.py":475
- * r17 = r16 * x2
- * r18 = x1**2
- * r19 = 9 * r18 # <<<<<<<<<<<<<<
- * r20 = x0**2
- * r21 = 21 * y1
- */
- __pyx_v_r19 = (9.0 * __pyx_v_r18);
-
- /* "fontTools/pens/momentsPen.py":476
- * r18 = x1**2
- * r19 = 9 * r18
- * r20 = x0**2 # <<<<<<<<<<<<<<
- * r21 = 21 * y1
- * r22 = 9 * r9
- */
- __pyx_v_r20 = pow(__pyx_v_x0, 2.0);
-
- /* "fontTools/pens/momentsPen.py":477
- * r19 = 9 * r18
- * r20 = x0**2
- * r21 = 21 * y1 # <<<<<<<<<<<<<<
- * r22 = 9 * r9
- * r23 = r7 * x3
- */
- __pyx_v_r21 = (21.0 * __pyx_v_y1);
-
- /* "fontTools/pens/momentsPen.py":478
- * r20 = x0**2
- * r21 = 21 * y1
- * r22 = 9 * r9 # <<<<<<<<<<<<<<
- * r23 = r7 * x3
- * r24 = 9 * y2
- */
- __pyx_v_r22 = (9.0 * __pyx_v_r9);
-
- /* "fontTools/pens/momentsPen.py":479
- * r21 = 21 * y1
- * r22 = 9 * r9
- * r23 = r7 * x3 # <<<<<<<<<<<<<<
- * r24 = 9 * y2
- * r25 = r24 * x2 + r3
- */
- __pyx_v_r23 = (__pyx_v_r7 * __pyx_v_x3);
-
- /* "fontTools/pens/momentsPen.py":480
- * r22 = 9 * r9
- * r23 = r7 * x3
- * r24 = 9 * y2 # <<<<<<<<<<<<<<
- * r25 = r24 * x2 + r3
- * r26 = 9 * x2
- */
- __pyx_v_r24 = (9.0 * __pyx_v_y2);
-
- /* "fontTools/pens/momentsPen.py":481
- * r23 = r7 * x3
- * r24 = 9 * y2
- * r25 = r24 * x2 + r3 # <<<<<<<<<<<<<<
- * r26 = 9 * x2
- * r27 = x2 * y3
- */
- __pyx_v_r25 = ((__pyx_v_r24 * __pyx_v_x2) + __pyx_v_r3);
-
- /* "fontTools/pens/momentsPen.py":482
- * r24 = 9 * y2
- * r25 = r24 * x2 + r3
- * r26 = 9 * x2 # <<<<<<<<<<<<<<
- * r27 = x2 * y3
- * r28 = -r26 * y1 + 15 * r27
- */
- __pyx_v_r26 = (9.0 * __pyx_v_x2);
-
- /* "fontTools/pens/momentsPen.py":483
- * r25 = r24 * x2 + r3
- * r26 = 9 * x2
- * r27 = x2 * y3 # <<<<<<<<<<<<<<
- * r28 = -r26 * y1 + 15 * r27
- * r29 = 3 * x1
- */
- __pyx_v_r27 = (__pyx_v_x2 * __pyx_v_y3);
-
- /* "fontTools/pens/momentsPen.py":484
- * r26 = 9 * x2
- * r27 = x2 * y3
- * r28 = -r26 * y1 + 15 * r27 # <<<<<<<<<<<<<<
- * r29 = 3 * x1
- * r30 = 45 * x1
- */
- __pyx_v_r28 = (((-__pyx_v_r26) * __pyx_v_y1) + (15.0 * __pyx_v_r27));
-
- /* "fontTools/pens/momentsPen.py":485
- * r27 = x2 * y3
- * r28 = -r26 * y1 + 15 * r27
- * r29 = 3 * x1 # <<<<<<<<<<<<<<
- * r30 = 45 * x1
- * r31 = 12 * x3
- */
- __pyx_v_r29 = (3.0 * __pyx_v_x1);
-
- /* "fontTools/pens/momentsPen.py":486
- * r28 = -r26 * y1 + 15 * r27
- * r29 = 3 * x1
- * r30 = 45 * x1 # <<<<<<<<<<<<<<
- * r31 = 12 * x3
- * r32 = 45 * r18
- */
- __pyx_v_r30 = (45.0 * __pyx_v_x1);
-
- /* "fontTools/pens/momentsPen.py":487
- * r29 = 3 * x1
- * r30 = 45 * x1
- * r31 = 12 * x3 # <<<<<<<<<<<<<<
- * r32 = 45 * r18
- * r33 = 5 * r12
- */
- __pyx_v_r31 = (12.0 * __pyx_v_x3);
-
- /* "fontTools/pens/momentsPen.py":488
- * r30 = 45 * x1
- * r31 = 12 * x3
- * r32 = 45 * r18 # <<<<<<<<<<<<<<
- * r33 = 5 * r12
- * r34 = r8 * x3
- */
- __pyx_v_r32 = (45.0 * __pyx_v_r18);
-
- /* "fontTools/pens/momentsPen.py":489
- * r31 = 12 * x3
- * r32 = 45 * r18
- * r33 = 5 * r12 # <<<<<<<<<<<<<<
- * r34 = r8 * x3
- * r35 = 105 * y0
- */
- __pyx_v_r33 = (5.0 * __pyx_v_r12);
-
- /* "fontTools/pens/momentsPen.py":490
- * r32 = 45 * r18
- * r33 = 5 * r12
- * r34 = r8 * x3 # <<<<<<<<<<<<<<
- * r35 = 105 * y0
- * r36 = 30 * y0
- */
- __pyx_v_r34 = (__pyx_v_r8 * __pyx_v_x3);
-
- /* "fontTools/pens/momentsPen.py":491
- * r33 = 5 * r12
- * r34 = r8 * x3
- * r35 = 105 * y0 # <<<<<<<<<<<<<<
- * r36 = 30 * y0
- * r37 = r36 * x2
- */
- __pyx_v_r35 = (105.0 * __pyx_v_y0);
-
- /* "fontTools/pens/momentsPen.py":492
- * r34 = r8 * x3
- * r35 = 105 * y0
- * r36 = 30 * y0 # <<<<<<<<<<<<<<
- * r37 = r36 * x2
- * r38 = 5 * x3
- */
- __pyx_v_r36 = (30.0 * __pyx_v_y0);
-
- /* "fontTools/pens/momentsPen.py":493
- * r35 = 105 * y0
- * r36 = 30 * y0
- * r37 = r36 * x2 # <<<<<<<<<<<<<<
- * r38 = 5 * x3
- * r39 = 15 * y3
- */
- __pyx_v_r37 = (__pyx_v_r36 * __pyx_v_x2);
-
- /* "fontTools/pens/momentsPen.py":494
- * r36 = 30 * y0
- * r37 = r36 * x2
- * r38 = 5 * x3 # <<<<<<<<<<<<<<
- * r39 = 15 * y3
- * r40 = 5 * y3
- */
- __pyx_v_r38 = (5.0 * __pyx_v_x3);
-
- /* "fontTools/pens/momentsPen.py":495
- * r37 = r36 * x2
- * r38 = 5 * x3
- * r39 = 15 * y3 # <<<<<<<<<<<<<<
- * r40 = 5 * y3
- * r41 = r40 * x3
- */
- __pyx_v_r39 = (15.0 * __pyx_v_y3);
-
- /* "fontTools/pens/momentsPen.py":496
- * r38 = 5 * x3
- * r39 = 15 * y3
- * r40 = 5 * y3 # <<<<<<<<<<<<<<
- * r41 = r40 * x3
- * r42 = x2 * y2
- */
- __pyx_v_r40 = (5.0 * __pyx_v_y3);
-
- /* "fontTools/pens/momentsPen.py":497
- * r39 = 15 * y3
- * r40 = 5 * y3
- * r41 = r40 * x3 # <<<<<<<<<<<<<<
- * r42 = x2 * y2
- * r43 = 18 * r42
- */
- __pyx_v_r41 = (__pyx_v_r40 * __pyx_v_x3);
-
- /* "fontTools/pens/momentsPen.py":498
- * r40 = 5 * y3
- * r41 = r40 * x3
- * r42 = x2 * y2 # <<<<<<<<<<<<<<
- * r43 = 18 * r42
- * r44 = 45 * y1
- */
- __pyx_v_r42 = (__pyx_v_x2 * __pyx_v_y2);
-
- /* "fontTools/pens/momentsPen.py":499
- * r41 = r40 * x3
- * r42 = x2 * y2
- * r43 = 18 * r42 # <<<<<<<<<<<<<<
- * r44 = 45 * y1
- * r45 = r41 + r43 + r44 * x1
- */
- __pyx_v_r43 = (18.0 * __pyx_v_r42);
-
- /* "fontTools/pens/momentsPen.py":500
- * r42 = x2 * y2
- * r43 = 18 * r42
- * r44 = 45 * y1 # <<<<<<<<<<<<<<
- * r45 = r41 + r43 + r44 * x1
- * r46 = y2 * y3
- */
- __pyx_v_r44 = (45.0 * __pyx_v_y1);
-
- /* "fontTools/pens/momentsPen.py":501
- * r43 = 18 * r42
- * r44 = 45 * y1
- * r45 = r41 + r43 + r44 * x1 # <<<<<<<<<<<<<<
- * r46 = y2 * y3
- * r47 = r46 * x3
- */
- __pyx_v_r45 = ((__pyx_v_r41 + __pyx_v_r43) + (__pyx_v_r44 * __pyx_v_x1));
-
- /* "fontTools/pens/momentsPen.py":502
- * r44 = 45 * y1
- * r45 = r41 + r43 + r44 * x1
- * r46 = y2 * y3 # <<<<<<<<<<<<<<
- * r47 = r46 * x3
- * r48 = y2**2
- */
- __pyx_v_r46 = (__pyx_v_y2 * __pyx_v_y3);
-
- /* "fontTools/pens/momentsPen.py":503
- * r45 = r41 + r43 + r44 * x1
- * r46 = y2 * y3
- * r47 = r46 * x3 # <<<<<<<<<<<<<<
- * r48 = y2**2
- * r49 = 45 * r48
- */
- __pyx_v_r47 = (__pyx_v_r46 * __pyx_v_x3);
-
- /* "fontTools/pens/momentsPen.py":504
- * r46 = y2 * y3
- * r47 = r46 * x3
- * r48 = y2**2 # <<<<<<<<<<<<<<
- * r49 = 45 * r48
- * r50 = r49 * x3
- */
- __pyx_v_r48 = pow(__pyx_v_y2, 2.0);
-
- /* "fontTools/pens/momentsPen.py":505
- * r47 = r46 * x3
- * r48 = y2**2
- * r49 = 45 * r48 # <<<<<<<<<<<<<<
- * r50 = r49 * x3
- * r51 = y3**2
- */
- __pyx_v_r49 = (45.0 * __pyx_v_r48);
-
- /* "fontTools/pens/momentsPen.py":506
- * r48 = y2**2
- * r49 = 45 * r48
- * r50 = r49 * x3 # <<<<<<<<<<<<<<
- * r51 = y3**2
- * r52 = r51 * x3
- */
- __pyx_v_r50 = (__pyx_v_r49 * __pyx_v_x3);
-
- /* "fontTools/pens/momentsPen.py":507
- * r49 = 45 * r48
- * r50 = r49 * x3
- * r51 = y3**2 # <<<<<<<<<<<<<<
- * r52 = r51 * x3
- * r53 = y1**2
- */
- __pyx_v_r51 = pow(__pyx_v_y3, 2.0);
-
- /* "fontTools/pens/momentsPen.py":508
- * r50 = r49 * x3
- * r51 = y3**2
- * r52 = r51 * x3 # <<<<<<<<<<<<<<
- * r53 = y1**2
- * r54 = 9 * r53
- */
- __pyx_v_r52 = (__pyx_v_r51 * __pyx_v_x3);
-
- /* "fontTools/pens/momentsPen.py":509
- * r51 = y3**2
- * r52 = r51 * x3
- * r53 = y1**2 # <<<<<<<<<<<<<<
- * r54 = 9 * r53
- * r55 = y0**2
- */
- __pyx_v_r53 = pow(__pyx_v_y1, 2.0);
-
- /* "fontTools/pens/momentsPen.py":510
- * r52 = r51 * x3
- * r53 = y1**2
- * r54 = 9 * r53 # <<<<<<<<<<<<<<
- * r55 = y0**2
- * r56 = 21 * x1
- */
- __pyx_v_r54 = (9.0 * __pyx_v_r53);
-
- /* "fontTools/pens/momentsPen.py":511
- * r53 = y1**2
- * r54 = 9 * r53
- * r55 = y0**2 # <<<<<<<<<<<<<<
- * r56 = 21 * x1
- * r57 = 6 * x2
- */
- __pyx_v_r55 = pow(__pyx_v_y0, 2.0);
-
- /* "fontTools/pens/momentsPen.py":512
- * r54 = 9 * r53
- * r55 = y0**2
- * r56 = 21 * x1 # <<<<<<<<<<<<<<
- * r57 = 6 * x2
- * r58 = r16 * y2
- */
- __pyx_v_r56 = (21.0 * __pyx_v_x1);
-
- /* "fontTools/pens/momentsPen.py":513
- * r55 = y0**2
- * r56 = 21 * x1
- * r57 = 6 * x2 # <<<<<<<<<<<<<<
- * r58 = r16 * y2
- * r59 = r39 * y2
- */
- __pyx_v_r57 = (6.0 * __pyx_v_x2);
-
- /* "fontTools/pens/momentsPen.py":514
- * r56 = 21 * x1
- * r57 = 6 * x2
- * r58 = r16 * y2 # <<<<<<<<<<<<<<
- * r59 = r39 * y2
- * r60 = 9 * r48
- */
- __pyx_v_r58 = (__pyx_v_r16 * __pyx_v_y2);
-
- /* "fontTools/pens/momentsPen.py":515
- * r57 = 6 * x2
- * r58 = r16 * y2
- * r59 = r39 * y2 # <<<<<<<<<<<<<<
- * r60 = 9 * r48
- * r61 = r6 * y3
- */
- __pyx_v_r59 = (__pyx_v_r39 * __pyx_v_y2);
-
- /* "fontTools/pens/momentsPen.py":516
- * r58 = r16 * y2
- * r59 = r39 * y2
- * r60 = 9 * r48 # <<<<<<<<<<<<<<
- * r61 = r6 * y3
- * r62 = 3 * y3
- */
- __pyx_v_r60 = (9.0 * __pyx_v_r48);
-
- /* "fontTools/pens/momentsPen.py":517
- * r59 = r39 * y2
- * r60 = 9 * r48
- * r61 = r6 * y3 # <<<<<<<<<<<<<<
- * r62 = 3 * y3
- * r63 = r36 * y2
- */
- __pyx_v_r61 = (__pyx_v_r6 * __pyx_v_y3);
-
- /* "fontTools/pens/momentsPen.py":518
- * r60 = 9 * r48
- * r61 = r6 * y3
- * r62 = 3 * y3 # <<<<<<<<<<<<<<
- * r63 = r36 * y2
- * r64 = y1 * y3
- */
- __pyx_v_r62 = (3.0 * __pyx_v_y3);
-
- /* "fontTools/pens/momentsPen.py":519
- * r61 = r6 * y3
- * r62 = 3 * y3
- * r63 = r36 * y2 # <<<<<<<<<<<<<<
- * r64 = y1 * y3
- * r65 = 45 * r53
- */
- __pyx_v_r63 = (__pyx_v_r36 * __pyx_v_y2);
-
- /* "fontTools/pens/momentsPen.py":520
- * r62 = 3 * y3
- * r63 = r36 * y2
- * r64 = y1 * y3 # <<<<<<<<<<<<<<
- * r65 = 45 * r53
- * r66 = 5 * r51
- */
- __pyx_v_r64 = (__pyx_v_y1 * __pyx_v_y3);
-
- /* "fontTools/pens/momentsPen.py":521
- * r63 = r36 * y2
- * r64 = y1 * y3
- * r65 = 45 * r53 # <<<<<<<<<<<<<<
- * r66 = 5 * r51
- * r67 = x2**3
- */
- __pyx_v_r65 = (45.0 * __pyx_v_r53);
-
- /* "fontTools/pens/momentsPen.py":522
- * r64 = y1 * y3
- * r65 = 45 * r53
- * r66 = 5 * r51 # <<<<<<<<<<<<<<
- * r67 = x2**3
- * r68 = x3**3
- */
- __pyx_v_r66 = (5.0 * __pyx_v_r51);
-
- /* "fontTools/pens/momentsPen.py":523
- * r65 = 45 * r53
- * r66 = 5 * r51
- * r67 = x2**3 # <<<<<<<<<<<<<<
- * r68 = x3**3
- * r69 = 630 * y2
- */
- __pyx_v_r67 = pow(__pyx_v_x2, 3.0);
-
- /* "fontTools/pens/momentsPen.py":524
- * r66 = 5 * r51
- * r67 = x2**3
- * r68 = x3**3 # <<<<<<<<<<<<<<
- * r69 = 630 * y2
- * r70 = 126 * x3
- */
- __pyx_v_r68 = pow(__pyx_v_x3, 3.0);
-
- /* "fontTools/pens/momentsPen.py":525
- * r67 = x2**3
- * r68 = x3**3
- * r69 = 630 * y2 # <<<<<<<<<<<<<<
- * r70 = 126 * x3
- * r71 = x1**3
- */
- __pyx_v_r69 = (630.0 * __pyx_v_y2);
-
- /* "fontTools/pens/momentsPen.py":526
- * r68 = x3**3
- * r69 = 630 * y2
- * r70 = 126 * x3 # <<<<<<<<<<<<<<
- * r71 = x1**3
- * r72 = 126 * x2
- */
- __pyx_v_r70 = (126.0 * __pyx_v_x3);
-
- /* "fontTools/pens/momentsPen.py":527
- * r69 = 630 * y2
- * r70 = 126 * x3
- * r71 = x1**3 # <<<<<<<<<<<<<<
- * r72 = 126 * x2
- * r73 = 63 * r9
- */
- __pyx_v_r71 = pow(__pyx_v_x1, 3.0);
-
- /* "fontTools/pens/momentsPen.py":528
- * r70 = 126 * x3
- * r71 = x1**3
- * r72 = 126 * x2 # <<<<<<<<<<<<<<
- * r73 = 63 * r9
- * r74 = r73 * x3
- */
- __pyx_v_r72 = (126.0 * __pyx_v_x2);
-
- /* "fontTools/pens/momentsPen.py":529
- * r71 = x1**3
- * r72 = 126 * x2
- * r73 = 63 * r9 # <<<<<<<<<<<<<<
- * r74 = r73 * x3
- * r75 = r15 * x3 + 15 * r42
- */
- __pyx_v_r73 = (63.0 * __pyx_v_r9);
-
- /* "fontTools/pens/momentsPen.py":530
- * r72 = 126 * x2
- * r73 = 63 * r9
- * r74 = r73 * x3 # <<<<<<<<<<<<<<
- * r75 = r15 * x3 + 15 * r42
- * r76 = 630 * x1
- */
- __pyx_v_r74 = (__pyx_v_r73 * __pyx_v_x3);
-
- /* "fontTools/pens/momentsPen.py":531
- * r73 = 63 * r9
- * r74 = r73 * x3
- * r75 = r15 * x3 + 15 * r42 # <<<<<<<<<<<<<<
- * r76 = 630 * x1
- * r77 = 14 * x3
- */
- __pyx_v_r75 = ((__pyx_v_r15 * __pyx_v_x3) + (15.0 * __pyx_v_r42));
-
- /* "fontTools/pens/momentsPen.py":532
- * r74 = r73 * x3
- * r75 = r15 * x3 + 15 * r42
- * r76 = 630 * x1 # <<<<<<<<<<<<<<
- * r77 = 14 * x3
- * r78 = 21 * r27
- */
- __pyx_v_r76 = (630.0 * __pyx_v_x1);
-
- /* "fontTools/pens/momentsPen.py":533
- * r75 = r15 * x3 + 15 * r42
- * r76 = 630 * x1
- * r77 = 14 * x3 # <<<<<<<<<<<<<<
- * r78 = 21 * r27
- * r79 = 42 * x1
- */
- __pyx_v_r77 = (14.0 * __pyx_v_x3);
-
- /* "fontTools/pens/momentsPen.py":534
- * r76 = 630 * x1
- * r77 = 14 * x3
- * r78 = 21 * r27 # <<<<<<<<<<<<<<
- * r79 = 42 * x1
- * r80 = 42 * x2
- */
- __pyx_v_r78 = (21.0 * __pyx_v_r27);
-
- /* "fontTools/pens/momentsPen.py":535
- * r77 = 14 * x3
- * r78 = 21 * r27
- * r79 = 42 * x1 # <<<<<<<<<<<<<<
- * r80 = 42 * x2
- * r81 = x1 * y2
- */
- __pyx_v_r79 = (42.0 * __pyx_v_x1);
-
- /* "fontTools/pens/momentsPen.py":536
- * r78 = 21 * r27
- * r79 = 42 * x1
- * r80 = 42 * x2 # <<<<<<<<<<<<<<
- * r81 = x1 * y2
- * r82 = 63 * r42
- */
- __pyx_v_r80 = (42.0 * __pyx_v_x2);
-
- /* "fontTools/pens/momentsPen.py":537
- * r79 = 42 * x1
- * r80 = 42 * x2
- * r81 = x1 * y2 # <<<<<<<<<<<<<<
- * r82 = 63 * r42
- * r83 = x1 * y1
- */
- __pyx_v_r81 = (__pyx_v_x1 * __pyx_v_y2);
-
- /* "fontTools/pens/momentsPen.py":538
- * r80 = 42 * x2
- * r81 = x1 * y2
- * r82 = 63 * r42 # <<<<<<<<<<<<<<
- * r83 = x1 * y1
- * r84 = r41 + r82 + 378 * r83
- */
- __pyx_v_r82 = (63.0 * __pyx_v_r42);
-
- /* "fontTools/pens/momentsPen.py":539
- * r81 = x1 * y2
- * r82 = 63 * r42
- * r83 = x1 * y1 # <<<<<<<<<<<<<<
- * r84 = r41 + r82 + 378 * r83
- * r85 = x2 * x3
- */
- __pyx_v_r83 = (__pyx_v_x1 * __pyx_v_y1);
-
- /* "fontTools/pens/momentsPen.py":540
- * r82 = 63 * r42
- * r83 = x1 * y1
- * r84 = r41 + r82 + 378 * r83 # <<<<<<<<<<<<<<
- * r85 = x2 * x3
- * r86 = r85 * y1
- */
- __pyx_v_r84 = ((__pyx_v_r41 + __pyx_v_r82) + (378.0 * __pyx_v_r83));
-
- /* "fontTools/pens/momentsPen.py":541
- * r83 = x1 * y1
- * r84 = r41 + r82 + 378 * r83
- * r85 = x2 * x3 # <<<<<<<<<<<<<<
- * r86 = r85 * y1
- * r87 = r27 * x3
- */
- __pyx_v_r85 = (__pyx_v_x2 * __pyx_v_x3);
-
- /* "fontTools/pens/momentsPen.py":542
- * r84 = r41 + r82 + 378 * r83
- * r85 = x2 * x3
- * r86 = r85 * y1 # <<<<<<<<<<<<<<
- * r87 = r27 * x3
- * r88 = 27 * r9
- */
- __pyx_v_r86 = (__pyx_v_r85 * __pyx_v_y1);
-
- /* "fontTools/pens/momentsPen.py":543
- * r85 = x2 * x3
- * r86 = r85 * y1
- * r87 = r27 * x3 # <<<<<<<<<<<<<<
- * r88 = 27 * r9
- * r89 = r88 * y2
- */
- __pyx_v_r87 = (__pyx_v_r27 * __pyx_v_x3);
-
- /* "fontTools/pens/momentsPen.py":544
- * r86 = r85 * y1
- * r87 = r27 * x3
- * r88 = 27 * r9 # <<<<<<<<<<<<<<
- * r89 = r88 * y2
- * r90 = 42 * r14
- */
- __pyx_v_r88 = (27.0 * __pyx_v_r9);
-
- /* "fontTools/pens/momentsPen.py":545
- * r87 = r27 * x3
- * r88 = 27 * r9
- * r89 = r88 * y2 # <<<<<<<<<<<<<<
- * r90 = 42 * r14
- * r91 = 90 * x1
- */
- __pyx_v_r89 = (__pyx_v_r88 * __pyx_v_y2);
-
- /* "fontTools/pens/momentsPen.py":546
- * r88 = 27 * r9
- * r89 = r88 * y2
- * r90 = 42 * r14 # <<<<<<<<<<<<<<
- * r91 = 90 * x1
- * r92 = 189 * r18
- */
- __pyx_v_r90 = (42.0 * __pyx_v_r14);
-
- /* "fontTools/pens/momentsPen.py":547
- * r89 = r88 * y2
- * r90 = 42 * r14
- * r91 = 90 * x1 # <<<<<<<<<<<<<<
- * r92 = 189 * r18
- * r93 = 378 * r18
- */
- __pyx_v_r91 = (90.0 * __pyx_v_x1);
-
- /* "fontTools/pens/momentsPen.py":548
- * r90 = 42 * r14
- * r91 = 90 * x1
- * r92 = 189 * r18 # <<<<<<<<<<<<<<
- * r93 = 378 * r18
- * r94 = r12 * y1
- */
- __pyx_v_r92 = (189.0 * __pyx_v_r18);
-
- /* "fontTools/pens/momentsPen.py":549
- * r91 = 90 * x1
- * r92 = 189 * r18
- * r93 = 378 * r18 # <<<<<<<<<<<<<<
- * r94 = r12 * y1
- * r95 = 252 * x1 * x2
- */
- __pyx_v_r93 = (378.0 * __pyx_v_r18);
-
- /* "fontTools/pens/momentsPen.py":550
- * r92 = 189 * r18
- * r93 = 378 * r18
- * r94 = r12 * y1 # <<<<<<<<<<<<<<
- * r95 = 252 * x1 * x2
- * r96 = r79 * x3
- */
- __pyx_v_r94 = (__pyx_v_r12 * __pyx_v_y1);
-
- /* "fontTools/pens/momentsPen.py":551
- * r93 = 378 * r18
- * r94 = r12 * y1
- * r95 = 252 * x1 * x2 # <<<<<<<<<<<<<<
- * r96 = r79 * x3
- * r97 = 30 * r85
- */
- __pyx_v_r95 = ((252.0 * __pyx_v_x1) * __pyx_v_x2);
-
- /* "fontTools/pens/momentsPen.py":552
- * r94 = r12 * y1
- * r95 = 252 * x1 * x2
- * r96 = r79 * x3 # <<<<<<<<<<<<<<
- * r97 = 30 * r85
- * r98 = r83 * x3
- */
- __pyx_v_r96 = (__pyx_v_r79 * __pyx_v_x3);
-
- /* "fontTools/pens/momentsPen.py":553
- * r95 = 252 * x1 * x2
- * r96 = r79 * x3
- * r97 = 30 * r85 # <<<<<<<<<<<<<<
- * r98 = r83 * x3
- * r99 = 30 * x3
- */
- __pyx_v_r97 = (30.0 * __pyx_v_r85);
-
- /* "fontTools/pens/momentsPen.py":554
- * r96 = r79 * x3
- * r97 = 30 * r85
- * r98 = r83 * x3 # <<<<<<<<<<<<<<
- * r99 = 30 * x3
- * r100 = 42 * x3
- */
- __pyx_v_r98 = (__pyx_v_r83 * __pyx_v_x3);
-
- /* "fontTools/pens/momentsPen.py":555
- * r97 = 30 * r85
- * r98 = r83 * x3
- * r99 = 30 * x3 # <<<<<<<<<<<<<<
- * r100 = 42 * x3
- * r101 = r42 * x1
- */
- __pyx_v_r99 = (30.0 * __pyx_v_x3);
-
- /* "fontTools/pens/momentsPen.py":556
- * r98 = r83 * x3
- * r99 = 30 * x3
- * r100 = 42 * x3 # <<<<<<<<<<<<<<
- * r101 = r42 * x1
- * r102 = r10 * y2 + 14 * r14 + 126 * r18 * y1 + r81 * r99
- */
- __pyx_v_r100 = (42.0 * __pyx_v_x3);
-
- /* "fontTools/pens/momentsPen.py":557
- * r99 = 30 * x3
- * r100 = 42 * x3
- * r101 = r42 * x1 # <<<<<<<<<<<<<<
- * r102 = r10 * y2 + 14 * r14 + 126 * r18 * y1 + r81 * r99
- * r103 = 378 * r48
- */
- __pyx_v_r101 = (__pyx_v_r42 * __pyx_v_x1);
-
- /* "fontTools/pens/momentsPen.py":558
- * r100 = 42 * x3
- * r101 = r42 * x1
- * r102 = r10 * y2 + 14 * r14 + 126 * r18 * y1 + r81 * r99 # <<<<<<<<<<<<<<
- * r103 = 378 * r48
- * r104 = 18 * y1
- */
- __pyx_v_r102 = ((((__pyx_v_r10 * __pyx_v_y2) + (14.0 * __pyx_v_r14)) + ((126.0 * __pyx_v_r18) * __pyx_v_y1)) + (__pyx_v_r81 * __pyx_v_r99));
-
- /* "fontTools/pens/momentsPen.py":559
- * r101 = r42 * x1
- * r102 = r10 * y2 + 14 * r14 + 126 * r18 * y1 + r81 * r99
- * r103 = 378 * r48 # <<<<<<<<<<<<<<
- * r104 = 18 * y1
- * r105 = r104 * y2
- */
- __pyx_v_r103 = (378.0 * __pyx_v_r48);
-
- /* "fontTools/pens/momentsPen.py":560
- * r102 = r10 * y2 + 14 * r14 + 126 * r18 * y1 + r81 * r99
- * r103 = 378 * r48
- * r104 = 18 * y1 # <<<<<<<<<<<<<<
- * r105 = r104 * y2
- * r106 = y0 * y1
- */
- __pyx_v_r104 = (18.0 * __pyx_v_y1);
-
- /* "fontTools/pens/momentsPen.py":561
- * r103 = 378 * r48
- * r104 = 18 * y1
- * r105 = r104 * y2 # <<<<<<<<<<<<<<
- * r106 = y0 * y1
- * r107 = 252 * y2
- */
- __pyx_v_r105 = (__pyx_v_r104 * __pyx_v_y2);
-
- /* "fontTools/pens/momentsPen.py":562
- * r104 = 18 * y1
- * r105 = r104 * y2
- * r106 = y0 * y1 # <<<<<<<<<<<<<<
- * r107 = 252 * y2
- * r108 = r107 * y0
- */
- __pyx_v_r106 = (__pyx_v_y0 * __pyx_v_y1);
-
- /* "fontTools/pens/momentsPen.py":563
- * r105 = r104 * y2
- * r106 = y0 * y1
- * r107 = 252 * y2 # <<<<<<<<<<<<<<
- * r108 = r107 * y0
- * r109 = y0 * y3
- */
- __pyx_v_r107 = (252.0 * __pyx_v_y2);
-
- /* "fontTools/pens/momentsPen.py":564
- * r106 = y0 * y1
- * r107 = 252 * y2
- * r108 = r107 * y0 # <<<<<<<<<<<<<<
- * r109 = y0 * y3
- * r110 = 42 * r64
- */
- __pyx_v_r108 = (__pyx_v_r107 * __pyx_v_y0);
-
- /* "fontTools/pens/momentsPen.py":565
- * r107 = 252 * y2
- * r108 = r107 * y0
- * r109 = y0 * y3 # <<<<<<<<<<<<<<
- * r110 = 42 * r64
- * r111 = 378 * r53
- */
- __pyx_v_r109 = (__pyx_v_y0 * __pyx_v_y3);
-
- /* "fontTools/pens/momentsPen.py":566
- * r108 = r107 * y0
- * r109 = y0 * y3
- * r110 = 42 * r64 # <<<<<<<<<<<<<<
- * r111 = 378 * r53
- * r112 = 63 * r48
- */
- __pyx_v_r110 = (42.0 * __pyx_v_r64);
-
- /* "fontTools/pens/momentsPen.py":567
- * r109 = y0 * y3
- * r110 = 42 * r64
- * r111 = 378 * r53 # <<<<<<<<<<<<<<
- * r112 = 63 * r48
- * r113 = 27 * x2
- */
- __pyx_v_r111 = (378.0 * __pyx_v_r53);
-
- /* "fontTools/pens/momentsPen.py":568
- * r110 = 42 * r64
- * r111 = 378 * r53
- * r112 = 63 * r48 # <<<<<<<<<<<<<<
- * r113 = 27 * x2
- * r114 = r27 * y2
- */
- __pyx_v_r112 = (63.0 * __pyx_v_r48);
-
- /* "fontTools/pens/momentsPen.py":569
- * r111 = 378 * r53
- * r112 = 63 * r48
- * r113 = 27 * x2 # <<<<<<<<<<<<<<
- * r114 = r27 * y2
- * r115 = r113 * r48 + 42 * r52
- */
- __pyx_v_r113 = (27.0 * __pyx_v_x2);
-
- /* "fontTools/pens/momentsPen.py":570
- * r112 = 63 * r48
- * r113 = 27 * x2
- * r114 = r27 * y2 # <<<<<<<<<<<<<<
- * r115 = r113 * r48 + 42 * r52
- * r116 = x3 * y3
- */
- __pyx_v_r114 = (__pyx_v_r27 * __pyx_v_y2);
-
- /* "fontTools/pens/momentsPen.py":571
- * r113 = 27 * x2
- * r114 = r27 * y2
- * r115 = r113 * r48 + 42 * r52 # <<<<<<<<<<<<<<
- * r116 = x3 * y3
- * r117 = 54 * r42
- */
- __pyx_v_r115 = ((__pyx_v_r113 * __pyx_v_r48) + (42.0 * __pyx_v_r52));
-
- /* "fontTools/pens/momentsPen.py":572
- * r114 = r27 * y2
- * r115 = r113 * r48 + 42 * r52
- * r116 = x3 * y3 # <<<<<<<<<<<<<<
- * r117 = 54 * r42
- * r118 = r51 * x1
- */
- __pyx_v_r116 = (__pyx_v_x3 * __pyx_v_y3);
-
- /* "fontTools/pens/momentsPen.py":573
- * r115 = r113 * r48 + 42 * r52
- * r116 = x3 * y3
- * r117 = 54 * r42 # <<<<<<<<<<<<<<
- * r118 = r51 * x1
- * r119 = r51 * x2
- */
- __pyx_v_r117 = (54.0 * __pyx_v_r42);
-
- /* "fontTools/pens/momentsPen.py":574
- * r116 = x3 * y3
- * r117 = 54 * r42
- * r118 = r51 * x1 # <<<<<<<<<<<<<<
- * r119 = r51 * x2
- * r120 = r48 * x1
- */
- __pyx_v_r118 = (__pyx_v_r51 * __pyx_v_x1);
-
- /* "fontTools/pens/momentsPen.py":575
- * r117 = 54 * r42
- * r118 = r51 * x1
- * r119 = r51 * x2 # <<<<<<<<<<<<<<
- * r120 = r48 * x1
- * r121 = 21 * x3
- */
- __pyx_v_r119 = (__pyx_v_r51 * __pyx_v_x2);
-
- /* "fontTools/pens/momentsPen.py":576
- * r118 = r51 * x1
- * r119 = r51 * x2
- * r120 = r48 * x1 # <<<<<<<<<<<<<<
- * r121 = 21 * x3
- * r122 = r64 * x1
- */
- __pyx_v_r120 = (__pyx_v_r48 * __pyx_v_x1);
-
- /* "fontTools/pens/momentsPen.py":577
- * r119 = r51 * x2
- * r120 = r48 * x1
- * r121 = 21 * x3 # <<<<<<<<<<<<<<
- * r122 = r64 * x1
- * r123 = r81 * y3
- */
- __pyx_v_r121 = (21.0 * __pyx_v_x3);
-
- /* "fontTools/pens/momentsPen.py":578
- * r120 = r48 * x1
- * r121 = 21 * x3
- * r122 = r64 * x1 # <<<<<<<<<<<<<<
- * r123 = r81 * y3
- * r124 = 30 * r27 * y1 + r49 * x2 + 14 * r52 + 126 * r53 * x1
- */
- __pyx_v_r122 = (__pyx_v_r64 * __pyx_v_x1);
-
- /* "fontTools/pens/momentsPen.py":579
- * r121 = 21 * x3
- * r122 = r64 * x1
- * r123 = r81 * y3 # <<<<<<<<<<<<<<
- * r124 = 30 * r27 * y1 + r49 * x2 + 14 * r52 + 126 * r53 * x1
- * r125 = y2**3
- */
- __pyx_v_r123 = (__pyx_v_r81 * __pyx_v_y3);
-
- /* "fontTools/pens/momentsPen.py":580
- * r122 = r64 * x1
- * r123 = r81 * y3
- * r124 = 30 * r27 * y1 + r49 * x2 + 14 * r52 + 126 * r53 * x1 # <<<<<<<<<<<<<<
- * r125 = y2**3
- * r126 = y3**3
- */
- __pyx_v_r124 = (((((30.0 * __pyx_v_r27) * __pyx_v_y1) + (__pyx_v_r49 * __pyx_v_x2)) + (14.0 * __pyx_v_r52)) + ((126.0 * __pyx_v_r53) * __pyx_v_x1));
-
- /* "fontTools/pens/momentsPen.py":581
- * r123 = r81 * y3
- * r124 = 30 * r27 * y1 + r49 * x2 + 14 * r52 + 126 * r53 * x1
- * r125 = y2**3 # <<<<<<<<<<<<<<
- * r126 = y3**3
- * r127 = y1**3
- */
- __pyx_v_r125 = pow(__pyx_v_y2, 3.0);
-
- /* "fontTools/pens/momentsPen.py":582
- * r124 = 30 * r27 * y1 + r49 * x2 + 14 * r52 + 126 * r53 * x1
- * r125 = y2**3
- * r126 = y3**3 # <<<<<<<<<<<<<<
- * r127 = y1**3
- * r128 = y0**3
- */
- __pyx_v_r126 = pow(__pyx_v_y3, 3.0);
-
- /* "fontTools/pens/momentsPen.py":583
- * r125 = y2**3
- * r126 = y3**3
- * r127 = y1**3 # <<<<<<<<<<<<<<
- * r128 = y0**3
- * r129 = r51 * y2
- */
- __pyx_v_r127 = pow(__pyx_v_y1, 3.0);
-
- /* "fontTools/pens/momentsPen.py":584
- * r126 = y3**3
- * r127 = y1**3
- * r128 = y0**3 # <<<<<<<<<<<<<<
- * r129 = r51 * y2
- * r130 = r112 * y3 + r21 * r51
- */
- __pyx_v_r128 = pow(__pyx_v_y0, 3.0);
-
- /* "fontTools/pens/momentsPen.py":585
- * r127 = y1**3
- * r128 = y0**3
- * r129 = r51 * y2 # <<<<<<<<<<<<<<
- * r130 = r112 * y3 + r21 * r51
- * r131 = 189 * r53
- */
- __pyx_v_r129 = (__pyx_v_r51 * __pyx_v_y2);
-
- /* "fontTools/pens/momentsPen.py":586
- * r128 = y0**3
- * r129 = r51 * y2
- * r130 = r112 * y3 + r21 * r51 # <<<<<<<<<<<<<<
- * r131 = 189 * r53
- * r132 = 90 * y2
- */
- __pyx_v_r130 = ((__pyx_v_r112 * __pyx_v_y3) + (__pyx_v_r21 * __pyx_v_r51));
-
- /* "fontTools/pens/momentsPen.py":587
- * r129 = r51 * y2
- * r130 = r112 * y3 + r21 * r51
- * r131 = 189 * r53 # <<<<<<<<<<<<<<
- * r132 = 90 * y2
- *
- */
- __pyx_v_r131 = (189.0 * __pyx_v_r53);
-
- /* "fontTools/pens/momentsPen.py":588
- * r130 = r112 * y3 + r21 * r51
- * r131 = 189 * r53
- * r132 = 90 * y2 # <<<<<<<<<<<<<<
- *
- * self.area += (
- */
- __pyx_v_r132 = (90.0 * __pyx_v_y2);
-
- /* "fontTools/pens/momentsPen.py":590
- * r132 = 90 * y2
- *
- * self.area += ( # <<<<<<<<<<<<<<
- * -r1 / 20
- * - r3 / 20
- */
- __pyx_t_3 = __Pyx_PyObject_GetAttrStr(__pyx_v_self, __pyx_n_s_area); if (unlikely(!__pyx_t_3)) __PYX_ERR(0, 590, __pyx_L1_error)
- __Pyx_GOTREF(__pyx_t_3);
-
- /* "fontTools/pens/momentsPen.py":597
- * + 3 * x1 * (y2 + y3) / 20
- * + 3 * x2 * y3 / 10
- * - y0 * (r5 + r6 + x3) / 20 # <<<<<<<<<<<<<<
- * )
- * self.momentX += (
- */
- __pyx_t_1 = PyFloat_FromDouble(((((((((-__pyx_v_r1) / 20.0) - (__pyx_v_r3 / 20.0)) - ((__pyx_v_r4 * (__pyx_v_x2 + __pyx_v_x3)) / 20.0)) + ((__pyx_v_x0 * (((__pyx_v_r7 + __pyx_v_r8) + (10.0 * __pyx_v_y0)) + __pyx_v_y3)) / 20.0)) + (((3.0 * __pyx_v_x1) * (__pyx_v_y2 + __pyx_v_y3)) / 20.0)) + (((3.0 * __pyx_v_x2) * __pyx_v_y3) / 10.0)) - ((__pyx_v_y0 * ((__pyx_v_r5 + __pyx_v_r6) + __pyx_v_x3)) / 20.0))); if (unlikely(!__pyx_t_1)) __PYX_ERR(0, 597, __pyx_L1_error)
- __Pyx_GOTREF(__pyx_t_1);
-
- /* "fontTools/pens/momentsPen.py":590
- * r132 = 90 * y2
- *
- * self.area += ( # <<<<<<<<<<<<<<
- * -r1 / 20
- * - r3 / 20
- */
- __pyx_t_2 = PyNumber_InPlaceAdd(__pyx_t_3, __pyx_t_1); if (unlikely(!__pyx_t_2)) __PYX_ERR(0, 590, __pyx_L1_error)
- __Pyx_GOTREF(__pyx_t_2);
- __Pyx_DECREF(__pyx_t_3); __pyx_t_3 = 0;
- __Pyx_DECREF(__pyx_t_1); __pyx_t_1 = 0;
- if (__Pyx_PyObject_SetAttrStr(__pyx_v_self, __pyx_n_s_area, __pyx_t_2) < 0) __PYX_ERR(0, 590, __pyx_L1_error)
- __Pyx_DECREF(__pyx_t_2); __pyx_t_2 = 0;
-
- /* "fontTools/pens/momentsPen.py":599
- * - y0 * (r5 + r6 + x3) / 20
- * )
- * self.momentX += ( # <<<<<<<<<<<<<<
- * r11 / 840
- * - r13 / 8
- */
- __pyx_t_2 = __Pyx_PyObject_GetAttrStr(__pyx_v_self, __pyx_n_s_momentX); if (unlikely(!__pyx_t_2)) __PYX_ERR(0, 599, __pyx_L1_error)
- __Pyx_GOTREF(__pyx_t_2);
-
- /* "fontTools/pens/momentsPen.py":621
- * )
- * / 840
- * - y0 * (r17 + r30 * x2 + r31 * x1 + r32 + r33 + 18 * r9) / 840 # <<<<<<<<<<<<<<
- * )
- * self.momentY += (
- */
- __pyx_t_1 = PyFloat_FromDouble(((((((((((__pyx_v_r11 / 840.0) - (__pyx_v_r13 / 8.0)) - (__pyx_v_r14 / 3.0)) - ((__pyx_v_r17 * ((-__pyx_v_r15) + __pyx_v_r8)) / 840.0)) + ((__pyx_v_r19 * (__pyx_v_r8 + (2.0 * __pyx_v_y3))) / 840.0)) + ((__pyx_v_r20 * (((__pyx_v_r0 + __pyx_v_r21) + (56.0 * __pyx_v_y0)) + __pyx_v_y3)) / 168.0)) + ((__pyx_v_r29 * (((-__pyx_v_r23) + __pyx_v_r25) + __pyx_v_r28)) / 840.0)) - ((__pyx_v_r4 * (((10.0 * __pyx_v_r12) + __pyx_v_r17) + __pyx_v_r22)) / 840.0)) + ((__pyx_v_x0 * (((((((((12.0 * __pyx_v_r27) + (__pyx_v_r30 * __pyx_v_y2)) + __pyx_v_r34) - (__pyx_v_r35 * __pyx_v_x1)) - __pyx_v_r37) - (__pyx_v_r38 * __pyx_v_y0)) + (__pyx_v_r39 * __pyx_v_x1)) - (__pyx_v_r4 * __pyx_v_x3)) + __pyx_v_r45)) / 840.0)) - ((__pyx_v_y0 * (((((__pyx_v_r17 + (__pyx_v_r30 * __pyx_v_x2)) + (__pyx_v_r31 * __pyx_v_x1)) + __pyx_v_r32) + __pyx_v_r33) + (18.0 * __pyx_v_r9))) / 840.0))); if (unlikely(!__pyx_t_1)) __PYX_ERR(0, 621, __pyx_L1_error)
- __Pyx_GOTREF(__pyx_t_1);
-
- /* "fontTools/pens/momentsPen.py":599
- * - y0 * (r5 + r6 + x3) / 20
- * )
- * self.momentX += ( # <<<<<<<<<<<<<<
- * r11 / 840
- * - r13 / 8
- */
- __pyx_t_3 = PyNumber_InPlaceAdd(__pyx_t_2, __pyx_t_1); if (unlikely(!__pyx_t_3)) __PYX_ERR(0, 599, __pyx_L1_error)
- __Pyx_GOTREF(__pyx_t_3);
- __Pyx_DECREF(__pyx_t_2); __pyx_t_2 = 0;
- __Pyx_DECREF(__pyx_t_1); __pyx_t_1 = 0;
- if (__Pyx_PyObject_SetAttrStr(__pyx_v_self, __pyx_n_s_momentX, __pyx_t_3) < 0) __PYX_ERR(0, 599, __pyx_L1_error)
- __Pyx_DECREF(__pyx_t_3); __pyx_t_3 = 0;
-
- /* "fontTools/pens/momentsPen.py":623
- * - y0 * (r17 + r30 * x2 + r31 * x1 + r32 + r33 + 18 * r9) / 840
- * )
- * self.momentY += ( # <<<<<<<<<<<<<<
- * -r4 * (r25 + r58) / 840
- * - r47 / 8
- */
- __pyx_t_3 = __Pyx_PyObject_GetAttrStr(__pyx_v_self, __pyx_n_s_momentY); if (unlikely(!__pyx_t_3)) __PYX_ERR(0, 623, __pyx_L1_error)
- __Pyx_GOTREF(__pyx_t_3);
-
- /* "fontTools/pens/momentsPen.py":646
- * + x1 * (r24 * y1 + 10 * r51 + r59 + r60 + r7 * y3) / 280
- * + x2 * y3 * (r15 + r8) / 56
- * - y0 * (r16 * y1 + r31 * y2 + r44 * x2 + r45 + r61 - r62 * x1) / 840 # <<<<<<<<<<<<<<
- * )
- * self.momentXX += (
- */
- __pyx_t_1 = PyFloat_FromDouble(((((((((((((-__pyx_v_r4) * (__pyx_v_r25 + __pyx_v_r58)) / 840.0) - (__pyx_v_r47 / 8.0)) - (__pyx_v_r50 / 840.0)) - (__pyx_v_r52 / 6.0)) - ((__pyx_v_r54 * (__pyx_v_r6 + (2.0 * __pyx_v_x3))) / 840.0)) - ((__pyx_v_r55 * ((__pyx_v_r56 + __pyx_v_r57) + __pyx_v_x3)) / 168.0)) + ((__pyx_v_x0 * ((((((((((__pyx_v_r35 * __pyx_v_y1) + (__pyx_v_r40 * __pyx_v_y0)) + (__pyx_v_r44 * __pyx_v_y2)) + (18.0 * __pyx_v_r48)) + (140.0 * __pyx_v_r55)) + __pyx_v_r59) + __pyx_v_r63) + (12.0 * __pyx_v_r64)) + __pyx_v_r65) + __pyx_v_r66)) / 840.0)) + ((__pyx_v_x1 * (((((__pyx_v_r24 * __pyx_v_y1) + (10.0 * __pyx_v_r51)) + __pyx_v_r59) + __pyx_v_r60) + (__pyx_v_r7 * __pyx_v_y3))) / 280.0)) + (((__pyx_v_x2 * __pyx_v_y3) * (__pyx_v_r15 + __pyx_v_r8)) / 56.0)) - ((__pyx_v_y0 * ((((((__pyx_v_r16 * __pyx_v_y1) + (__pyx_v_r31 * __pyx_v_y2)) + (__pyx_v_r44 * __pyx_v_x2)) + __pyx_v_r45) + __pyx_v_r61) - (__pyx_v_r62 * __pyx_v_x1))) / 840.0))); if (unlikely(!__pyx_t_1)) __PYX_ERR(0, 646, __pyx_L1_error)
- __Pyx_GOTREF(__pyx_t_1);
-
- /* "fontTools/pens/momentsPen.py":623
- * - y0 * (r17 + r30 * x2 + r31 * x1 + r32 + r33 + 18 * r9) / 840
- * )
- * self.momentY += ( # <<<<<<<<<<<<<<
- * -r4 * (r25 + r58) / 840
- * - r47 / 8
- */
- __pyx_t_2 = PyNumber_InPlaceAdd(__pyx_t_3, __pyx_t_1); if (unlikely(!__pyx_t_2)) __PYX_ERR(0, 623, __pyx_L1_error)
- __Pyx_GOTREF(__pyx_t_2);
- __Pyx_DECREF(__pyx_t_3); __pyx_t_3 = 0;
- __Pyx_DECREF(__pyx_t_1); __pyx_t_1 = 0;
- if (__Pyx_PyObject_SetAttrStr(__pyx_v_self, __pyx_n_s_momentY, __pyx_t_2) < 0) __PYX_ERR(0, 623, __pyx_L1_error)
- __Pyx_DECREF(__pyx_t_2); __pyx_t_2 = 0;
-
- /* "fontTools/pens/momentsPen.py":648
- * - y0 * (r16 * y1 + r31 * y2 + r44 * x2 + r45 + r61 - r62 * x1) / 840
- * )
- * self.momentXX += ( # <<<<<<<<<<<<<<
- * -r12 * r72 * (-r40 + r8) / 9240
- * + 3 * r18 * (r28 + r34 - r38 * y1 + r75) / 3080
- */
- __pyx_t_2 = __Pyx_PyObject_GetAttrStr(__pyx_v_self, __pyx_n_s_momentXX); if (unlikely(!__pyx_t_2)) __PYX_ERR(0, 648, __pyx_L1_error)
- __Pyx_GOTREF(__pyx_t_2);
-
- /* "fontTools/pens/momentsPen.py":706
- * )
- * / 9240
- * - y0 # <<<<<<<<<<<<<<
- * * (
- * r12 * r56
- */
- __pyx_t_1 = PyFloat_FromDouble(((((((((((((((((-__pyx_v_r12) * __pyx_v_r72) * ((-__pyx_v_r40) + __pyx_v_r8)) / 9240.0) + (((3.0 * __pyx_v_r18) * (((__pyx_v_r28 + __pyx_v_r34) - (__pyx_v_r38 * __pyx_v_y1)) + __pyx_v_r75)) / 3080.0)) + ((__pyx_v_r20 * (((((((((__pyx_v_r24 * __pyx_v_x3) - (__pyx_v_r72 * __pyx_v_y0)) - (__pyx_v_r76 * __pyx_v_y0)) - (__pyx_v_r77 * __pyx_v_y0)) + __pyx_v_r78) + (__pyx_v_r79 * __pyx_v_y3)) + (__pyx_v_r80 * __pyx_v_y1)) + (210.0 * __pyx_v_r81)) + __pyx_v_r84)) / 9240.0)) - ((__pyx_v_r29 * ((((((((__pyx_v_r12 * __pyx_v_r21) + (14.0 * __pyx_v_r13)) + (__pyx_v_r44 * __pyx_v_r9)) - (__pyx_v_r73 * __pyx_v_y3)) + (54.0 * __pyx_v_r86)) - (84.0 * __pyx_v_r87)) - __pyx_v_r89) - __pyx_v_r90)) / 9240.0)) - ((__pyx_v_r4 * (((((70.0 * __pyx_v_r12) * __pyx_v_x2) + (27.0 * __pyx_v_r67)) + (42.0 * __pyx_v_r68)) + __pyx_v_r74)) / 9240.0)) + (((3.0 * __pyx_v_r67) * __pyx_v_y3) / 220.0)) - ((__pyx_v_r68 * __pyx_v_r69) / 9240.0)) - ((__pyx_v_r68 * __pyx_v_y3) / 4.0)) - (((__pyx_v_r70 * __pyx_v_r9) * ((-__pyx_v_r62) + __pyx_v_y2)) / 9240.0)) + (((3.0 * __pyx_v_r71) * (__pyx_v_r24 + __pyx_v_r40)) / 3080.0)) + ((pow(__pyx_v_x0, 3.0) * (((__pyx_v_r24 + __pyx_v_r44) + (165.0 * __pyx_v_y0)) + __pyx_v_y3)) / 660.0)) + ((__pyx_v_x0 * (((((((((((((((((((__pyx_v_r100 * __pyx_v_r27) + (162.0 * __pyx_v_r101)) + __pyx_v_r102) + __pyx_v_r11) + ((63.0 * __pyx_v_r18) * __pyx_v_y3)) + (__pyx_v_r27 * __pyx_v_r91)) - (__pyx_v_r33 * __pyx_v_y0)) - (__pyx_v_r37 * __pyx_v_x3)) + (__pyx_v_r43 * __pyx_v_x3)) - (__pyx_v_r73 * __pyx_v_y0)) - (__pyx_v_r88 * __pyx_v_y1)) + (__pyx_v_r92 * __pyx_v_y2)) - (__pyx_v_r93 * __pyx_v_y0)) - (9.0 * __pyx_v_r94)) - (__pyx_v_r95 * __pyx_v_y0)) - (__pyx_v_r96 * __pyx_v_y0)) - (__pyx_v_r97 * __pyx_v_y1)) - (18.0 * __pyx_v_r98)) + ((__pyx_v_r99 * __pyx_v_x1) * __pyx_v_y3))) / 9240.0)) - ((__pyx_v_y0 * ((((((((((__pyx_v_r12 * __pyx_v_r56) + (__pyx_v_r12 * __pyx_v_r80)) + (__pyx_v_r32 * __pyx_v_x3)) + (45.0 * __pyx_v_r67)) + (14.0 * __pyx_v_r68)) + (126.0 * __pyx_v_r71)) + __pyx_v_r74) + (__pyx_v_r85 * __pyx_v_r91)) + ((135.0 * __pyx_v_r9) * __pyx_v_x1)) + (__pyx_v_r92 * __pyx_v_x2))) / 9240.0))); if (unlikely(!__pyx_t_1)) __PYX_ERR(0, 706, __pyx_L1_error)
- __Pyx_GOTREF(__pyx_t_1);
-
- /* "fontTools/pens/momentsPen.py":648
- * - y0 * (r16 * y1 + r31 * y2 + r44 * x2 + r45 + r61 - r62 * x1) / 840
- * )
- * self.momentXX += ( # <<<<<<<<<<<<<<
- * -r12 * r72 * (-r40 + r8) / 9240
- * + 3 * r18 * (r28 + r34 - r38 * y1 + r75) / 3080
- */
- __pyx_t_3 = PyNumber_InPlaceAdd(__pyx_t_2, __pyx_t_1); if (unlikely(!__pyx_t_3)) __PYX_ERR(0, 648, __pyx_L1_error)
- __Pyx_GOTREF(__pyx_t_3);
- __Pyx_DECREF(__pyx_t_2); __pyx_t_2 = 0;
- __Pyx_DECREF(__pyx_t_1); __pyx_t_1 = 0;
- if (__Pyx_PyObject_SetAttrStr(__pyx_v_self, __pyx_n_s_momentXX, __pyx_t_3) < 0) __PYX_ERR(0, 648, __pyx_L1_error)
- __Pyx_DECREF(__pyx_t_3); __pyx_t_3 = 0;
-
- /* "fontTools/pens/momentsPen.py":721
- * / 9240
- * )
- * self.momentXY += ( # <<<<<<<<<<<<<<
- * -r103 * r12 / 18480
- * - r12 * r51 / 8
- */
- __pyx_t_3 = __Pyx_PyObject_GetAttrStr(__pyx_v_self, __pyx_n_s_momentXY); if (unlikely(!__pyx_t_3)) __PYX_ERR(0, 721, __pyx_L1_error)
- __Pyx_GOTREF(__pyx_t_3);
-
- /* "fontTools/pens/momentsPen.py":783
- * )
- * / 3080
- * - y0 # <<<<<<<<<<<<<<
- * * (
- * 54 * r101
- */
- __pyx_t_1 = PyFloat_FromDouble((((((((((((((((-__pyx_v_r103) * __pyx_v_r12) / 18480.0) - ((__pyx_v_r12 * __pyx_v_r51) / 8.0)) - (((3.0 * __pyx_v_r14) * __pyx_v_y2) / 44.0)) + (((3.0 * __pyx_v_r18) * ((((__pyx_v_r105 + (__pyx_v_r2 * __pyx_v_y1)) + (18.0 * __pyx_v_r46)) + (15.0 * __pyx_v_r48)) + (7.0 * __pyx_v_r51))) / 6160.0)) + ((__pyx_v_r20 * ((((((((((1260.0 * __pyx_v_r106) + (__pyx_v_r107 * __pyx_v_y1)) + __pyx_v_r108) + (28.0 * __pyx_v_r109)) + __pyx_v_r110) + __pyx_v_r111) + __pyx_v_r112) + (30.0 * __pyx_v_r46)) + (2310.0 * __pyx_v_r55)) + __pyx_v_r66)) / 18480.0)) - ((__pyx_v_r54 * (((7.0 * __pyx_v_r12) + (18.0 * __pyx_v_r85)) + (15.0 * __pyx_v_r9))) / 18480.0)) - ((__pyx_v_r55 * (((((__pyx_v_r33 + __pyx_v_r73) + __pyx_v_r93) + __pyx_v_r95) + __pyx_v_r96) + __pyx_v_r97)) / 18480.0)) - ((__pyx_v_r7 * (((((42.0 * __pyx_v_r13) + (__pyx_v_r82 * __pyx_v_x3)) + (28.0 * __pyx_v_r87)) + __pyx_v_r89) + __pyx_v_r90)) / 18480.0)) - (((3.0 * __pyx_v_r85) * (__pyx_v_r48 - __pyx_v_r66)) / 220.0)) + ((((3.0 * __pyx_v_r9) * __pyx_v_y3) * (__pyx_v_r62 + (2.0 * __pyx_v_y2))) / 440.0)) + ((__pyx_v_x0 * (((((((((((((((((((((((-__pyx_v_r1) * __pyx_v_y0) - ((84.0 * __pyx_v_r106) * __pyx_v_x2)) + (__pyx_v_r109 * __pyx_v_r56)) + (54.0 * __pyx_v_r114)) + (__pyx_v_r117 * __pyx_v_y1)) + (15.0 * __pyx_v_r118)) + (21.0 * __pyx_v_r119)) + (81.0 * __pyx_v_r120)) + (__pyx_v_r121 * __pyx_v_r46)) + (54.0 * __pyx_v_r122)) + (60.0 * __pyx_v_r123)) + __pyx_v_r124) - ((__pyx_v_r21 * __pyx_v_x3) * __pyx_v_y0)) + (__pyx_v_r23 * __pyx_v_y3)) - (__pyx_v_r54 * __pyx_v_x3)) - (__pyx_v_r55 * __pyx_v_r72)) - (__pyx_v_r55 * __pyx_v_r76)) - (__pyx_v_r55 * __pyx_v_r77)) + ((__pyx_v_r57 * __pyx_v_y0) * __pyx_v_y3)) + (__pyx_v_r60 * __pyx_v_x3)) + ((84.0 * __pyx_v_r81) * __pyx_v_y0)) + ((189.0 * __pyx_v_r81) * __pyx_v_y1))) / 9240.0)) + ((__pyx_v_x1 * ((((((((__pyx_v_r104 * __pyx_v_r27) - (__pyx_v_r105 * __pyx_v_x3)) - (__pyx_v_r113 * __pyx_v_r53)) + (63.0 * __pyx_v_r114)) + __pyx_v_r115) - (__pyx_v_r16 * __pyx_v_r53)) + (28.0 * __pyx_v_r47)) + (__pyx_v_r51 * __pyx_v_r80))) / 3080.0)) - ((__pyx_v_y0 * (((((((((((((54.0 * __pyx_v_r101) + __pyx_v_r102) + (__pyx_v_r116 * __pyx_v_r5)) + (__pyx_v_r117 * __pyx_v_x3)) + (21.0 * __pyx_v_r13)) - (__pyx_v_r19 * __pyx_v_y3)) + (__pyx_v_r22 * __pyx_v_y3)) + (__pyx_v_r78 * __pyx_v_x3)) + ((189.0 * __pyx_v_r83) * __pyx_v_x2)) + (60.0 * __pyx_v_r86)) + ((81.0 * __pyx_v_r9) * __pyx_v_y1)) + (15.0 * __pyx_v_r94)) + (54.0 * __pyx_v_r98))) / 9240.0))); if (unlikely(!__pyx_t_1)) __PYX_ERR(0, 783, __pyx_L1_error)
- __Pyx_GOTREF(__pyx_t_1);
-
- /* "fontTools/pens/momentsPen.py":721
- * / 9240
- * )
- * self.momentXY += ( # <<<<<<<<<<<<<<
- * -r103 * r12 / 18480
- * - r12 * r51 / 8
- */
- __pyx_t_2 = PyNumber_InPlaceAdd(__pyx_t_3, __pyx_t_1); if (unlikely(!__pyx_t_2)) __PYX_ERR(0, 721, __pyx_L1_error)
- __Pyx_GOTREF(__pyx_t_2);
- __Pyx_DECREF(__pyx_t_3); __pyx_t_3 = 0;
- __Pyx_DECREF(__pyx_t_1); __pyx_t_1 = 0;
- if (__Pyx_PyObject_SetAttrStr(__pyx_v_self, __pyx_n_s_momentXY, __pyx_t_2) < 0) __PYX_ERR(0, 721, __pyx_L1_error)
- __Pyx_DECREF(__pyx_t_2); __pyx_t_2 = 0;
-
- /* "fontTools/pens/momentsPen.py":801
- * / 9240
- * )
- * self.momentYY += ( # <<<<<<<<<<<<<<
- * -r103 * r116 / 9240
- * - r125 * r70 / 9240
- */
- __pyx_t_2 = __Pyx_PyObject_GetAttrStr(__pyx_v_self, __pyx_n_s_momentYY); if (unlikely(!__pyx_t_2)) __PYX_ERR(0, 801, __pyx_L1_error)
- __Pyx_GOTREF(__pyx_t_2);
-
- /* "fontTools/pens/momentsPen.py":849
- * / 3080
- * + 3 * x2 * y3 * (r48 + r66 + r8 * y3) / 220
- * - y0 # <<<<<<<<<<<<<<
- * * (
- * r100 * r46
- */
- __pyx_t_1 = PyFloat_FromDouble((((((((((((((((-__pyx_v_r103) * __pyx_v_r116) / 9240.0) - ((__pyx_v_r125 * __pyx_v_r70) / 9240.0)) - ((__pyx_v_r126 * __pyx_v_x3) / 12.0)) - (((3.0 * __pyx_v_r127) * (__pyx_v_r26 + __pyx_v_r38)) / 3080.0)) - ((__pyx_v_r128 * ((__pyx_v_r26 + __pyx_v_r30) + __pyx_v_x3)) / 660.0)) - ((__pyx_v_r4 * ((((__pyx_v_r112 * __pyx_v_x3) + __pyx_v_r115) - (14.0 * __pyx_v_r119)) + (84.0 * __pyx_v_r47))) / 9240.0)) - ((__pyx_v_r52 * __pyx_v_r69) / 9240.0)) - ((__pyx_v_r54 * ((__pyx_v_r58 + __pyx_v_r61) + __pyx_v_r75)) / 9240.0)) - ((__pyx_v_r55 * ((((((__pyx_v_r100 * __pyx_v_y1) + (__pyx_v_r121 * __pyx_v_y2)) + (__pyx_v_r26 * __pyx_v_y3)) + (__pyx_v_r79 * __pyx_v_y2)) + __pyx_v_r84) + ((210.0 * __pyx_v_x2) * __pyx_v_y1))) / 9240.0)) + ((__pyx_v_x0 * (((((((((((((((((((__pyx_v_r108 * __pyx_v_y1) + (__pyx_v_r110 * __pyx_v_y0)) + (__pyx_v_r111 * __pyx_v_y0)) + (__pyx_v_r112 * __pyx_v_y0)) + (45.0 * __pyx_v_r125)) + (14.0 * __pyx_v_r126)) + (126.0 * __pyx_v_r127)) + (770.0 * __pyx_v_r128)) + (42.0 * __pyx_v_r129)) + __pyx_v_r130) + (__pyx_v_r131 * __pyx_v_y2)) + (__pyx_v_r132 * __pyx_v_r64)) + ((135.0 * __pyx_v_r48) * __pyx_v_y1)) + ((630.0 * __pyx_v_r55) * __pyx_v_y1)) + ((126.0 * __pyx_v_r55) * __pyx_v_y2)) + ((14.0 * __pyx_v_r55) * __pyx_v_y3)) + (__pyx_v_r63 * __pyx_v_y3)) + (__pyx_v_r65 * __pyx_v_y3)) + (__pyx_v_r66 * __pyx_v_y0))) / 9240.0)) + ((__pyx_v_x1 * ((((((((27.0 * __pyx_v_r125) + (42.0 * __pyx_v_r126)) + (70.0 * __pyx_v_r129)) + __pyx_v_r130) + (__pyx_v_r39 * __pyx_v_r53)) + (__pyx_v_r44 * __pyx_v_r48)) + ((27.0 * __pyx_v_r53) * __pyx_v_y2)) + ((54.0 * __pyx_v_r64) * __pyx_v_y2))) / 3080.0)) + ((((3.0 * __pyx_v_x2) * __pyx_v_y3) * ((__pyx_v_r48 + __pyx_v_r66) + (__pyx_v_r8 * __pyx_v_y3))) / 220.0)) - ((__pyx_v_y0 * (((((((((((((__pyx_v_r100 * __pyx_v_r46) + (18.0 * __pyx_v_r114)) - (9.0 * __pyx_v_r118)) - (27.0 * __pyx_v_r120)) - (18.0 * __pyx_v_r122)) - (30.0 * __pyx_v_r123)) + __pyx_v_r124) + (__pyx_v_r131 * __pyx_v_x2)) + ((__pyx_v_r132 * __pyx_v_x3) * __pyx_v_y1)) + ((162.0 * __pyx_v_r42) * __pyx_v_y1)) + __pyx_v_r50) + ((63.0 * __pyx_v_r53) * __pyx_v_x3)) + (__pyx_v_r64 * __pyx_v_r99))) / 9240.0))); if (unlikely(!__pyx_t_1)) __PYX_ERR(0, 849, __pyx_L1_error)
- __Pyx_GOTREF(__pyx_t_1);
-
- /* "fontTools/pens/momentsPen.py":801
- * / 9240
- * )
- * self.momentYY += ( # <<<<<<<<<<<<<<
- * -r103 * r116 / 9240
- * - r125 * r70 / 9240
- */
- __pyx_t_3 = PyNumber_InPlaceAdd(__pyx_t_2, __pyx_t_1); if (unlikely(!__pyx_t_3)) __PYX_ERR(0, 801, __pyx_L1_error)
- __Pyx_GOTREF(__pyx_t_3);
- __Pyx_DECREF(__pyx_t_2); __pyx_t_2 = 0;
- __Pyx_DECREF(__pyx_t_1); __pyx_t_1 = 0;
- if (__Pyx_PyObject_SetAttrStr(__pyx_v_self, __pyx_n_s_momentYY, __pyx_t_3) < 0) __PYX_ERR(0, 801, __pyx_L1_error)
- __Pyx_DECREF(__pyx_t_3); __pyx_t_3 = 0;
-
- /* "fontTools/pens/momentsPen.py":450
- * @cython.locals(x2=cython.double, y2=cython.double)
- * @cython.locals(x3=cython.double, y3=cython.double)
- * def _curveToOne(self, p1, p2, p3): # <<<<<<<<<<<<<<
- * x0, y0 = self._getCurrentPoint()
- * x1, y1 = p1
- */
-
- /* function exit code */
- __pyx_r = Py_None; __Pyx_INCREF(Py_None);
- goto __pyx_L0;
- __pyx_L1_error:;
- __Pyx_XDECREF(__pyx_t_1);
- __Pyx_XDECREF(__pyx_t_2);
- __Pyx_XDECREF(__pyx_t_3);
- __Pyx_XDECREF(__pyx_t_4);
- __Pyx_AddTraceback("fontTools.pens.momentsPen.MomentsPen._curveToOne", __pyx_clineno, __pyx_lineno, __pyx_filename);
- __pyx_r = NULL;
- __pyx_L0:;
- __Pyx_XGIVEREF(__pyx_r);
- __Pyx_RefNannyFinishContext();
- return __pyx_r;
-}
-
-static PyMethodDef __pyx_methods[] = {
- {0, 0, 0, 0}
-};
-
-#if PY_MAJOR_VERSION >= 3
-#if CYTHON_PEP489_MULTI_PHASE_INIT
-static PyObject* __pyx_pymod_create(PyObject *spec, PyModuleDef *def); /*proto*/
-static int __pyx_pymod_exec_momentsPen(PyObject* module); /*proto*/
-static PyModuleDef_Slot __pyx_moduledef_slots[] = {
- {Py_mod_create, (void*)__pyx_pymod_create},
- {Py_mod_exec, (void*)__pyx_pymod_exec_momentsPen},
- {0, NULL}
-};
-#endif
-
-static struct PyModuleDef __pyx_moduledef = {
- PyModuleDef_HEAD_INIT,
- "momentsPen",
- 0, /* m_doc */
- #if CYTHON_PEP489_MULTI_PHASE_INIT
- 0, /* m_size */
- #else
- -1, /* m_size */
- #endif
- __pyx_methods /* m_methods */,
- #if CYTHON_PEP489_MULTI_PHASE_INIT
- __pyx_moduledef_slots, /* m_slots */
- #else
- NULL, /* m_reload */
- #endif
- NULL, /* m_traverse */
- NULL, /* m_clear */
- NULL /* m_free */
-};
-#endif
-#ifndef CYTHON_SMALL_CODE
-#if defined(__clang__)
- #define CYTHON_SMALL_CODE
-#elif defined(__GNUC__) && (__GNUC__ > 4 || (__GNUC__ == 4 && __GNUC_MINOR__ >= 3))
- #define CYTHON_SMALL_CODE __attribute__((cold))
-#else
- #define CYTHON_SMALL_CODE
-#endif
-#endif
-
-static __Pyx_StringTabEntry __pyx_string_tab[] = {
- {&__pyx_n_s_AttributeError, __pyx_k_AttributeError, sizeof(__pyx_k_AttributeError), 0, 0, 1, 1},
- {&__pyx_n_s_BasePen, __pyx_k_BasePen, sizeof(__pyx_k_BasePen), 0, 0, 1, 1},
- {&__pyx_n_s_COMPILED, __pyx_k_COMPILED, sizeof(__pyx_k_COMPILED), 0, 0, 1, 1},
- {&__pyx_kp_u_Green_theorem_is_not_defined_on, __pyx_k_Green_theorem_is_not_defined_on, sizeof(__pyx_k_Green_theorem_is_not_defined_on), 0, 1, 0, 0},
- {&__pyx_n_s_ImportError, __pyx_k_ImportError, sizeof(__pyx_k_ImportError), 0, 0, 1, 1},
- {&__pyx_kp_s_Lib_fontTools_pens_momentsPen_py, __pyx_k_Lib_fontTools_pens_momentsPen_py, sizeof(__pyx_k_Lib_fontTools_pens_momentsPen_py), 0, 0, 1, 0},
- {&__pyx_n_s_MomentsPen, __pyx_k_MomentsPen, sizeof(__pyx_k_MomentsPen), 0, 0, 1, 1},
- {&__pyx_n_u_MomentsPen, __pyx_k_MomentsPen, sizeof(__pyx_k_MomentsPen), 0, 1, 0, 1},
- {&__pyx_n_s_MomentsPen___init, __pyx_k_MomentsPen___init, sizeof(__pyx_k_MomentsPen___init), 0, 0, 1, 1},
- {&__pyx_n_s_MomentsPen__closePath, __pyx_k_MomentsPen__closePath, sizeof(__pyx_k_MomentsPen__closePath), 0, 0, 1, 1},
- {&__pyx_n_s_MomentsPen__curveToOne, __pyx_k_MomentsPen__curveToOne, sizeof(__pyx_k_MomentsPen__curveToOne), 0, 0, 1, 1},
- {&__pyx_n_s_MomentsPen__endPath, __pyx_k_MomentsPen__endPath, sizeof(__pyx_k_MomentsPen__endPath), 0, 0, 1, 1},
- {&__pyx_n_s_MomentsPen__lineTo, __pyx_k_MomentsPen__lineTo, sizeof(__pyx_k_MomentsPen__lineTo), 0, 0, 1, 1},
- {&__pyx_n_s_MomentsPen__moveTo, __pyx_k_MomentsPen__moveTo, sizeof(__pyx_k_MomentsPen__moveTo), 0, 0, 1, 1},
- {&__pyx_n_s_MomentsPen__qCurveToOne, __pyx_k_MomentsPen__qCurveToOne, sizeof(__pyx_k_MomentsPen__qCurveToOne), 0, 0, 1, 1},
- {&__pyx_n_s_MomentsPen__startPoint, __pyx_k_MomentsPen__startPoint, sizeof(__pyx_k_MomentsPen__startPoint), 0, 0, 1, 1},
- {&__pyx_n_s_OpenContourError, __pyx_k_OpenContourError, sizeof(__pyx_k_OpenContourError), 0, 0, 1, 1},
- {&__pyx_n_s_all, __pyx_k_all, sizeof(__pyx_k_all), 0, 0, 1, 1},
- {&__pyx_n_s_area, __pyx_k_area, sizeof(__pyx_k_area), 0, 0, 1, 1},
- {&__pyx_n_u_area, __pyx_k_area, sizeof(__pyx_k_area), 0, 1, 0, 1},
- {&__pyx_n_s_cline_in_traceback, __pyx_k_cline_in_traceback, sizeof(__pyx_k_cline_in_traceback), 0, 0, 1, 1},
- {&__pyx_n_s_closePath, __pyx_k_closePath, sizeof(__pyx_k_closePath), 0, 0, 1, 1},
- {&__pyx_n_s_curveToOne, __pyx_k_curveToOne, sizeof(__pyx_k_curveToOne), 0, 0, 1, 1},
- {&__pyx_n_s_cython, __pyx_k_cython, sizeof(__pyx_k_cython), 0, 0, 1, 1},
- {&__pyx_n_s_doc, __pyx_k_doc, sizeof(__pyx_k_doc), 0, 0, 1, 1},
- {&__pyx_n_s_endPath, __pyx_k_endPath, sizeof(__pyx_k_endPath), 0, 0, 1, 1},
- {&__pyx_n_s_fontTools_misc, __pyx_k_fontTools_misc, sizeof(__pyx_k_fontTools_misc), 0, 0, 1, 1},
- {&__pyx_n_s_fontTools_misc_symfont, __pyx_k_fontTools_misc_symfont, sizeof(__pyx_k_fontTools_misc_symfont), 0, 0, 1, 1},
- {&__pyx_n_s_fontTools_pens_basePen, __pyx_k_fontTools_pens_basePen, sizeof(__pyx_k_fontTools_pens_basePen), 0, 0, 1, 1},
- {&__pyx_n_s_fontTools_pens_momentsPen, __pyx_k_fontTools_pens_momentsPen, sizeof(__pyx_k_fontTools_pens_momentsPen), 0, 0, 1, 1},
- {&__pyx_n_s_getCurrentPoint, __pyx_k_getCurrentPoint, sizeof(__pyx_k_getCurrentPoint), 0, 0, 1, 1},
- {&__pyx_n_s_glyphset, __pyx_k_glyphset, sizeof(__pyx_k_glyphset), 0, 0, 1, 1},
- {&__pyx_n_s_import, __pyx_k_import, sizeof(__pyx_k_import), 0, 0, 1, 1},
- {&__pyx_n_s_init, __pyx_k_init, sizeof(__pyx_k_init), 0, 0, 1, 1},
- {&__pyx_n_s_lineTo, __pyx_k_lineTo, sizeof(__pyx_k_lineTo), 0, 0, 1, 1},
- {&__pyx_n_s_main, __pyx_k_main, sizeof(__pyx_k_main), 0, 0, 1, 1},
- {&__pyx_n_u_main, __pyx_k_main, sizeof(__pyx_k_main), 0, 1, 0, 1},
- {&__pyx_n_s_metaclass, __pyx_k_metaclass, sizeof(__pyx_k_metaclass), 0, 0, 1, 1},
- {&__pyx_n_s_module, __pyx_k_module, sizeof(__pyx_k_module), 0, 0, 1, 1},
- {&__pyx_n_s_momentX, __pyx_k_momentX, sizeof(__pyx_k_momentX), 0, 0, 1, 1},
- {&__pyx_n_u_momentX, __pyx_k_momentX, sizeof(__pyx_k_momentX), 0, 1, 0, 1},
- {&__pyx_n_s_momentXX, __pyx_k_momentXX, sizeof(__pyx_k_momentXX), 0, 0, 1, 1},
- {&__pyx_n_u_momentXX, __pyx_k_momentXX, sizeof(__pyx_k_momentXX), 0, 1, 0, 1},
- {&__pyx_n_s_momentXY, __pyx_k_momentXY, sizeof(__pyx_k_momentXY), 0, 0, 1, 1},
- {&__pyx_n_u_momentXY, __pyx_k_momentXY, sizeof(__pyx_k_momentXY), 0, 1, 0, 1},
- {&__pyx_n_s_momentY, __pyx_k_momentY, sizeof(__pyx_k_momentY), 0, 0, 1, 1},
- {&__pyx_n_u_momentY, __pyx_k_momentY, sizeof(__pyx_k_momentY), 0, 1, 0, 1},
- {&__pyx_n_s_momentYY, __pyx_k_momentYY, sizeof(__pyx_k_momentYY), 0, 0, 1, 1},
- {&__pyx_n_u_momentYY, __pyx_k_momentYY, sizeof(__pyx_k_momentYY), 0, 1, 0, 1},
- {&__pyx_n_s_moveTo, __pyx_k_moveTo, sizeof(__pyx_k_moveTo), 0, 0, 1, 1},
- {&__pyx_n_s_name, __pyx_k_name, sizeof(__pyx_k_name), 0, 0, 1, 1},
- {&__pyx_n_s_p0, __pyx_k_p0, sizeof(__pyx_k_p0), 0, 0, 1, 1},
- {&__pyx_n_s_p1, __pyx_k_p1, sizeof(__pyx_k_p1), 0, 0, 1, 1},
- {&__pyx_n_s_p2, __pyx_k_p2, sizeof(__pyx_k_p2), 0, 0, 1, 1},
- {&__pyx_n_s_p3, __pyx_k_p3, sizeof(__pyx_k_p3), 0, 0, 1, 1},
- {&__pyx_n_s_prepare, __pyx_k_prepare, sizeof(__pyx_k_prepare), 0, 0, 1, 1},
- {&__pyx_n_s_printGreenPen, __pyx_k_printGreenPen, sizeof(__pyx_k_printGreenPen), 0, 0, 1, 1},
- {&__pyx_n_s_qCurveToOne, __pyx_k_qCurveToOne, sizeof(__pyx_k_qCurveToOne), 0, 0, 1, 1},
- {&__pyx_n_s_qualname, __pyx_k_qualname, sizeof(__pyx_k_qualname), 0, 0, 1, 1},
- {&__pyx_n_s_r0, __pyx_k_r0, sizeof(__pyx_k_r0), 0, 0, 1, 1},
- {&__pyx_n_s_r1, __pyx_k_r1, sizeof(__pyx_k_r1), 0, 0, 1, 1},
- {&__pyx_n_s_r10, __pyx_k_r10, sizeof(__pyx_k_r10), 0, 0, 1, 1},
- {&__pyx_n_s_r100, __pyx_k_r100, sizeof(__pyx_k_r100), 0, 0, 1, 1},
- {&__pyx_n_s_r101, __pyx_k_r101, sizeof(__pyx_k_r101), 0, 0, 1, 1},
- {&__pyx_n_s_r102, __pyx_k_r102, sizeof(__pyx_k_r102), 0, 0, 1, 1},
- {&__pyx_n_s_r103, __pyx_k_r103, sizeof(__pyx_k_r103), 0, 0, 1, 1},
- {&__pyx_n_s_r104, __pyx_k_r104, sizeof(__pyx_k_r104), 0, 0, 1, 1},
- {&__pyx_n_s_r105, __pyx_k_r105, sizeof(__pyx_k_r105), 0, 0, 1, 1},
- {&__pyx_n_s_r106, __pyx_k_r106, sizeof(__pyx_k_r106), 0, 0, 1, 1},
- {&__pyx_n_s_r107, __pyx_k_r107, sizeof(__pyx_k_r107), 0, 0, 1, 1},
- {&__pyx_n_s_r108, __pyx_k_r108, sizeof(__pyx_k_r108), 0, 0, 1, 1},
- {&__pyx_n_s_r109, __pyx_k_r109, sizeof(__pyx_k_r109), 0, 0, 1, 1},
- {&__pyx_n_s_r11, __pyx_k_r11, sizeof(__pyx_k_r11), 0, 0, 1, 1},
- {&__pyx_n_s_r110, __pyx_k_r110, sizeof(__pyx_k_r110), 0, 0, 1, 1},
- {&__pyx_n_s_r111, __pyx_k_r111, sizeof(__pyx_k_r111), 0, 0, 1, 1},
- {&__pyx_n_s_r112, __pyx_k_r112, sizeof(__pyx_k_r112), 0, 0, 1, 1},
- {&__pyx_n_s_r113, __pyx_k_r113, sizeof(__pyx_k_r113), 0, 0, 1, 1},
- {&__pyx_n_s_r114, __pyx_k_r114, sizeof(__pyx_k_r114), 0, 0, 1, 1},
- {&__pyx_n_s_r115, __pyx_k_r115, sizeof(__pyx_k_r115), 0, 0, 1, 1},
- {&__pyx_n_s_r116, __pyx_k_r116, sizeof(__pyx_k_r116), 0, 0, 1, 1},
- {&__pyx_n_s_r117, __pyx_k_r117, sizeof(__pyx_k_r117), 0, 0, 1, 1},
- {&__pyx_n_s_r118, __pyx_k_r118, sizeof(__pyx_k_r118), 0, 0, 1, 1},
- {&__pyx_n_s_r119, __pyx_k_r119, sizeof(__pyx_k_r119), 0, 0, 1, 1},
- {&__pyx_n_s_r12, __pyx_k_r12, sizeof(__pyx_k_r12), 0, 0, 1, 1},
- {&__pyx_n_s_r120, __pyx_k_r120, sizeof(__pyx_k_r120), 0, 0, 1, 1},
- {&__pyx_n_s_r121, __pyx_k_r121, sizeof(__pyx_k_r121), 0, 0, 1, 1},
- {&__pyx_n_s_r122, __pyx_k_r122, sizeof(__pyx_k_r122), 0, 0, 1, 1},
- {&__pyx_n_s_r123, __pyx_k_r123, sizeof(__pyx_k_r123), 0, 0, 1, 1},
- {&__pyx_n_s_r124, __pyx_k_r124, sizeof(__pyx_k_r124), 0, 0, 1, 1},
- {&__pyx_n_s_r125, __pyx_k_r125, sizeof(__pyx_k_r125), 0, 0, 1, 1},
- {&__pyx_n_s_r126, __pyx_k_r126, sizeof(__pyx_k_r126), 0, 0, 1, 1},
- {&__pyx_n_s_r127, __pyx_k_r127, sizeof(__pyx_k_r127), 0, 0, 1, 1},
- {&__pyx_n_s_r128, __pyx_k_r128, sizeof(__pyx_k_r128), 0, 0, 1, 1},
- {&__pyx_n_s_r129, __pyx_k_r129, sizeof(__pyx_k_r129), 0, 0, 1, 1},
- {&__pyx_n_s_r13, __pyx_k_r13, sizeof(__pyx_k_r13), 0, 0, 1, 1},
- {&__pyx_n_s_r130, __pyx_k_r130, sizeof(__pyx_k_r130), 0, 0, 1, 1},
- {&__pyx_n_s_r131, __pyx_k_r131, sizeof(__pyx_k_r131), 0, 0, 1, 1},
- {&__pyx_n_s_r132, __pyx_k_r132, sizeof(__pyx_k_r132), 0, 0, 1, 1},
- {&__pyx_n_s_r14, __pyx_k_r14, sizeof(__pyx_k_r14), 0, 0, 1, 1},
- {&__pyx_n_s_r15, __pyx_k_r15, sizeof(__pyx_k_r15), 0, 0, 1, 1},
- {&__pyx_n_s_r16, __pyx_k_r16, sizeof(__pyx_k_r16), 0, 0, 1, 1},
- {&__pyx_n_s_r17, __pyx_k_r17, sizeof(__pyx_k_r17), 0, 0, 1, 1},
- {&__pyx_n_s_r18, __pyx_k_r18, sizeof(__pyx_k_r18), 0, 0, 1, 1},
- {&__pyx_n_s_r19, __pyx_k_r19, sizeof(__pyx_k_r19), 0, 0, 1, 1},
- {&__pyx_n_s_r2, __pyx_k_r2, sizeof(__pyx_k_r2), 0, 0, 1, 1},
- {&__pyx_n_s_r20, __pyx_k_r20, sizeof(__pyx_k_r20), 0, 0, 1, 1},
- {&__pyx_n_s_r21, __pyx_k_r21, sizeof(__pyx_k_r21), 0, 0, 1, 1},
- {&__pyx_n_s_r22, __pyx_k_r22, sizeof(__pyx_k_r22), 0, 0, 1, 1},
- {&__pyx_n_s_r23, __pyx_k_r23, sizeof(__pyx_k_r23), 0, 0, 1, 1},
- {&__pyx_n_s_r24, __pyx_k_r24, sizeof(__pyx_k_r24), 0, 0, 1, 1},
- {&__pyx_n_s_r25, __pyx_k_r25, sizeof(__pyx_k_r25), 0, 0, 1, 1},
- {&__pyx_n_s_r26, __pyx_k_r26, sizeof(__pyx_k_r26), 0, 0, 1, 1},
- {&__pyx_n_s_r27, __pyx_k_r27, sizeof(__pyx_k_r27), 0, 0, 1, 1},
- {&__pyx_n_s_r28, __pyx_k_r28, sizeof(__pyx_k_r28), 0, 0, 1, 1},
- {&__pyx_n_s_r29, __pyx_k_r29, sizeof(__pyx_k_r29), 0, 0, 1, 1},
- {&__pyx_n_s_r3, __pyx_k_r3, sizeof(__pyx_k_r3), 0, 0, 1, 1},
- {&__pyx_n_s_r30, __pyx_k_r30, sizeof(__pyx_k_r30), 0, 0, 1, 1},
- {&__pyx_n_s_r31, __pyx_k_r31, sizeof(__pyx_k_r31), 0, 0, 1, 1},
- {&__pyx_n_s_r32, __pyx_k_r32, sizeof(__pyx_k_r32), 0, 0, 1, 1},
- {&__pyx_n_s_r33, __pyx_k_r33, sizeof(__pyx_k_r33), 0, 0, 1, 1},
- {&__pyx_n_s_r34, __pyx_k_r34, sizeof(__pyx_k_r34), 0, 0, 1, 1},
- {&__pyx_n_s_r35, __pyx_k_r35, sizeof(__pyx_k_r35), 0, 0, 1, 1},
- {&__pyx_n_s_r36, __pyx_k_r36, sizeof(__pyx_k_r36), 0, 0, 1, 1},
- {&__pyx_n_s_r37, __pyx_k_r37, sizeof(__pyx_k_r37), 0, 0, 1, 1},
- {&__pyx_n_s_r38, __pyx_k_r38, sizeof(__pyx_k_r38), 0, 0, 1, 1},
- {&__pyx_n_s_r39, __pyx_k_r39, sizeof(__pyx_k_r39), 0, 0, 1, 1},
- {&__pyx_n_s_r4, __pyx_k_r4, sizeof(__pyx_k_r4), 0, 0, 1, 1},
- {&__pyx_n_s_r40, __pyx_k_r40, sizeof(__pyx_k_r40), 0, 0, 1, 1},
- {&__pyx_n_s_r41, __pyx_k_r41, sizeof(__pyx_k_r41), 0, 0, 1, 1},
- {&__pyx_n_s_r42, __pyx_k_r42, sizeof(__pyx_k_r42), 0, 0, 1, 1},
- {&__pyx_n_s_r43, __pyx_k_r43, sizeof(__pyx_k_r43), 0, 0, 1, 1},
- {&__pyx_n_s_r44, __pyx_k_r44, sizeof(__pyx_k_r44), 0, 0, 1, 1},
- {&__pyx_n_s_r45, __pyx_k_r45, sizeof(__pyx_k_r45), 0, 0, 1, 1},
- {&__pyx_n_s_r46, __pyx_k_r46, sizeof(__pyx_k_r46), 0, 0, 1, 1},
- {&__pyx_n_s_r47, __pyx_k_r47, sizeof(__pyx_k_r47), 0, 0, 1, 1},
- {&__pyx_n_s_r48, __pyx_k_r48, sizeof(__pyx_k_r48), 0, 0, 1, 1},
- {&__pyx_n_s_r49, __pyx_k_r49, sizeof(__pyx_k_r49), 0, 0, 1, 1},
- {&__pyx_n_s_r5, __pyx_k_r5, sizeof(__pyx_k_r5), 0, 0, 1, 1},
- {&__pyx_n_s_r50, __pyx_k_r50, sizeof(__pyx_k_r50), 0, 0, 1, 1},
- {&__pyx_n_s_r51, __pyx_k_r51, sizeof(__pyx_k_r51), 0, 0, 1, 1},
- {&__pyx_n_s_r52, __pyx_k_r52, sizeof(__pyx_k_r52), 0, 0, 1, 1},
- {&__pyx_n_s_r53, __pyx_k_r53, sizeof(__pyx_k_r53), 0, 0, 1, 1},
- {&__pyx_n_s_r54, __pyx_k_r54, sizeof(__pyx_k_r54), 0, 0, 1, 1},
- {&__pyx_n_s_r55, __pyx_k_r55, sizeof(__pyx_k_r55), 0, 0, 1, 1},
- {&__pyx_n_s_r56, __pyx_k_r56, sizeof(__pyx_k_r56), 0, 0, 1, 1},
- {&__pyx_n_s_r57, __pyx_k_r57, sizeof(__pyx_k_r57), 0, 0, 1, 1},
- {&__pyx_n_s_r58, __pyx_k_r58, sizeof(__pyx_k_r58), 0, 0, 1, 1},
- {&__pyx_n_s_r59, __pyx_k_r59, sizeof(__pyx_k_r59), 0, 0, 1, 1},
- {&__pyx_n_s_r6, __pyx_k_r6, sizeof(__pyx_k_r6), 0, 0, 1, 1},
- {&__pyx_n_s_r60, __pyx_k_r60, sizeof(__pyx_k_r60), 0, 0, 1, 1},
- {&__pyx_n_s_r61, __pyx_k_r61, sizeof(__pyx_k_r61), 0, 0, 1, 1},
- {&__pyx_n_s_r62, __pyx_k_r62, sizeof(__pyx_k_r62), 0, 0, 1, 1},
- {&__pyx_n_s_r63, __pyx_k_r63, sizeof(__pyx_k_r63), 0, 0, 1, 1},
- {&__pyx_n_s_r64, __pyx_k_r64, sizeof(__pyx_k_r64), 0, 0, 1, 1},
- {&__pyx_n_s_r65, __pyx_k_r65, sizeof(__pyx_k_r65), 0, 0, 1, 1},
- {&__pyx_n_s_r66, __pyx_k_r66, sizeof(__pyx_k_r66), 0, 0, 1, 1},
- {&__pyx_n_s_r67, __pyx_k_r67, sizeof(__pyx_k_r67), 0, 0, 1, 1},
- {&__pyx_n_s_r68, __pyx_k_r68, sizeof(__pyx_k_r68), 0, 0, 1, 1},
- {&__pyx_n_s_r69, __pyx_k_r69, sizeof(__pyx_k_r69), 0, 0, 1, 1},
- {&__pyx_n_s_r7, __pyx_k_r7, sizeof(__pyx_k_r7), 0, 0, 1, 1},
- {&__pyx_n_s_r70, __pyx_k_r70, sizeof(__pyx_k_r70), 0, 0, 1, 1},
- {&__pyx_n_s_r71, __pyx_k_r71, sizeof(__pyx_k_r71), 0, 0, 1, 1},
- {&__pyx_n_s_r72, __pyx_k_r72, sizeof(__pyx_k_r72), 0, 0, 1, 1},
- {&__pyx_n_s_r73, __pyx_k_r73, sizeof(__pyx_k_r73), 0, 0, 1, 1},
- {&__pyx_n_s_r74, __pyx_k_r74, sizeof(__pyx_k_r74), 0, 0, 1, 1},
- {&__pyx_n_s_r75, __pyx_k_r75, sizeof(__pyx_k_r75), 0, 0, 1, 1},
- {&__pyx_n_s_r76, __pyx_k_r76, sizeof(__pyx_k_r76), 0, 0, 1, 1},
- {&__pyx_n_s_r77, __pyx_k_r77, sizeof(__pyx_k_r77), 0, 0, 1, 1},
- {&__pyx_n_s_r78, __pyx_k_r78, sizeof(__pyx_k_r78), 0, 0, 1, 1},
- {&__pyx_n_s_r79, __pyx_k_r79, sizeof(__pyx_k_r79), 0, 0, 1, 1},
- {&__pyx_n_s_r8, __pyx_k_r8, sizeof(__pyx_k_r8), 0, 0, 1, 1},
- {&__pyx_n_s_r80, __pyx_k_r80, sizeof(__pyx_k_r80), 0, 0, 1, 1},
- {&__pyx_n_s_r81, __pyx_k_r81, sizeof(__pyx_k_r81), 0, 0, 1, 1},
- {&__pyx_n_s_r82, __pyx_k_r82, sizeof(__pyx_k_r82), 0, 0, 1, 1},
- {&__pyx_n_s_r83, __pyx_k_r83, sizeof(__pyx_k_r83), 0, 0, 1, 1},
- {&__pyx_n_s_r84, __pyx_k_r84, sizeof(__pyx_k_r84), 0, 0, 1, 1},
- {&__pyx_n_s_r85, __pyx_k_r85, sizeof(__pyx_k_r85), 0, 0, 1, 1},
- {&__pyx_n_s_r86, __pyx_k_r86, sizeof(__pyx_k_r86), 0, 0, 1, 1},
- {&__pyx_n_s_r87, __pyx_k_r87, sizeof(__pyx_k_r87), 0, 0, 1, 1},
- {&__pyx_n_s_r88, __pyx_k_r88, sizeof(__pyx_k_r88), 0, 0, 1, 1},
- {&__pyx_n_s_r89, __pyx_k_r89, sizeof(__pyx_k_r89), 0, 0, 1, 1},
- {&__pyx_n_s_r9, __pyx_k_r9, sizeof(__pyx_k_r9), 0, 0, 1, 1},
- {&__pyx_n_s_r90, __pyx_k_r90, sizeof(__pyx_k_r90), 0, 0, 1, 1},
- {&__pyx_n_s_r91, __pyx_k_r91, sizeof(__pyx_k_r91), 0, 0, 1, 1},
- {&__pyx_n_s_r92, __pyx_k_r92, sizeof(__pyx_k_r92), 0, 0, 1, 1},
- {&__pyx_n_s_r93, __pyx_k_r93, sizeof(__pyx_k_r93), 0, 0, 1, 1},
- {&__pyx_n_s_r94, __pyx_k_r94, sizeof(__pyx_k_r94), 0, 0, 1, 1},
- {&__pyx_n_s_r95, __pyx_k_r95, sizeof(__pyx_k_r95), 0, 0, 1, 1},
- {&__pyx_n_s_r96, __pyx_k_r96, sizeof(__pyx_k_r96), 0, 0, 1, 1},
- {&__pyx_n_s_r97, __pyx_k_r97, sizeof(__pyx_k_r97), 0, 0, 1, 1},
- {&__pyx_n_s_r98, __pyx_k_r98, sizeof(__pyx_k_r98), 0, 0, 1, 1},
- {&__pyx_n_s_r99, __pyx_k_r99, sizeof(__pyx_k_r99), 0, 0, 1, 1},
- {&__pyx_n_s_self, __pyx_k_self, sizeof(__pyx_k_self), 0, 0, 1, 1},
- {&__pyx_n_s_test, __pyx_k_test, sizeof(__pyx_k_test), 0, 0, 1, 1},
- {&__pyx_n_s_x, __pyx_k_x, sizeof(__pyx_k_x), 0, 0, 1, 1},
- {&__pyx_n_s_x0, __pyx_k_x0, sizeof(__pyx_k_x0), 0, 0, 1, 1},
- {&__pyx_n_s_x1, __pyx_k_x1, sizeof(__pyx_k_x1), 0, 0, 1, 1},
- {&__pyx_n_s_x2, __pyx_k_x2, sizeof(__pyx_k_x2), 0, 0, 1, 1},
- {&__pyx_n_s_x3, __pyx_k_x3, sizeof(__pyx_k_x3), 0, 0, 1, 1},
- {&__pyx_n_s_y, __pyx_k_y, sizeof(__pyx_k_y), 0, 0, 1, 1},
- {&__pyx_n_s_y0, __pyx_k_y0, sizeof(__pyx_k_y0), 0, 0, 1, 1},
- {&__pyx_n_s_y1, __pyx_k_y1, sizeof(__pyx_k_y1), 0, 0, 1, 1},
- {&__pyx_n_s_y2, __pyx_k_y2, sizeof(__pyx_k_y2), 0, 0, 1, 1},
- {&__pyx_n_s_y3, __pyx_k_y3, sizeof(__pyx_k_y3), 0, 0, 1, 1},
- {0, 0, 0, 0, 0, 0, 0}
-};
-static CYTHON_SMALL_CODE int __Pyx_InitCachedBuiltins(void) {
- __pyx_builtin_AttributeError = __Pyx_GetBuiltinName(__pyx_n_s_AttributeError); if (!__pyx_builtin_AttributeError) __PYX_ERR(0, 7, __pyx_L1_error)
- __pyx_builtin_ImportError = __Pyx_GetBuiltinName(__pyx_n_s_ImportError); if (!__pyx_builtin_ImportError) __PYX_ERR(0, 7, __pyx_L1_error)
- return 0;
- __pyx_L1_error:;
- return -1;
-}
-
-static CYTHON_SMALL_CODE int __Pyx_InitCachedConstants(void) {
- __Pyx_RefNannyDeclarations
- __Pyx_RefNannySetupContext("__Pyx_InitCachedConstants", 0);
-
- /* "fontTools/pens/momentsPen.py":18
- *
- * class MomentsPen(BasePen):
- * def __init__(self, glyphset=None): # <<<<<<<<<<<<<<
- * BasePen.__init__(self, glyphset)
- *
- */
- __pyx_tuple_ = PyTuple_Pack(2, __pyx_n_s_self, __pyx_n_s_glyphset); if (unlikely(!__pyx_tuple_)) __PYX_ERR(0, 18, __pyx_L1_error)
- __Pyx_GOTREF(__pyx_tuple_);
- __Pyx_GIVEREF(__pyx_tuple_);
- __pyx_codeobj__2 = (PyObject*)__Pyx_PyCode_New(2, 0, 2, 0, CO_OPTIMIZED|CO_NEWLOCALS, __pyx_empty_bytes, __pyx_empty_tuple, __pyx_empty_tuple, __pyx_tuple_, __pyx_empty_tuple, __pyx_empty_tuple, __pyx_kp_s_Lib_fontTools_pens_momentsPen_py, __pyx_n_s_init, 18, __pyx_empty_bytes); if (unlikely(!__pyx_codeobj__2)) __PYX_ERR(0, 18, __pyx_L1_error)
- __pyx_tuple__3 = PyTuple_Pack(1, ((PyObject *)Py_None)); if (unlikely(!__pyx_tuple__3)) __PYX_ERR(0, 18, __pyx_L1_error)
- __Pyx_GOTREF(__pyx_tuple__3);
- __Pyx_GIVEREF(__pyx_tuple__3);
-
- /* "fontTools/pens/momentsPen.py":28
- * self.momentYY = 0
- *
- * def _moveTo(self, p0): # <<<<<<<<<<<<<<
- * self.__startPoint = p0
- *
- */
- __pyx_tuple__4 = PyTuple_Pack(2, __pyx_n_s_self, __pyx_n_s_p0); if (unlikely(!__pyx_tuple__4)) __PYX_ERR(0, 28, __pyx_L1_error)
- __Pyx_GOTREF(__pyx_tuple__4);
- __Pyx_GIVEREF(__pyx_tuple__4);
- __pyx_codeobj__5 = (PyObject*)__Pyx_PyCode_New(2, 0, 2, 0, CO_OPTIMIZED|CO_NEWLOCALS, __pyx_empty_bytes, __pyx_empty_tuple, __pyx_empty_tuple, __pyx_tuple__4, __pyx_empty_tuple, __pyx_empty_tuple, __pyx_kp_s_Lib_fontTools_pens_momentsPen_py, __pyx_n_s_moveTo, 28, __pyx_empty_bytes); if (unlikely(!__pyx_codeobj__5)) __PYX_ERR(0, 28, __pyx_L1_error)
-
- /* "fontTools/pens/momentsPen.py":31
- * self.__startPoint = p0
- *
- * def _closePath(self): # <<<<<<<<<<<<<<
- * p0 = self._getCurrentPoint()
- * if p0 != self.__startPoint:
- */
- __pyx_tuple__6 = PyTuple_Pack(2, __pyx_n_s_self, __pyx_n_s_p0); if (unlikely(!__pyx_tuple__6)) __PYX_ERR(0, 31, __pyx_L1_error)
- __Pyx_GOTREF(__pyx_tuple__6);
- __Pyx_GIVEREF(__pyx_tuple__6);
- __pyx_codeobj__7 = (PyObject*)__Pyx_PyCode_New(1, 0, 2, 0, CO_OPTIMIZED|CO_NEWLOCALS, __pyx_empty_bytes, __pyx_empty_tuple, __pyx_empty_tuple, __pyx_tuple__6, __pyx_empty_tuple, __pyx_empty_tuple, __pyx_kp_s_Lib_fontTools_pens_momentsPen_py, __pyx_n_s_closePath, 31, __pyx_empty_bytes); if (unlikely(!__pyx_codeobj__7)) __PYX_ERR(0, 31, __pyx_L1_error)
-
- /* "fontTools/pens/momentsPen.py":36
- * self._lineTo(self.__startPoint)
- *
- * def _endPath(self): # <<<<<<<<<<<<<<
- * p0 = self._getCurrentPoint()
- * if p0 != self.__startPoint:
- */
- __pyx_tuple__8 = PyTuple_Pack(2, __pyx_n_s_self, __pyx_n_s_p0); if (unlikely(!__pyx_tuple__8)) __PYX_ERR(0, 36, __pyx_L1_error)
- __Pyx_GOTREF(__pyx_tuple__8);
- __Pyx_GIVEREF(__pyx_tuple__8);
- __pyx_codeobj__9 = (PyObject*)__Pyx_PyCode_New(1, 0, 2, 0, CO_OPTIMIZED|CO_NEWLOCALS, __pyx_empty_bytes, __pyx_empty_tuple, __pyx_empty_tuple, __pyx_tuple__8, __pyx_empty_tuple, __pyx_empty_tuple, __pyx_kp_s_Lib_fontTools_pens_momentsPen_py, __pyx_n_s_endPath, 36, __pyx_empty_bytes); if (unlikely(!__pyx_codeobj__9)) __PYX_ERR(0, 36, __pyx_L1_error)
-
- /* "fontTools/pens/momentsPen.py":57
- * @cython.locals(x0=cython.double, y0=cython.double)
- * @cython.locals(x1=cython.double, y1=cython.double)
- * def _lineTo(self, p1): # <<<<<<<<<<<<<<
- * x0, y0 = self._getCurrentPoint()
- * x1, y1 = p1
- */
- __pyx_tuple__10 = PyTuple_Pack(19, __pyx_n_s_self, __pyx_n_s_p1, __pyx_n_s_x1, __pyx_n_s_y1, __pyx_n_s_x0, __pyx_n_s_y0, __pyx_n_s_r12, __pyx_n_s_r11, __pyx_n_s_r10, __pyx_n_s_r9, __pyx_n_s_r8, __pyx_n_s_r7, __pyx_n_s_r6, __pyx_n_s_r5, __pyx_n_s_r4, __pyx_n_s_r3, __pyx_n_s_r2, __pyx_n_s_r1, __pyx_n_s_r0); if (unlikely(!__pyx_tuple__10)) __PYX_ERR(0, 57, __pyx_L1_error)
- __Pyx_GOTREF(__pyx_tuple__10);
- __Pyx_GIVEREF(__pyx_tuple__10);
- __pyx_codeobj__11 = (PyObject*)__Pyx_PyCode_New(2, 0, 19, 0, CO_OPTIMIZED|CO_NEWLOCALS, __pyx_empty_bytes, __pyx_empty_tuple, __pyx_empty_tuple, __pyx_tuple__10, __pyx_empty_tuple, __pyx_empty_tuple, __pyx_kp_s_Lib_fontTools_pens_momentsPen_py, __pyx_n_s_lineTo, 57, __pyx_empty_bytes); if (unlikely(!__pyx_codeobj__11)) __PYX_ERR(0, 57, __pyx_L1_error)
-
- /* "fontTools/pens/momentsPen.py":159
- * @cython.locals(x1=cython.double, y1=cython.double)
- * @cython.locals(x2=cython.double, y2=cython.double)
- * def _qCurveToOne(self, p1, p2): # <<<<<<<<<<<<<<
- * x0, y0 = self._getCurrentPoint()
- * x1, y1 = p1
- */
- __pyx_tuple__12 = PyTuple_Pack(63, __pyx_n_s_self, __pyx_n_s_p1, __pyx_n_s_p2, __pyx_n_s_x2, __pyx_n_s_y2, __pyx_n_s_x1, __pyx_n_s_y1, __pyx_n_s_x0, __pyx_n_s_y0, __pyx_n_s_r53, __pyx_n_s_r52, __pyx_n_s_r51, __pyx_n_s_r50, __pyx_n_s_r49, __pyx_n_s_r48, __pyx_n_s_r47, __pyx_n_s_r46, __pyx_n_s_r45, __pyx_n_s_r44, __pyx_n_s_r43, __pyx_n_s_r42, __pyx_n_s_r41, __pyx_n_s_r40, __pyx_n_s_r39, __pyx_n_s_r38, __pyx_n_s_r37, __pyx_n_s_r36, __pyx_n_s_r35, __pyx_n_s_r34, __pyx_n_s_r33, __pyx_n_s_r32, __pyx_n_s_r31, __pyx_n_s_r30, __pyx_n_s_r29, __pyx_n_s_r28, __pyx_n_s_r27, __pyx_n_s_r26, __pyx_n_s_r25, __pyx_n_s_r24, __pyx_n_s_r23, __pyx_n_s_r22, __pyx_n_s_r21, __pyx_n_s_r20, __pyx_n_s_r19, __pyx_n_s_r18, __pyx_n_s_r17, __pyx_n_s_r16, __pyx_n_s_r15, __pyx_n_s_r14, __pyx_n_s_r13, __pyx_n_s_r12, __pyx_n_s_r11, __pyx_n_s_r10, __pyx_n_s_r9, __pyx_n_s_r8, __pyx_n_s_r7, __pyx_n_s_r6, __pyx_n_s_r5, __pyx_n_s_r4, __pyx_n_s_r3, __pyx_n_s_r2, __pyx_n_s_r1, __pyx_n_s_r0); if (unlikely(!__pyx_tuple__12)) __PYX_ERR(0, 159, __pyx_L1_error)
- __Pyx_GOTREF(__pyx_tuple__12);
- __Pyx_GIVEREF(__pyx_tuple__12);
- __pyx_codeobj__13 = (PyObject*)__Pyx_PyCode_New(3, 0, 63, 0, CO_OPTIMIZED|CO_NEWLOCALS, __pyx_empty_bytes, __pyx_empty_tuple, __pyx_empty_tuple, __pyx_tuple__12, __pyx_empty_tuple, __pyx_empty_tuple, __pyx_kp_s_Lib_fontTools_pens_momentsPen_py, __pyx_n_s_qCurveToOne, 159, __pyx_empty_bytes); if (unlikely(!__pyx_codeobj__13)) __PYX_ERR(0, 159, __pyx_L1_error)
-
- /* "fontTools/pens/momentsPen.py":450
- * @cython.locals(x2=cython.double, y2=cython.double)
- * @cython.locals(x3=cython.double, y3=cython.double)
- * def _curveToOne(self, p1, p2, p3): # <<<<<<<<<<<<<<
- * x0, y0 = self._getCurrentPoint()
- * x1, y1 = p1
- */
- __pyx_tuple__14 = PyTuple_Pack(145, __pyx_n_s_self, __pyx_n_s_p1, __pyx_n_s_p2, __pyx_n_s_p3, __pyx_n_s_x3, __pyx_n_s_y3, __pyx_n_s_x2, __pyx_n_s_y2, __pyx_n_s_x1, __pyx_n_s_y1, __pyx_n_s_x0, __pyx_n_s_y0, __pyx_n_s_r132, __pyx_n_s_r131, __pyx_n_s_r130, __pyx_n_s_r129, __pyx_n_s_r128, __pyx_n_s_r127, __pyx_n_s_r126, __pyx_n_s_r125, __pyx_n_s_r124, __pyx_n_s_r123, __pyx_n_s_r122, __pyx_n_s_r121, __pyx_n_s_r120, __pyx_n_s_r119, __pyx_n_s_r118, __pyx_n_s_r117, __pyx_n_s_r116, __pyx_n_s_r115, __pyx_n_s_r114, __pyx_n_s_r113, __pyx_n_s_r112, __pyx_n_s_r111, __pyx_n_s_r110, __pyx_n_s_r109, __pyx_n_s_r108, __pyx_n_s_r107, __pyx_n_s_r106, __pyx_n_s_r105, __pyx_n_s_r104, __pyx_n_s_r103, __pyx_n_s_r102, __pyx_n_s_r101, __pyx_n_s_r100, __pyx_n_s_r99, __pyx_n_s_r98, __pyx_n_s_r97, __pyx_n_s_r96, __pyx_n_s_r95, __pyx_n_s_r94, __pyx_n_s_r93, __pyx_n_s_r92, __pyx_n_s_r91, __pyx_n_s_r90, __pyx_n_s_r89, __pyx_n_s_r88, __pyx_n_s_r87, __pyx_n_s_r86, __pyx_n_s_r85, __pyx_n_s_r84, __pyx_n_s_r83, __pyx_n_s_r82, __pyx_n_s_r81, __pyx_n_s_r80, __pyx_n_s_r79, __pyx_n_s_r78, __pyx_n_s_r77, __pyx_n_s_r76, __pyx_n_s_r75, __pyx_n_s_r74, __pyx_n_s_r73, __pyx_n_s_r72, __pyx_n_s_r71, __pyx_n_s_r70, __pyx_n_s_r69, __pyx_n_s_r68, __pyx_n_s_r67, __pyx_n_s_r66, __pyx_n_s_r65, __pyx_n_s_r64, __pyx_n_s_r63, __pyx_n_s_r62, __pyx_n_s_r61, __pyx_n_s_r60, __pyx_n_s_r59, __pyx_n_s_r58, __pyx_n_s_r57, __pyx_n_s_r56, __pyx_n_s_r55, __pyx_n_s_r54, __pyx_n_s_r53, __pyx_n_s_r52, __pyx_n_s_r51, __pyx_n_s_r50, __pyx_n_s_r49, __pyx_n_s_r48, __pyx_n_s_r47, __pyx_n_s_r46, __pyx_n_s_r45, __pyx_n_s_r44, __pyx_n_s_r43, __pyx_n_s_r42, __pyx_n_s_r41, __pyx_n_s_r40, __pyx_n_s_r39, __pyx_n_s_r38, __pyx_n_s_r37, __pyx_n_s_r36, __pyx_n_s_r35, __pyx_n_s_r34, __pyx_n_s_r33, __pyx_n_s_r32, __pyx_n_s_r31, __pyx_n_s_r30, __pyx_n_s_r29, __pyx_n_s_r28, __pyx_n_s_r27, __pyx_n_s_r26, __pyx_n_s_r25, __pyx_n_s_r24, __pyx_n_s_r23, __pyx_n_s_r22, __pyx_n_s_r21, __pyx_n_s_r20, __pyx_n_s_r19, __pyx_n_s_r18, __pyx_n_s_r17, __pyx_n_s_r16, __pyx_n_s_r15, __pyx_n_s_r14, __pyx_n_s_r13, __pyx_n_s_r12, __pyx_n_s_r11, __pyx_n_s_r10, __pyx_n_s_r9, __pyx_n_s_r8, __pyx_n_s_r7, __pyx_n_s_r6, __pyx_n_s_r5, __pyx_n_s_r4, __pyx_n_s_r3, __pyx_n_s_r2, __pyx_n_s_r1, __pyx_n_s_r0); if (unlikely(!__pyx_tuple__14)) __PYX_ERR(0, 450, __pyx_L1_error)
- __Pyx_GOTREF(__pyx_tuple__14);
- __Pyx_GIVEREF(__pyx_tuple__14);
- __pyx_codeobj__15 = (PyObject*)__Pyx_PyCode_New(4, 0, 145, 0, CO_OPTIMIZED|CO_NEWLOCALS, __pyx_empty_bytes, __pyx_empty_tuple, __pyx_empty_tuple, __pyx_tuple__14, __pyx_empty_tuple, __pyx_empty_tuple, __pyx_kp_s_Lib_fontTools_pens_momentsPen_py, __pyx_n_s_curveToOne, 450, __pyx_empty_bytes); if (unlikely(!__pyx_codeobj__15)) __PYX_ERR(0, 450, __pyx_L1_error)
-
- /* "fontTools/pens/momentsPen.py":875
- * "MomentsPen",
- * [
- * ("area", 1), # <<<<<<<<<<<<<<
- * ("momentX", x),
- * ("momentY", y),
- */
- __pyx_tuple__16 = PyTuple_Pack(2, __pyx_n_u_area, __pyx_int_1); if (unlikely(!__pyx_tuple__16)) __PYX_ERR(0, 875, __pyx_L1_error)
- __Pyx_GOTREF(__pyx_tuple__16);
- __Pyx_GIVEREF(__pyx_tuple__16);
- __Pyx_RefNannyFinishContext();
- return 0;
- __pyx_L1_error:;
- __Pyx_RefNannyFinishContext();
- return -1;
-}
-
-static CYTHON_SMALL_CODE int __Pyx_InitGlobals(void) {
- if (__Pyx_InitStrings(__pyx_string_tab) < 0) __PYX_ERR(0, 1, __pyx_L1_error)
- __pyx_int_0 = PyInt_FromLong(0); if (unlikely(!__pyx_int_0)) __PYX_ERR(0, 1, __pyx_L1_error)
- __pyx_int_1 = PyInt_FromLong(1); if (unlikely(!__pyx_int_1)) __PYX_ERR(0, 1, __pyx_L1_error)
- __pyx_int_2 = PyInt_FromLong(2); if (unlikely(!__pyx_int_2)) __PYX_ERR(0, 1, __pyx_L1_error)
- return 0;
- __pyx_L1_error:;
- return -1;
-}
-
-static CYTHON_SMALL_CODE int __Pyx_modinit_global_init_code(void); /*proto*/
-static CYTHON_SMALL_CODE int __Pyx_modinit_variable_export_code(void); /*proto*/
-static CYTHON_SMALL_CODE int __Pyx_modinit_function_export_code(void); /*proto*/
-static CYTHON_SMALL_CODE int __Pyx_modinit_type_init_code(void); /*proto*/
-static CYTHON_SMALL_CODE int __Pyx_modinit_type_import_code(void); /*proto*/
-static CYTHON_SMALL_CODE int __Pyx_modinit_variable_import_code(void); /*proto*/
-static CYTHON_SMALL_CODE int __Pyx_modinit_function_import_code(void); /*proto*/
-
-static int __Pyx_modinit_global_init_code(void) {
- __Pyx_RefNannyDeclarations
- __Pyx_RefNannySetupContext("__Pyx_modinit_global_init_code", 0);
- /*--- Global init code ---*/
- __Pyx_RefNannyFinishContext();
- return 0;
-}
-
-static int __Pyx_modinit_variable_export_code(void) {
- __Pyx_RefNannyDeclarations
- __Pyx_RefNannySetupContext("__Pyx_modinit_variable_export_code", 0);
- /*--- Variable export code ---*/
- __Pyx_RefNannyFinishContext();
- return 0;
-}
-
-static int __Pyx_modinit_function_export_code(void) {
- __Pyx_RefNannyDeclarations
- __Pyx_RefNannySetupContext("__Pyx_modinit_function_export_code", 0);
- /*--- Function export code ---*/
- __Pyx_RefNannyFinishContext();
- return 0;
-}
-
-static int __Pyx_modinit_type_init_code(void) {
- __Pyx_RefNannyDeclarations
- __Pyx_RefNannySetupContext("__Pyx_modinit_type_init_code", 0);
- /*--- Type init code ---*/
- __Pyx_RefNannyFinishContext();
- return 0;
-}
-
-static int __Pyx_modinit_type_import_code(void) {
- __Pyx_RefNannyDeclarations
- __Pyx_RefNannySetupContext("__Pyx_modinit_type_import_code", 0);
- /*--- Type import code ---*/
- __Pyx_RefNannyFinishContext();
- return 0;
-}
-
-static int __Pyx_modinit_variable_import_code(void) {
- __Pyx_RefNannyDeclarations
- __Pyx_RefNannySetupContext("__Pyx_modinit_variable_import_code", 0);
- /*--- Variable import code ---*/
- __Pyx_RefNannyFinishContext();
- return 0;
-}
-
-static int __Pyx_modinit_function_import_code(void) {
- __Pyx_RefNannyDeclarations
- __Pyx_RefNannySetupContext("__Pyx_modinit_function_import_code", 0);
- /*--- Function import code ---*/
- __Pyx_RefNannyFinishContext();
- return 0;
-}
-
-
-#ifndef CYTHON_NO_PYINIT_EXPORT
-#define __Pyx_PyMODINIT_FUNC PyMODINIT_FUNC
-#elif PY_MAJOR_VERSION < 3
-#ifdef __cplusplus
-#define __Pyx_PyMODINIT_FUNC extern "C" void
-#else
-#define __Pyx_PyMODINIT_FUNC void
-#endif
-#else
-#ifdef __cplusplus
-#define __Pyx_PyMODINIT_FUNC extern "C" PyObject *
-#else
-#define __Pyx_PyMODINIT_FUNC PyObject *
-#endif
-#endif
-
-
-#if PY_MAJOR_VERSION < 3
-__Pyx_PyMODINIT_FUNC initmomentsPen(void) CYTHON_SMALL_CODE; /*proto*/
-__Pyx_PyMODINIT_FUNC initmomentsPen(void)
-#else
-__Pyx_PyMODINIT_FUNC PyInit_momentsPen(void) CYTHON_SMALL_CODE; /*proto*/
-__Pyx_PyMODINIT_FUNC PyInit_momentsPen(void)
-#if CYTHON_PEP489_MULTI_PHASE_INIT
-{
- return PyModuleDef_Init(&__pyx_moduledef);
-}
-static CYTHON_SMALL_CODE int __Pyx_check_single_interpreter(void) {
- #if PY_VERSION_HEX >= 0x030700A1
- static PY_INT64_T main_interpreter_id = -1;
- PY_INT64_T current_id = PyInterpreterState_GetID(PyThreadState_Get()->interp);
- if (main_interpreter_id == -1) {
- main_interpreter_id = current_id;
- return (unlikely(current_id == -1)) ? -1 : 0;
- } else if (unlikely(main_interpreter_id != current_id))
- #else
- static PyInterpreterState *main_interpreter = NULL;
- PyInterpreterState *current_interpreter = PyThreadState_Get()->interp;
- if (!main_interpreter) {
- main_interpreter = current_interpreter;
- } else if (unlikely(main_interpreter != current_interpreter))
- #endif
- {
- PyErr_SetString(
- PyExc_ImportError,
- "Interpreter change detected - this module can only be loaded into one interpreter per process.");
- return -1;
- }
- return 0;
-}
-static CYTHON_SMALL_CODE int __Pyx_copy_spec_to_module(PyObject *spec, PyObject *moddict, const char* from_name, const char* to_name, int allow_none) {
- PyObject *value = PyObject_GetAttrString(spec, from_name);
- int result = 0;
- if (likely(value)) {
- if (allow_none || value != Py_None) {
- result = PyDict_SetItemString(moddict, to_name, value);
- }
- Py_DECREF(value);
- } else if (PyErr_ExceptionMatches(PyExc_AttributeError)) {
- PyErr_Clear();
- } else {
- result = -1;
- }
- return result;
-}
-static CYTHON_SMALL_CODE PyObject* __pyx_pymod_create(PyObject *spec, CYTHON_UNUSED PyModuleDef *def) {
- PyObject *module = NULL, *moddict, *modname;
- if (__Pyx_check_single_interpreter())
- return NULL;
- if (__pyx_m)
- return __Pyx_NewRef(__pyx_m);
- modname = PyObject_GetAttrString(spec, "name");
- if (unlikely(!modname)) goto bad;
- module = PyModule_NewObject(modname);
- Py_DECREF(modname);
- if (unlikely(!module)) goto bad;
- moddict = PyModule_GetDict(module);
- if (unlikely(!moddict)) goto bad;
- if (unlikely(__Pyx_copy_spec_to_module(spec, moddict, "loader", "__loader__", 1) < 0)) goto bad;
- if (unlikely(__Pyx_copy_spec_to_module(spec, moddict, "origin", "__file__", 1) < 0)) goto bad;
- if (unlikely(__Pyx_copy_spec_to_module(spec, moddict, "parent", "__package__", 1) < 0)) goto bad;
- if (unlikely(__Pyx_copy_spec_to_module(spec, moddict, "submodule_search_locations", "__path__", 0) < 0)) goto bad;
- return module;
-bad:
- Py_XDECREF(module);
- return NULL;
-}
-
-
-static CYTHON_SMALL_CODE int __pyx_pymod_exec_momentsPen(PyObject *__pyx_pyinit_module)
-#endif
-#endif
-{
- PyObject *__pyx_t_1 = NULL;
- PyObject *__pyx_t_2 = NULL;
- PyObject *__pyx_t_3 = NULL;
- PyObject *__pyx_t_4 = NULL;
- PyObject *__pyx_t_5 = NULL;
- int __pyx_t_6;
- PyObject *__pyx_t_7 = NULL;
- PyObject *__pyx_t_8 = NULL;
- PyObject *__pyx_t_9 = NULL;
- int __pyx_t_10;
- PyObject *__pyx_t_11 = NULL;
- PyObject *__pyx_t_12 = NULL;
- int __pyx_lineno = 0;
- const char *__pyx_filename = NULL;
- int __pyx_clineno = 0;
- __Pyx_RefNannyDeclarations
- #if CYTHON_PEP489_MULTI_PHASE_INIT
- if (__pyx_m) {
- if (__pyx_m == __pyx_pyinit_module) return 0;
- PyErr_SetString(PyExc_RuntimeError, "Module 'momentsPen' has already been imported. Re-initialisation is not supported.");
- return -1;
- }
- #elif PY_MAJOR_VERSION >= 3
- if (__pyx_m) return __Pyx_NewRef(__pyx_m);
- #endif
- #if CYTHON_REFNANNY
-__Pyx_RefNanny = __Pyx_RefNannyImportAPI("refnanny");
-if (!__Pyx_RefNanny) {
- PyErr_Clear();
- __Pyx_RefNanny = __Pyx_RefNannyImportAPI("Cython.Runtime.refnanny");
- if (!__Pyx_RefNanny)
- Py_FatalError("failed to import 'refnanny' module");
-}
-#endif
- __Pyx_RefNannySetupContext("__Pyx_PyMODINIT_FUNC PyInit_momentsPen(void)", 0);
- if (__Pyx_check_binary_version() < 0) __PYX_ERR(0, 1, __pyx_L1_error)
- #ifdef __Pxy_PyFrame_Initialize_Offsets
- __Pxy_PyFrame_Initialize_Offsets();
- #endif
- __pyx_empty_tuple = PyTuple_New(0); if (unlikely(!__pyx_empty_tuple)) __PYX_ERR(0, 1, __pyx_L1_error)
- __pyx_empty_bytes = PyBytes_FromStringAndSize("", 0); if (unlikely(!__pyx_empty_bytes)) __PYX_ERR(0, 1, __pyx_L1_error)
- __pyx_empty_unicode = PyUnicode_FromStringAndSize("", 0); if (unlikely(!__pyx_empty_unicode)) __PYX_ERR(0, 1, __pyx_L1_error)
- #ifdef __Pyx_CyFunction_USED
- if (__pyx_CyFunction_init() < 0) __PYX_ERR(0, 1, __pyx_L1_error)
- #endif
- #ifdef __Pyx_FusedFunction_USED
- if (__pyx_FusedFunction_init() < 0) __PYX_ERR(0, 1, __pyx_L1_error)
- #endif
- #ifdef __Pyx_Coroutine_USED
- if (__pyx_Coroutine_init() < 0) __PYX_ERR(0, 1, __pyx_L1_error)
- #endif
- #ifdef __Pyx_Generator_USED
- if (__pyx_Generator_init() < 0) __PYX_ERR(0, 1, __pyx_L1_error)
- #endif
- #ifdef __Pyx_AsyncGen_USED
- if (__pyx_AsyncGen_init() < 0) __PYX_ERR(0, 1, __pyx_L1_error)
- #endif
- #ifdef __Pyx_StopAsyncIteration_USED
- if (__pyx_StopAsyncIteration_init() < 0) __PYX_ERR(0, 1, __pyx_L1_error)
- #endif
- /*--- Library function declarations ---*/
- /*--- Threads initialization code ---*/
- #if defined(WITH_THREAD) && PY_VERSION_HEX < 0x030700F0 && defined(__PYX_FORCE_INIT_THREADS) && __PYX_FORCE_INIT_THREADS
- PyEval_InitThreads();
- #endif
- /*--- Module creation code ---*/
- #if CYTHON_PEP489_MULTI_PHASE_INIT
- __pyx_m = __pyx_pyinit_module;
- Py_INCREF(__pyx_m);
- #else
- #if PY_MAJOR_VERSION < 3
- __pyx_m = Py_InitModule4("momentsPen", __pyx_methods, 0, 0, PYTHON_API_VERSION); Py_XINCREF(__pyx_m);
- #else
- __pyx_m = PyModule_Create(&__pyx_moduledef);
- #endif
- if (unlikely(!__pyx_m)) __PYX_ERR(0, 1, __pyx_L1_error)
- #endif
- __pyx_d = PyModule_GetDict(__pyx_m); if (unlikely(!__pyx_d)) __PYX_ERR(0, 1, __pyx_L1_error)
- Py_INCREF(__pyx_d);
- __pyx_b = PyImport_AddModule(__Pyx_BUILTIN_MODULE_NAME); if (unlikely(!__pyx_b)) __PYX_ERR(0, 1, __pyx_L1_error)
- Py_INCREF(__pyx_b);
- __pyx_cython_runtime = PyImport_AddModule((char *) "cython_runtime"); if (unlikely(!__pyx_cython_runtime)) __PYX_ERR(0, 1, __pyx_L1_error)
- Py_INCREF(__pyx_cython_runtime);
- if (PyObject_SetAttrString(__pyx_m, "__builtins__", __pyx_b) < 0) __PYX_ERR(0, 1, __pyx_L1_error)
- /*--- Initialize various global constants etc. ---*/
- if (__Pyx_InitGlobals() < 0) __PYX_ERR(0, 1, __pyx_L1_error)
- #if PY_MAJOR_VERSION < 3 && (__PYX_DEFAULT_STRING_ENCODING_IS_ASCII || __PYX_DEFAULT_STRING_ENCODING_IS_DEFAULT)
- if (__Pyx_init_sys_getdefaultencoding_params() < 0) __PYX_ERR(0, 1, __pyx_L1_error)
- #endif
- if (__pyx_module_is_main_fontTools__pens__momentsPen) {
- if (PyObject_SetAttr(__pyx_m, __pyx_n_s_name, __pyx_n_s_main) < 0) __PYX_ERR(0, 1, __pyx_L1_error)
- }
- #if PY_MAJOR_VERSION >= 3
- {
- PyObject *modules = PyImport_GetModuleDict(); if (unlikely(!modules)) __PYX_ERR(0, 1, __pyx_L1_error)
- if (!PyDict_GetItemString(modules, "fontTools.pens.momentsPen")) {
- if (unlikely(PyDict_SetItemString(modules, "fontTools.pens.momentsPen", __pyx_m) < 0)) __PYX_ERR(0, 1, __pyx_L1_error)
- }
- }
- #endif
- /*--- Builtin init code ---*/
- if (__Pyx_InitCachedBuiltins() < 0) __PYX_ERR(0, 1, __pyx_L1_error)
- /*--- Constants init code ---*/
- if (__Pyx_InitCachedConstants() < 0) __PYX_ERR(0, 1, __pyx_L1_error)
- /*--- Global type/function init code ---*/
- (void)__Pyx_modinit_global_init_code();
- (void)__Pyx_modinit_variable_export_code();
- (void)__Pyx_modinit_function_export_code();
- (void)__Pyx_modinit_type_init_code();
- (void)__Pyx_modinit_type_import_code();
- (void)__Pyx_modinit_variable_import_code();
- (void)__Pyx_modinit_function_import_code();
- /*--- Execution code ---*/
- #if defined(__Pyx_Generator_USED) || defined(__Pyx_Coroutine_USED)
- if (__Pyx_patch_abc() < 0) __PYX_ERR(0, 1, __pyx_L1_error)
- #endif
-
- /* "fontTools/pens/momentsPen.py":1
- * from fontTools.pens.basePen import BasePen, OpenContourError # <<<<<<<<<<<<<<
- *
- * try:
- */
- __pyx_t_1 = PyList_New(2); if (unlikely(!__pyx_t_1)) __PYX_ERR(0, 1, __pyx_L1_error)
- __Pyx_GOTREF(__pyx_t_1);
- __Pyx_INCREF(__pyx_n_s_BasePen);
- __Pyx_GIVEREF(__pyx_n_s_BasePen);
- PyList_SET_ITEM(__pyx_t_1, 0, __pyx_n_s_BasePen);
- __Pyx_INCREF(__pyx_n_s_OpenContourError);
- __Pyx_GIVEREF(__pyx_n_s_OpenContourError);
- PyList_SET_ITEM(__pyx_t_1, 1, __pyx_n_s_OpenContourError);
- __pyx_t_2 = __Pyx_Import(__pyx_n_s_fontTools_pens_basePen, __pyx_t_1, 0); if (unlikely(!__pyx_t_2)) __PYX_ERR(0, 1, __pyx_L1_error)
- __Pyx_GOTREF(__pyx_t_2);
- __Pyx_DECREF(__pyx_t_1); __pyx_t_1 = 0;
- __pyx_t_1 = __Pyx_ImportFrom(__pyx_t_2, __pyx_n_s_BasePen); if (unlikely(!__pyx_t_1)) __PYX_ERR(0, 1, __pyx_L1_error)
- __Pyx_GOTREF(__pyx_t_1);
- if (PyDict_SetItem(__pyx_d, __pyx_n_s_BasePen, __pyx_t_1) < 0) __PYX_ERR(0, 1, __pyx_L1_error)
- __Pyx_DECREF(__pyx_t_1); __pyx_t_1 = 0;
- __pyx_t_1 = __Pyx_ImportFrom(__pyx_t_2, __pyx_n_s_OpenContourError); if (unlikely(!__pyx_t_1)) __PYX_ERR(0, 1, __pyx_L1_error)
- __Pyx_GOTREF(__pyx_t_1);
- if (PyDict_SetItem(__pyx_d, __pyx_n_s_OpenContourError, __pyx_t_1) < 0) __PYX_ERR(0, 1, __pyx_L1_error)
- __Pyx_DECREF(__pyx_t_1); __pyx_t_1 = 0;
- __Pyx_DECREF(__pyx_t_2); __pyx_t_2 = 0;
-
- /* "fontTools/pens/momentsPen.py":3
- * from fontTools.pens.basePen import BasePen, OpenContourError
- *
- * try: # <<<<<<<<<<<<<<
- * import cython
- *
- */
- {
- __Pyx_PyThreadState_declare
- __Pyx_PyThreadState_assign
- __Pyx_ExceptionSave(&__pyx_t_3, &__pyx_t_4, &__pyx_t_5);
- __Pyx_XGOTREF(__pyx_t_3);
- __Pyx_XGOTREF(__pyx_t_4);
- __Pyx_XGOTREF(__pyx_t_5);
- /*try:*/ {
-
- /* "fontTools/pens/momentsPen.py":6
- * import cython
- *
- * COMPILED = cython.compiled # <<<<<<<<<<<<<<
- * except (AttributeError, ImportError):
- * # if cython not installed, use mock module with no-op decorators and types
- */
- if (PyDict_SetItem(__pyx_d, __pyx_n_s_COMPILED, Py_True) < 0) __PYX_ERR(0, 6, __pyx_L2_error)
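-
- /* Note: in the compiled extension Cython constant-folds
- * "COMPILED = cython.compiled" to Py_True, so the "import cython" from the
- * original try-block is elided entirely from the generated code. */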
-
- /* "fontTools/pens/momentsPen.py":3
- * from fontTools.pens.basePen import BasePen, OpenContourError
- *
- * try: # <<<<<<<<<<<<<<
- * import cython
- *
- */
- }
- __Pyx_XDECREF(__pyx_t_3); __pyx_t_3 = 0;
- __Pyx_XDECREF(__pyx_t_4); __pyx_t_4 = 0;
- __Pyx_XDECREF(__pyx_t_5); __pyx_t_5 = 0;
- goto __pyx_L7_try_end;
- __pyx_L2_error:;
- __Pyx_XDECREF(__pyx_t_1); __pyx_t_1 = 0;
- __Pyx_XDECREF(__pyx_t_2); __pyx_t_2 = 0;
-
- /* "fontTools/pens/momentsPen.py":7
- *
- * COMPILED = cython.compiled
- * except (AttributeError, ImportError): # <<<<<<<<<<<<<<
- * # if cython not installed, use mock module with no-op decorators and types
- * from fontTools.misc import cython
- */
- __pyx_t_6 = __Pyx_PyErr_ExceptionMatches(__pyx_builtin_AttributeError) || __Pyx_PyErr_ExceptionMatches(__pyx_builtin_ImportError);
- if (__pyx_t_6) {
- __Pyx_AddTraceback("fontTools.pens.momentsPen", __pyx_clineno, __pyx_lineno, __pyx_filename);
- if (__Pyx_GetException(&__pyx_t_2, &__pyx_t_1, &__pyx_t_7) < 0) __PYX_ERR(0, 7, __pyx_L4_except_error)
- __Pyx_GOTREF(__pyx_t_2);
- __Pyx_GOTREF(__pyx_t_1);
- __Pyx_GOTREF(__pyx_t_7);
-
- /* "fontTools/pens/momentsPen.py":9
- * except (AttributeError, ImportError):
- * # if cython not installed, use mock module with no-op decorators and types
- * from fontTools.misc import cython # <<<<<<<<<<<<<<
- *
- * COMPILED = False
- */
- __pyx_t_8 = PyList_New(1); if (unlikely(!__pyx_t_8)) __PYX_ERR(0, 9, __pyx_L4_except_error)
- __Pyx_GOTREF(__pyx_t_8);
- __Pyx_INCREF(__pyx_n_s_cython);
- __Pyx_GIVEREF(__pyx_n_s_cython);
- PyList_SET_ITEM(__pyx_t_8, 0, __pyx_n_s_cython);
- __pyx_t_9 = __Pyx_Import(__pyx_n_s_fontTools_misc, __pyx_t_8, 0); if (unlikely(!__pyx_t_9)) __PYX_ERR(0, 9, __pyx_L4_except_error)
- __Pyx_GOTREF(__pyx_t_9);
- __Pyx_DECREF(__pyx_t_8); __pyx_t_8 = 0;
- __pyx_t_8 = __Pyx_ImportFrom(__pyx_t_9, __pyx_n_s_cython); if (unlikely(!__pyx_t_8)) __PYX_ERR(0, 9, __pyx_L4_except_error)
- __Pyx_GOTREF(__pyx_t_8);
- if (PyDict_SetItem(__pyx_d, __pyx_n_s_cython, __pyx_t_8) < 0) __PYX_ERR(0, 9, __pyx_L4_except_error)
- __Pyx_DECREF(__pyx_t_8); __pyx_t_8 = 0;
- __Pyx_DECREF(__pyx_t_9); __pyx_t_9 = 0;
-
- /* "fontTools/pens/momentsPen.py":11
- * from fontTools.misc import cython
- *
- * COMPILED = False # <<<<<<<<<<<<<<
- *
- *
- */
- if (PyDict_SetItem(__pyx_d, __pyx_n_s_COMPILED, Py_False) < 0) __PYX_ERR(0, 11, __pyx_L4_except_error)
- __Pyx_XDECREF(__pyx_t_2); __pyx_t_2 = 0;
- __Pyx_XDECREF(__pyx_t_1); __pyx_t_1 = 0;
- __Pyx_XDECREF(__pyx_t_7); __pyx_t_7 = 0;
- goto __pyx_L3_exception_handled;
- }
- goto __pyx_L4_except_error;
- __pyx_L4_except_error:;
-
- /* "fontTools/pens/momentsPen.py":3
- * from fontTools.pens.basePen import BasePen, OpenContourError
- *
- * try: # <<<<<<<<<<<<<<
- * import cython
- *
- */
- __Pyx_XGIVEREF(__pyx_t_3);
- __Pyx_XGIVEREF(__pyx_t_4);
- __Pyx_XGIVEREF(__pyx_t_5);
- __Pyx_ExceptionReset(__pyx_t_3, __pyx_t_4, __pyx_t_5);
- goto __pyx_L1_error;
- __pyx_L3_exception_handled:;
- __Pyx_XGIVEREF(__pyx_t_3);
- __Pyx_XGIVEREF(__pyx_t_4);
- __Pyx_XGIVEREF(__pyx_t_5);
- __Pyx_ExceptionReset(__pyx_t_3, __pyx_t_4, __pyx_t_5);
- __pyx_L7_try_end:;
- }
-
- /* "fontTools/pens/momentsPen.py":14
- *
- *
- * __all__ = ["MomentsPen"] # <<<<<<<<<<<<<<
- *
- *
- */
- __pyx_t_7 = PyList_New(1); if (unlikely(!__pyx_t_7)) __PYX_ERR(0, 14, __pyx_L1_error)
- __Pyx_GOTREF(__pyx_t_7);
- __Pyx_INCREF(__pyx_n_u_MomentsPen);
- __Pyx_GIVEREF(__pyx_n_u_MomentsPen);
- PyList_SET_ITEM(__pyx_t_7, 0, __pyx_n_u_MomentsPen);
- if (PyDict_SetItem(__pyx_d, __pyx_n_s_all, __pyx_t_7) < 0) __PYX_ERR(0, 14, __pyx_L1_error)
- __Pyx_DECREF(__pyx_t_7); __pyx_t_7 = 0;
-
- /* "fontTools/pens/momentsPen.py":17
- *
- *
- * class MomentsPen(BasePen): # <<<<<<<<<<<<<<
- * def __init__(self, glyphset=None):
- * BasePen.__init__(self, glyphset)
- */
- __Pyx_GetModuleGlobalName(__pyx_t_7, __pyx_n_s_BasePen); if (unlikely(!__pyx_t_7)) __PYX_ERR(0, 17, __pyx_L1_error)
- __Pyx_GOTREF(__pyx_t_7);
- __pyx_t_1 = PyTuple_New(1); if (unlikely(!__pyx_t_1)) __PYX_ERR(0, 17, __pyx_L1_error)
- __Pyx_GOTREF(__pyx_t_1);
- __Pyx_GIVEREF(__pyx_t_7);
- PyTuple_SET_ITEM(__pyx_t_1, 0, __pyx_t_7);
- __pyx_t_7 = 0;
- __pyx_t_7 = __Pyx_CalculateMetaclass(NULL, __pyx_t_1); if (unlikely(!__pyx_t_7)) __PYX_ERR(0, 17, __pyx_L1_error)
- __Pyx_GOTREF(__pyx_t_7);
- __pyx_t_2 = __Pyx_Py3MetaclassPrepare(__pyx_t_7, __pyx_t_1, __pyx_n_s_MomentsPen, __pyx_n_s_MomentsPen, (PyObject *) NULL, __pyx_n_s_fontTools_pens_momentsPen, (PyObject *) NULL); if (unlikely(!__pyx_t_2)) __PYX_ERR(0, 17, __pyx_L1_error)
- __Pyx_GOTREF(__pyx_t_2);
-
- /* "fontTools/pens/momentsPen.py":18
- *
- * class MomentsPen(BasePen):
- * def __init__(self, glyphset=None): # <<<<<<<<<<<<<<
- * BasePen.__init__(self, glyphset)
- *
- */
- __pyx_t_9 = __Pyx_CyFunction_New(&__pyx_mdef_9fontTools_4pens_10momentsPen_10MomentsPen_1__init__, 0, __pyx_n_s_MomentsPen___init, NULL, __pyx_n_s_fontTools_pens_momentsPen, __pyx_d, ((PyObject *)__pyx_codeobj__2)); if (unlikely(!__pyx_t_9)) __PYX_ERR(0, 18, __pyx_L1_error)
- __Pyx_GOTREF(__pyx_t_9);
- __Pyx_CyFunction_SetDefaultsTuple(__pyx_t_9, __pyx_tuple__3);
- if (__Pyx_SetNameInClass(__pyx_t_2, __pyx_n_s_init, __pyx_t_9) < 0) __PYX_ERR(0, 18, __pyx_L1_error)
- __Pyx_DECREF(__pyx_t_9); __pyx_t_9 = 0;
-
- /* "fontTools/pens/momentsPen.py":28
- * self.momentYY = 0
- *
- * def _moveTo(self, p0): # <<<<<<<<<<<<<<
- * self.__startPoint = p0
- *
- */
- __pyx_t_9 = __Pyx_CyFunction_New(&__pyx_mdef_9fontTools_4pens_10momentsPen_10MomentsPen_3_moveTo, 0, __pyx_n_s_MomentsPen__moveTo, NULL, __pyx_n_s_fontTools_pens_momentsPen, __pyx_d, ((PyObject *)__pyx_codeobj__5)); if (unlikely(!__pyx_t_9)) __PYX_ERR(0, 28, __pyx_L1_error)
- __Pyx_GOTREF(__pyx_t_9);
- if (__Pyx_SetNameInClass(__pyx_t_2, __pyx_n_s_moveTo, __pyx_t_9) < 0) __PYX_ERR(0, 28, __pyx_L1_error)
- __Pyx_DECREF(__pyx_t_9); __pyx_t_9 = 0;
-
- /* "fontTools/pens/momentsPen.py":31
- * self.__startPoint = p0
- *
- * def _closePath(self): # <<<<<<<<<<<<<<
- * p0 = self._getCurrentPoint()
- * if p0 != self.__startPoint:
- */
- __pyx_t_9 = __Pyx_CyFunction_New(&__pyx_mdef_9fontTools_4pens_10momentsPen_10MomentsPen_5_closePath, 0, __pyx_n_s_MomentsPen__closePath, NULL, __pyx_n_s_fontTools_pens_momentsPen, __pyx_d, ((PyObject *)__pyx_codeobj__7)); if (unlikely(!__pyx_t_9)) __PYX_ERR(0, 31, __pyx_L1_error)
- __Pyx_GOTREF(__pyx_t_9);
- if (__Pyx_SetNameInClass(__pyx_t_2, __pyx_n_s_closePath, __pyx_t_9) < 0) __PYX_ERR(0, 31, __pyx_L1_error)
- __Pyx_DECREF(__pyx_t_9); __pyx_t_9 = 0;
-
- /* "fontTools/pens/momentsPen.py":36
- * self._lineTo(self.__startPoint)
- *
- * def _endPath(self): # <<<<<<<<<<<<<<
- * p0 = self._getCurrentPoint()
- * if p0 != self.__startPoint:
- */
- __pyx_t_9 = __Pyx_CyFunction_New(&__pyx_mdef_9fontTools_4pens_10momentsPen_10MomentsPen_7_endPath, 0, __pyx_n_s_MomentsPen__endPath, NULL, __pyx_n_s_fontTools_pens_momentsPen, __pyx_d, ((PyObject *)__pyx_codeobj__9)); if (unlikely(!__pyx_t_9)) __PYX_ERR(0, 36, __pyx_L1_error)
- __Pyx_GOTREF(__pyx_t_9);
- if (__Pyx_SetNameInClass(__pyx_t_2, __pyx_n_s_endPath, __pyx_t_9) < 0) __PYX_ERR(0, 36, __pyx_L1_error)
- __Pyx_DECREF(__pyx_t_9); __pyx_t_9 = 0;
-
- /* "fontTools/pens/momentsPen.py":57
- * @cython.locals(x0=cython.double, y0=cython.double)
- * @cython.locals(x1=cython.double, y1=cython.double)
- * def _lineTo(self, p1): # <<<<<<<<<<<<<<
- * x0, y0 = self._getCurrentPoint()
- * x1, y1 = p1
- */
- __pyx_t_9 = __Pyx_CyFunction_New(&__pyx_mdef_9fontTools_4pens_10momentsPen_10MomentsPen_9_lineTo, 0, __pyx_n_s_MomentsPen__lineTo, NULL, __pyx_n_s_fontTools_pens_momentsPen, __pyx_d, ((PyObject *)__pyx_codeobj__11)); if (unlikely(!__pyx_t_9)) __PYX_ERR(0, 57, __pyx_L1_error)
- __Pyx_GOTREF(__pyx_t_9);
- if (__Pyx_SetNameInClass(__pyx_t_2, __pyx_n_s_lineTo, __pyx_t_9) < 0) __PYX_ERR(0, 57, __pyx_L1_error)
- __Pyx_DECREF(__pyx_t_9); __pyx_t_9 = 0;
-
- /* "fontTools/pens/momentsPen.py":159
- * @cython.locals(x1=cython.double, y1=cython.double)
- * @cython.locals(x2=cython.double, y2=cython.double)
- * def _qCurveToOne(self, p1, p2): # <<<<<<<<<<<<<<
- * x0, y0 = self._getCurrentPoint()
- * x1, y1 = p1
- */
- __pyx_t_9 = __Pyx_CyFunction_New(&__pyx_mdef_9fontTools_4pens_10momentsPen_10MomentsPen_11_qCurveToOne, 0, __pyx_n_s_MomentsPen__qCurveToOne, NULL, __pyx_n_s_fontTools_pens_momentsPen, __pyx_d, ((PyObject *)__pyx_codeobj__13)); if (unlikely(!__pyx_t_9)) __PYX_ERR(0, 159, __pyx_L1_error)
- __Pyx_GOTREF(__pyx_t_9);
- if (__Pyx_SetNameInClass(__pyx_t_2, __pyx_n_s_qCurveToOne, __pyx_t_9) < 0) __PYX_ERR(0, 159, __pyx_L1_error)
- __Pyx_DECREF(__pyx_t_9); __pyx_t_9 = 0;
-
- /* "fontTools/pens/momentsPen.py":450
- * @cython.locals(x2=cython.double, y2=cython.double)
- * @cython.locals(x3=cython.double, y3=cython.double)
- * def _curveToOne(self, p1, p2, p3): # <<<<<<<<<<<<<<
- * x0, y0 = self._getCurrentPoint()
- * x1, y1 = p1
- */
- __pyx_t_9 = __Pyx_CyFunction_New(&__pyx_mdef_9fontTools_4pens_10momentsPen_10MomentsPen_13_curveToOne, 0, __pyx_n_s_MomentsPen__curveToOne, NULL, __pyx_n_s_fontTools_pens_momentsPen, __pyx_d, ((PyObject *)__pyx_codeobj__15)); if (unlikely(!__pyx_t_9)) __PYX_ERR(0, 450, __pyx_L1_error)
- __Pyx_GOTREF(__pyx_t_9);
- if (__Pyx_SetNameInClass(__pyx_t_2, __pyx_n_s_curveToOne, __pyx_t_9) < 0) __PYX_ERR(0, 450, __pyx_L1_error)
- __Pyx_DECREF(__pyx_t_9); __pyx_t_9 = 0;
-
- /* "fontTools/pens/momentsPen.py":17
- *
- *
- * class MomentsPen(BasePen): # <<<<<<<<<<<<<<
- * def __init__(self, glyphset=None):
- * BasePen.__init__(self, glyphset)
- */
- __pyx_t_9 = __Pyx_Py3ClassCreate(__pyx_t_7, __pyx_n_s_MomentsPen, __pyx_t_1, __pyx_t_2, NULL, 0, 0); if (unlikely(!__pyx_t_9)) __PYX_ERR(0, 17, __pyx_L1_error)
- __Pyx_GOTREF(__pyx_t_9);
- if (PyDict_SetItem(__pyx_d, __pyx_n_s_MomentsPen, __pyx_t_9) < 0) __PYX_ERR(0, 17, __pyx_L1_error)
- __Pyx_DECREF(__pyx_t_9); __pyx_t_9 = 0;
- __Pyx_DECREF(__pyx_t_2); __pyx_t_2 = 0;
- __Pyx_DECREF(__pyx_t_7); __pyx_t_7 = 0;
- __Pyx_DECREF(__pyx_t_1); __pyx_t_1 = 0;
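-
- /* Usage sketch (Python side): a minimal example, assuming "glyphset" maps
- * glyph names to drawable glyph objects (the names here are illustrative):
- *
- * from fontTools.pens.momentsPen import MomentsPen
- * pen = MomentsPen(glyphset=glyphset)
- * glyphset["a"].draw(pen)
- * print(pen.area, pen.momentX, pen.momentY)
- *
- * The pen accumulates the outline's area and first/second moments (via
- * Green's theorem) as segments are drawn through it. */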
-
- /* "fontTools/pens/momentsPen.py":869
- *
- *
- * if __name__ == "__main__": # <<<<<<<<<<<<<<
- * from fontTools.misc.symfont import x, y, printGreenPen
- *
- */
- __Pyx_GetModuleGlobalName(__pyx_t_1, __pyx_n_s_name); if (unlikely(!__pyx_t_1)) __PYX_ERR(0, 869, __pyx_L1_error)
- __Pyx_GOTREF(__pyx_t_1);
- __pyx_t_10 = (__Pyx_PyUnicode_Equals(__pyx_t_1, __pyx_n_u_main, Py_EQ)); if (unlikely(__pyx_t_10 < 0)) __PYX_ERR(0, 869, __pyx_L1_error)
- __Pyx_DECREF(__pyx_t_1); __pyx_t_1 = 0;
- if (__pyx_t_10) {
-
- /* "fontTools/pens/momentsPen.py":870
- *
- * if __name__ == "__main__":
- * from fontTools.misc.symfont import x, y, printGreenPen # <<<<<<<<<<<<<<
- *
- * printGreenPen(
- */
- __pyx_t_1 = PyList_New(3); if (unlikely(!__pyx_t_1)) __PYX_ERR(0, 870, __pyx_L1_error)
- __Pyx_GOTREF(__pyx_t_1);
- __Pyx_INCREF(__pyx_n_s_x);
- __Pyx_GIVEREF(__pyx_n_s_x);
- PyList_SET_ITEM(__pyx_t_1, 0, __pyx_n_s_x);
- __Pyx_INCREF(__pyx_n_s_y);
- __Pyx_GIVEREF(__pyx_n_s_y);
- PyList_SET_ITEM(__pyx_t_1, 1, __pyx_n_s_y);
- __Pyx_INCREF(__pyx_n_s_printGreenPen);
- __Pyx_GIVEREF(__pyx_n_s_printGreenPen);
- PyList_SET_ITEM(__pyx_t_1, 2, __pyx_n_s_printGreenPen);
- __pyx_t_7 = __Pyx_Import(__pyx_n_s_fontTools_misc_symfont, __pyx_t_1, 0); if (unlikely(!__pyx_t_7)) __PYX_ERR(0, 870, __pyx_L1_error)
- __Pyx_GOTREF(__pyx_t_7);
- __Pyx_DECREF(__pyx_t_1); __pyx_t_1 = 0;
- __pyx_t_1 = __Pyx_ImportFrom(__pyx_t_7, __pyx_n_s_x); if (unlikely(!__pyx_t_1)) __PYX_ERR(0, 870, __pyx_L1_error)
- __Pyx_GOTREF(__pyx_t_1);
- if (PyDict_SetItem(__pyx_d, __pyx_n_s_x, __pyx_t_1) < 0) __PYX_ERR(0, 870, __pyx_L1_error)
- __Pyx_DECREF(__pyx_t_1); __pyx_t_1 = 0;
- __pyx_t_1 = __Pyx_ImportFrom(__pyx_t_7, __pyx_n_s_y); if (unlikely(!__pyx_t_1)) __PYX_ERR(0, 870, __pyx_L1_error)
- __Pyx_GOTREF(__pyx_t_1);
- if (PyDict_SetItem(__pyx_d, __pyx_n_s_y, __pyx_t_1) < 0) __PYX_ERR(0, 870, __pyx_L1_error)
- __Pyx_DECREF(__pyx_t_1); __pyx_t_1 = 0;
- __pyx_t_1 = __Pyx_ImportFrom(__pyx_t_7, __pyx_n_s_printGreenPen); if (unlikely(!__pyx_t_1)) __PYX_ERR(0, 870, __pyx_L1_error)
- __Pyx_GOTREF(__pyx_t_1);
- if (PyDict_SetItem(__pyx_d, __pyx_n_s_printGreenPen, __pyx_t_1) < 0) __PYX_ERR(0, 870, __pyx_L1_error)
- __Pyx_DECREF(__pyx_t_1); __pyx_t_1 = 0;
- __Pyx_DECREF(__pyx_t_7); __pyx_t_7 = 0;
-
- /* "fontTools/pens/momentsPen.py":872
- * from fontTools.misc.symfont import x, y, printGreenPen
- *
- * printGreenPen( # <<<<<<<<<<<<<<
- * "MomentsPen",
- * [
- */
- __Pyx_GetModuleGlobalName(__pyx_t_7, __pyx_n_s_printGreenPen); if (unlikely(!__pyx_t_7)) __PYX_ERR(0, 872, __pyx_L1_error)
- __Pyx_GOTREF(__pyx_t_7);
-
- /* "fontTools/pens/momentsPen.py":876
- * [
- * ("area", 1),
- * ("momentX", x), # <<<<<<<<<<<<<<
- * ("momentY", y),
- * ("momentXX", x**2),
- */
- __Pyx_GetModuleGlobalName(__pyx_t_1, __pyx_n_s_x); if (unlikely(!__pyx_t_1)) __PYX_ERR(0, 876, __pyx_L1_error)
- __Pyx_GOTREF(__pyx_t_1);
- __pyx_t_2 = PyTuple_New(2); if (unlikely(!__pyx_t_2)) __PYX_ERR(0, 876, __pyx_L1_error)
- __Pyx_GOTREF(__pyx_t_2);
- __Pyx_INCREF(__pyx_n_u_momentX);
- __Pyx_GIVEREF(__pyx_n_u_momentX);
- PyTuple_SET_ITEM(__pyx_t_2, 0, __pyx_n_u_momentX);
- __Pyx_GIVEREF(__pyx_t_1);
- PyTuple_SET_ITEM(__pyx_t_2, 1, __pyx_t_1);
- __pyx_t_1 = 0;
-
- /* "fontTools/pens/momentsPen.py":877
- * ("area", 1),
- * ("momentX", x),
- * ("momentY", y), # <<<<<<<<<<<<<<
- * ("momentXX", x**2),
- * ("momentXY", x * y),
- */
- __Pyx_GetModuleGlobalName(__pyx_t_1, __pyx_n_s_y); if (unlikely(!__pyx_t_1)) __PYX_ERR(0, 877, __pyx_L1_error)
- __Pyx_GOTREF(__pyx_t_1);
- __pyx_t_9 = PyTuple_New(2); if (unlikely(!__pyx_t_9)) __PYX_ERR(0, 877, __pyx_L1_error)
- __Pyx_GOTREF(__pyx_t_9);
- __Pyx_INCREF(__pyx_n_u_momentY);
- __Pyx_GIVEREF(__pyx_n_u_momentY);
- PyTuple_SET_ITEM(__pyx_t_9, 0, __pyx_n_u_momentY);
- __Pyx_GIVEREF(__pyx_t_1);
- PyTuple_SET_ITEM(__pyx_t_9, 1, __pyx_t_1);
- __pyx_t_1 = 0;
-
- /* "fontTools/pens/momentsPen.py":878
- * ("momentX", x),
- * ("momentY", y),
- * ("momentXX", x**2), # <<<<<<<<<<<<<<
- * ("momentXY", x * y),
- * ("momentYY", y**2),
- */
- __Pyx_GetModuleGlobalName(__pyx_t_1, __pyx_n_s_x); if (unlikely(!__pyx_t_1)) __PYX_ERR(0, 878, __pyx_L1_error)
- __Pyx_GOTREF(__pyx_t_1);
- __pyx_t_8 = PyNumber_Power(__pyx_t_1, __pyx_int_2, Py_None); if (unlikely(!__pyx_t_8)) __PYX_ERR(0, 878, __pyx_L1_error)
- __Pyx_GOTREF(__pyx_t_8);
- __Pyx_DECREF(__pyx_t_1); __pyx_t_1 = 0;
- __pyx_t_1 = PyTuple_New(2); if (unlikely(!__pyx_t_1)) __PYX_ERR(0, 878, __pyx_L1_error)
- __Pyx_GOTREF(__pyx_t_1);
- __Pyx_INCREF(__pyx_n_u_momentXX);
- __Pyx_GIVEREF(__pyx_n_u_momentXX);
- PyTuple_SET_ITEM(__pyx_t_1, 0, __pyx_n_u_momentXX);
- __Pyx_GIVEREF(__pyx_t_8);
- PyTuple_SET_ITEM(__pyx_t_1, 1, __pyx_t_8);
- __pyx_t_8 = 0;
-
- /* "fontTools/pens/momentsPen.py":879
- * ("momentY", y),
- * ("momentXX", x**2),
- * ("momentXY", x * y), # <<<<<<<<<<<<<<
- * ("momentYY", y**2),
- * ],
- */
- __Pyx_GetModuleGlobalName(__pyx_t_8, __pyx_n_s_x); if (unlikely(!__pyx_t_8)) __PYX_ERR(0, 879, __pyx_L1_error)
- __Pyx_GOTREF(__pyx_t_8);
- __Pyx_GetModuleGlobalName(__pyx_t_11, __pyx_n_s_y); if (unlikely(!__pyx_t_11)) __PYX_ERR(0, 879, __pyx_L1_error)
- __Pyx_GOTREF(__pyx_t_11);
- __pyx_t_12 = PyNumber_Multiply(__pyx_t_8, __pyx_t_11); if (unlikely(!__pyx_t_12)) __PYX_ERR(0, 879, __pyx_L1_error)
- __Pyx_GOTREF(__pyx_t_12);
- __Pyx_DECREF(__pyx_t_8); __pyx_t_8 = 0;
- __Pyx_DECREF(__pyx_t_11); __pyx_t_11 = 0;
- __pyx_t_11 = PyTuple_New(2); if (unlikely(!__pyx_t_11)) __PYX_ERR(0, 879, __pyx_L1_error)
- __Pyx_GOTREF(__pyx_t_11);
- __Pyx_INCREF(__pyx_n_u_momentXY);
- __Pyx_GIVEREF(__pyx_n_u_momentXY);
- PyTuple_SET_ITEM(__pyx_t_11, 0, __pyx_n_u_momentXY);
- __Pyx_GIVEREF(__pyx_t_12);
- PyTuple_SET_ITEM(__pyx_t_11, 1, __pyx_t_12);
- __pyx_t_12 = 0;
-
- /* "fontTools/pens/momentsPen.py":880
- * ("momentXX", x**2),
- * ("momentXY", x * y),
- * ("momentYY", y**2), # <<<<<<<<<<<<<<
- * ],
- * )
- */
- __Pyx_GetModuleGlobalName(__pyx_t_12, __pyx_n_s_y); if (unlikely(!__pyx_t_12)) __PYX_ERR(0, 880, __pyx_L1_error)
- __Pyx_GOTREF(__pyx_t_12);
- __pyx_t_8 = PyNumber_Power(__pyx_t_12, __pyx_int_2, Py_None); if (unlikely(!__pyx_t_8)) __PYX_ERR(0, 880, __pyx_L1_error)
- __Pyx_GOTREF(__pyx_t_8);
- __Pyx_DECREF(__pyx_t_12); __pyx_t_12 = 0;
- __pyx_t_12 = PyTuple_New(2); if (unlikely(!__pyx_t_12)) __PYX_ERR(0, 880, __pyx_L1_error)
- __Pyx_GOTREF(__pyx_t_12);
- __Pyx_INCREF(__pyx_n_u_momentYY);
- __Pyx_GIVEREF(__pyx_n_u_momentYY);
- PyTuple_SET_ITEM(__pyx_t_12, 0, __pyx_n_u_momentYY);
- __Pyx_GIVEREF(__pyx_t_8);
- PyTuple_SET_ITEM(__pyx_t_12, 1, __pyx_t_8);
- __pyx_t_8 = 0;
-
- /* "fontTools/pens/momentsPen.py":874
- * printGreenPen(
- * "MomentsPen",
- * [ # <<<<<<<<<<<<<<
- * ("area", 1),
- * ("momentX", x),
- */
- __pyx_t_8 = PyList_New(6); if (unlikely(!__pyx_t_8)) __PYX_ERR(0, 874, __pyx_L1_error)
- __Pyx_GOTREF(__pyx_t_8);
- __Pyx_INCREF(__pyx_tuple__16);
- __Pyx_GIVEREF(__pyx_tuple__16);
- PyList_SET_ITEM(__pyx_t_8, 0, __pyx_tuple__16);
- __Pyx_GIVEREF(__pyx_t_2);
- PyList_SET_ITEM(__pyx_t_8, 1, __pyx_t_2);
- __Pyx_GIVEREF(__pyx_t_9);
- PyList_SET_ITEM(__pyx_t_8, 2, __pyx_t_9);
- __Pyx_GIVEREF(__pyx_t_1);
- PyList_SET_ITEM(__pyx_t_8, 3, __pyx_t_1);
- __Pyx_GIVEREF(__pyx_t_11);
- PyList_SET_ITEM(__pyx_t_8, 4, __pyx_t_11);
- __Pyx_GIVEREF(__pyx_t_12);
- PyList_SET_ITEM(__pyx_t_8, 5, __pyx_t_12);
- __pyx_t_2 = 0;
- __pyx_t_9 = 0;
- __pyx_t_1 = 0;
- __pyx_t_11 = 0;
- __pyx_t_12 = 0;
-
- /* "fontTools/pens/momentsPen.py":872
- * from fontTools.misc.symfont import x, y, printGreenPen
- *
- * printGreenPen( # <<<<<<<<<<<<<<
- * "MomentsPen",
- * [
- */
- __pyx_t_12 = PyTuple_New(2); if (unlikely(!__pyx_t_12)) __PYX_ERR(0, 872, __pyx_L1_error)
- __Pyx_GOTREF(__pyx_t_12);
- __Pyx_INCREF(__pyx_n_u_MomentsPen);
- __Pyx_GIVEREF(__pyx_n_u_MomentsPen);
- PyTuple_SET_ITEM(__pyx_t_12, 0, __pyx_n_u_MomentsPen);
- __Pyx_GIVEREF(__pyx_t_8);
- PyTuple_SET_ITEM(__pyx_t_12, 1, __pyx_t_8);
- __pyx_t_8 = 0;
- __pyx_t_8 = __Pyx_PyObject_Call(__pyx_t_7, __pyx_t_12, NULL); if (unlikely(!__pyx_t_8)) __PYX_ERR(0, 872, __pyx_L1_error)
- __Pyx_GOTREF(__pyx_t_8);
- __Pyx_DECREF(__pyx_t_7); __pyx_t_7 = 0;
- __Pyx_DECREF(__pyx_t_12); __pyx_t_12 = 0;
- __Pyx_DECREF(__pyx_t_8); __pyx_t_8 = 0;
-
- /* "fontTools/pens/momentsPen.py":869
- *
- *
- * if __name__ == "__main__": # <<<<<<<<<<<<<<
- * from fontTools.misc.symfont import x, y, printGreenPen
- *
- */
- }
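-
- /* The __main__ block above is the code generator: running momentsPen.py as
- * a script is expected to call printGreenPen() from fontTools.misc.symfont
- * to re-derive the polynomial bodies of _lineTo/_qCurveToOne/_curveToOne
- * symbolically from the (name, integrand) pairs listed here. */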
-
- /* "fontTools/pens/momentsPen.py":1
- * from fontTools.pens.basePen import BasePen, OpenContourError # <<<<<<<<<<<<<<
- *
- * try:
- */
- __pyx_t_8 = __Pyx_PyDict_NewPresized(0); if (unlikely(!__pyx_t_8)) __PYX_ERR(0, 1, __pyx_L1_error)
- __Pyx_GOTREF(__pyx_t_8);
- if (PyDict_SetItem(__pyx_d, __pyx_n_s_test, __pyx_t_8) < 0) __PYX_ERR(0, 1, __pyx_L1_error)
- __Pyx_DECREF(__pyx_t_8); __pyx_t_8 = 0;
-
- /*--- Wrapped vars code ---*/
-
- goto __pyx_L0;
- __pyx_L1_error:;
- __Pyx_XDECREF(__pyx_t_1);
- __Pyx_XDECREF(__pyx_t_2);
- __Pyx_XDECREF(__pyx_t_7);
- __Pyx_XDECREF(__pyx_t_8);
- __Pyx_XDECREF(__pyx_t_9);
- __Pyx_XDECREF(__pyx_t_11);
- __Pyx_XDECREF(__pyx_t_12);
- if (__pyx_m) {
- if (__pyx_d) {
- __Pyx_AddTraceback("init fontTools.pens.momentsPen", __pyx_clineno, __pyx_lineno, __pyx_filename);
- }
- Py_CLEAR(__pyx_m);
- } else if (!PyErr_Occurred()) {
- PyErr_SetString(PyExc_ImportError, "init fontTools.pens.momentsPen");
- }
- __pyx_L0:;
- __Pyx_RefNannyFinishContext();
- #if CYTHON_PEP489_MULTI_PHASE_INIT
- return (__pyx_m != NULL) ? 0 : -1;
- #elif PY_MAJOR_VERSION >= 3
- return __pyx_m;
- #else
- return;
- #endif
-}
-
-/* --- Runtime support code --- */
-/* Refnanny */
-#if CYTHON_REFNANNY
-static __Pyx_RefNannyAPIStruct *__Pyx_RefNannyImportAPI(const char *modname) {
- PyObject *m = NULL, *p = NULL;
- void *r = NULL;
- m = PyImport_ImportModule(modname);
- if (!m) goto end;
- p = PyObject_GetAttrString(m, "RefNannyAPI");
- if (!p) goto end;
- r = PyLong_AsVoidPtr(p);
-end:
- Py_XDECREF(p);
- Py_XDECREF(m);
- return (__Pyx_RefNannyAPIStruct *)r;
-}
-#endif
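-
-/* Refnanny is Cython's debug-build reference-count checker; in normal builds
- * CYTHON_REFNANNY is off and the __Pyx_RefNanny* calls compile to no-ops. */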
-
-/* PyObjectGetAttrStr */
-#if CYTHON_USE_TYPE_SLOTS
-static CYTHON_INLINE PyObject* __Pyx_PyObject_GetAttrStr(PyObject* obj, PyObject* attr_name) {
- PyTypeObject* tp = Py_TYPE(obj);
- if (likely(tp->tp_getattro))
- return tp->tp_getattro(obj, attr_name);
-#if PY_MAJOR_VERSION < 3
- if (likely(tp->tp_getattr))
- return tp->tp_getattr(obj, PyString_AS_STRING(attr_name));
-#endif
- return PyObject_GetAttr(obj, attr_name);
-}
-#endif
-
-/* GetBuiltinName */
-static PyObject *__Pyx_GetBuiltinName(PyObject *name) {
- PyObject* result = __Pyx_PyObject_GetAttrStr(__pyx_b, name);
- if (unlikely(!result)) {
- PyErr_Format(PyExc_NameError,
-#if PY_MAJOR_VERSION >= 3
- "name '%U' is not defined", name);
-#else
- "name '%.200s' is not defined", PyString_AS_STRING(name));
-#endif
- }
- return result;
-}
-
-/* RaiseDoubleKeywords */
-static void __Pyx_RaiseDoubleKeywordsError(
- const char* func_name,
- PyObject* kw_name)
-{
- PyErr_Format(PyExc_TypeError,
- #if PY_MAJOR_VERSION >= 3
- "%s() got multiple values for keyword argument '%U'", func_name, kw_name);
- #else
- "%s() got multiple values for keyword argument '%s'", func_name,
- PyString_AsString(kw_name));
- #endif
-}
-
-/* ParseKeywords */
-static int __Pyx_ParseOptionalKeywords(
- PyObject *kwds,
- PyObject **argnames[],
- PyObject *kwds2,
- PyObject *values[],
- Py_ssize_t num_pos_args,
- const char* function_name)
-{
- PyObject *key = 0, *value = 0;
- Py_ssize_t pos = 0;
- PyObject*** name;
- PyObject*** first_kw_arg = argnames + num_pos_args;
- while (PyDict_Next(kwds, &pos, &key, &value)) {
- name = first_kw_arg;
- while (*name && (**name != key)) name++;
- if (*name) {
- values[name-argnames] = value;
- continue;
- }
- name = first_kw_arg;
- #if PY_MAJOR_VERSION < 3
- if (likely(PyString_Check(key))) {
- while (*name) {
- if ((CYTHON_COMPILING_IN_PYPY || PyString_GET_SIZE(**name) == PyString_GET_SIZE(key))
- && _PyString_Eq(**name, key)) {
- values[name-argnames] = value;
- break;
- }
- name++;
- }
- if (*name) continue;
- else {
- PyObject*** argname = argnames;
- while (argname != first_kw_arg) {
- if ((**argname == key) || (
- (CYTHON_COMPILING_IN_PYPY || PyString_GET_SIZE(**argname) == PyString_GET_SIZE(key))
- && _PyString_Eq(**argname, key))) {
- goto arg_passed_twice;
- }
- argname++;
- }
- }
- } else
- #endif
- if (likely(PyUnicode_Check(key))) {
- while (*name) {
- int cmp = (**name == key) ? 0 :
- #if !CYTHON_COMPILING_IN_PYPY && PY_MAJOR_VERSION >= 3
- (__Pyx_PyUnicode_GET_LENGTH(**name) != __Pyx_PyUnicode_GET_LENGTH(key)) ? 1 :
- #endif
- PyUnicode_Compare(**name, key);
- if (cmp < 0 && unlikely(PyErr_Occurred())) goto bad;
- if (cmp == 0) {
- values[name-argnames] = value;
- break;
- }
- name++;
- }
- if (*name) continue;
- else {
- PyObject*** argname = argnames;
- while (argname != first_kw_arg) {
- int cmp = (**argname == key) ? 0 :
- #if !CYTHON_COMPILING_IN_PYPY && PY_MAJOR_VERSION >= 3
- (__Pyx_PyUnicode_GET_LENGTH(**argname) != __Pyx_PyUnicode_GET_LENGTH(key)) ? 1 :
- #endif
- PyUnicode_Compare(**argname, key);
- if (cmp < 0 && unlikely(PyErr_Occurred())) goto bad;
- if (cmp == 0) goto arg_passed_twice;
- argname++;
- }
- }
- } else
- goto invalid_keyword_type;
- if (kwds2) {
- if (unlikely(PyDict_SetItem(kwds2, key, value))) goto bad;
- } else {
- goto invalid_keyword;
- }
- }
- return 0;
-arg_passed_twice:
- __Pyx_RaiseDoubleKeywordsError(function_name, key);
- goto bad;
-invalid_keyword_type:
- PyErr_Format(PyExc_TypeError,
- "%.200s() keywords must be strings", function_name);
- goto bad;
-invalid_keyword:
- PyErr_Format(PyExc_TypeError,
- #if PY_MAJOR_VERSION < 3
- "%.200s() got an unexpected keyword argument '%.200s'",
- function_name, PyString_AsString(key));
- #else
- "%s() got an unexpected keyword argument '%U'",
- function_name, key);
- #endif
-bad:
- return -1;
-}
-
-/* RaiseArgTupleInvalid */
-static void __Pyx_RaiseArgtupleInvalid(
- const char* func_name,
- int exact,
- Py_ssize_t num_min,
- Py_ssize_t num_max,
- Py_ssize_t num_found)
-{
- Py_ssize_t num_expected;
- const char *more_or_less;
- if (num_found < num_min) {
- num_expected = num_min;
- more_or_less = "at least";
- } else {
- num_expected = num_max;
- more_or_less = "at most";
- }
- if (exact) {
- more_or_less = "exactly";
- }
- PyErr_Format(PyExc_TypeError,
- "%.200s() takes %.8s %" CYTHON_FORMAT_SSIZE_T "d positional argument%.1s (%" CYTHON_FORMAT_SSIZE_T "d given)",
- func_name, more_or_less, num_expected,
- (num_expected == 1) ? "" : "s", num_found);
-}
-
-/* PyDictVersioning */
-#if CYTHON_USE_DICT_VERSIONS && CYTHON_USE_TYPE_SLOTS
-static CYTHON_INLINE PY_UINT64_T __Pyx_get_tp_dict_version(PyObject *obj) {
- PyObject *dict = Py_TYPE(obj)->tp_dict;
- return likely(dict) ? __PYX_GET_DICT_VERSION(dict) : 0;
-}
-static CYTHON_INLINE PY_UINT64_T __Pyx_get_object_dict_version(PyObject *obj) {
- PyObject **dictptr = NULL;
- Py_ssize_t offset = Py_TYPE(obj)->tp_dictoffset;
- if (offset) {
-#if CYTHON_COMPILING_IN_CPYTHON
- dictptr = (likely(offset > 0)) ? (PyObject **) ((char *)obj + offset) : _PyObject_GetDictPtr(obj);
-#else
- dictptr = _PyObject_GetDictPtr(obj);
-#endif
- }
- return (dictptr && *dictptr) ? __PYX_GET_DICT_VERSION(*dictptr) : 0;
-}
-static CYTHON_INLINE int __Pyx_object_dict_version_matches(PyObject* obj, PY_UINT64_T tp_dict_version, PY_UINT64_T obj_dict_version) {
- PyObject *dict = Py_TYPE(obj)->tp_dict;
- if (unlikely(!dict) || unlikely(tp_dict_version != __PYX_GET_DICT_VERSION(dict)))
- return 0;
- return obj_dict_version == __Pyx_get_object_dict_version(obj);
-}
-#endif
-
-/* GetModuleGlobalName */
-#if CYTHON_USE_DICT_VERSIONS
-static PyObject *__Pyx__GetModuleGlobalName(PyObject *name, PY_UINT64_T *dict_version, PyObject **dict_cached_value)
-#else
-static CYTHON_INLINE PyObject *__Pyx__GetModuleGlobalName(PyObject *name)
-#endif
-{
- PyObject *result;
-#if !CYTHON_AVOID_BORROWED_REFS
-#if CYTHON_COMPILING_IN_CPYTHON && PY_VERSION_HEX >= 0x030500A1
- result = _PyDict_GetItem_KnownHash(__pyx_d, name, ((PyASCIIObject *) name)->hash);
- __PYX_UPDATE_DICT_CACHE(__pyx_d, result, *dict_cached_value, *dict_version)
- if (likely(result)) {
- return __Pyx_NewRef(result);
- } else if (unlikely(PyErr_Occurred())) {
- return NULL;
- }
-#else
- result = PyDict_GetItem(__pyx_d, name);
- __PYX_UPDATE_DICT_CACHE(__pyx_d, result, *dict_cached_value, *dict_version)
- if (likely(result)) {
- return __Pyx_NewRef(result);
- }
-#endif
-#else
- result = PyObject_GetItem(__pyx_d, name);
- __PYX_UPDATE_DICT_CACHE(__pyx_d, result, *dict_cached_value, *dict_version)
- if (likely(result)) {
- return __Pyx_NewRef(result);
- }
- PyErr_Clear();
-#endif
- return __Pyx_GetBuiltinName(name);
-}
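-
-/* __Pyx_GetModuleGlobalName caches the module dict's version tag (PEP 509)
- * so repeated lookups of the same global can skip the dict probe entirely;
- * on a miss it falls back to builtins via __Pyx_GetBuiltinName. */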
-
-/* PyFunctionFastCall */
-#if CYTHON_FAST_PYCALL
-static PyObject* __Pyx_PyFunction_FastCallNoKw(PyCodeObject *co, PyObject **args, Py_ssize_t na,
- PyObject *globals) {
- PyFrameObject *f;
- PyThreadState *tstate = __Pyx_PyThreadState_Current;
- PyObject **fastlocals;
- Py_ssize_t i;
- PyObject *result;
- assert(globals != NULL);
- /* XXX Perhaps we should create a specialized
- PyFrame_New() that doesn't take locals, but does
- take builtins without sanity checking them.
- */
- assert(tstate != NULL);
- f = PyFrame_New(tstate, co, globals, NULL);
- if (f == NULL) {
- return NULL;
- }
- fastlocals = __Pyx_PyFrame_GetLocalsplus(f);
- for (i = 0; i < na; i++) {
- Py_INCREF(*args);
- fastlocals[i] = *args++;
- }
- result = PyEval_EvalFrameEx(f,0);
- ++tstate->recursion_depth;
- Py_DECREF(f);
- --tstate->recursion_depth;
- return result;
-}
-#if 1 || PY_VERSION_HEX < 0x030600B1
-static PyObject *__Pyx_PyFunction_FastCallDict(PyObject *func, PyObject **args, Py_ssize_t nargs, PyObject *kwargs) {
- PyCodeObject *co = (PyCodeObject *)PyFunction_GET_CODE(func);
- PyObject *globals = PyFunction_GET_GLOBALS(func);
- PyObject *argdefs = PyFunction_GET_DEFAULTS(func);
- PyObject *closure;
-#if PY_MAJOR_VERSION >= 3
- PyObject *kwdefs;
-#endif
- PyObject *kwtuple, **k;
- PyObject **d;
- Py_ssize_t nd;
- Py_ssize_t nk;
- PyObject *result;
- assert(kwargs == NULL || PyDict_Check(kwargs));
- nk = kwargs ? PyDict_Size(kwargs) : 0;
- if (Py_EnterRecursiveCall((char*)" while calling a Python object")) {
- return NULL;
- }
- if (
-#if PY_MAJOR_VERSION >= 3
- co->co_kwonlyargcount == 0 &&
-#endif
- likely(kwargs == NULL || nk == 0) &&
- co->co_flags == (CO_OPTIMIZED | CO_NEWLOCALS | CO_NOFREE)) {
- if (argdefs == NULL && co->co_argcount == nargs) {
- result = __Pyx_PyFunction_FastCallNoKw(co, args, nargs, globals);
- goto done;
- }
- else if (nargs == 0 && argdefs != NULL
- && co->co_argcount == Py_SIZE(argdefs)) {
- /* function called with no arguments, but all parameters have
- a default value: use default values as arguments .*/
- args = &PyTuple_GET_ITEM(argdefs, 0);
- result =__Pyx_PyFunction_FastCallNoKw(co, args, Py_SIZE(argdefs), globals);
- goto done;
- }
- }
- if (kwargs != NULL) {
- Py_ssize_t pos, i;
- kwtuple = PyTuple_New(2 * nk);
- if (kwtuple == NULL) {
- result = NULL;
- goto done;
- }
- k = &PyTuple_GET_ITEM(kwtuple, 0);
- pos = i = 0;
- while (PyDict_Next(kwargs, &pos, &k[i], &k[i+1])) {
- Py_INCREF(k[i]);
- Py_INCREF(k[i+1]);
- i += 2;
- }
- nk = i / 2;
- }
- else {
- kwtuple = NULL;
- k = NULL;
- }
- closure = PyFunction_GET_CLOSURE(func);
-#if PY_MAJOR_VERSION >= 3
- kwdefs = PyFunction_GET_KW_DEFAULTS(func);
-#endif
- if (argdefs != NULL) {
- d = &PyTuple_GET_ITEM(argdefs, 0);
- nd = Py_SIZE(argdefs);
- }
- else {
- d = NULL;
- nd = 0;
- }
-#if PY_MAJOR_VERSION >= 3
- result = PyEval_EvalCodeEx((PyObject*)co, globals, (PyObject *)NULL,
- args, (int)nargs,
- k, (int)nk,
- d, (int)nd, kwdefs, closure);
-#else
- result = PyEval_EvalCodeEx(co, globals, (PyObject *)NULL,
- args, (int)nargs,
- k, (int)nk,
- d, (int)nd, closure);
-#endif
- Py_XDECREF(kwtuple);
-done:
- Py_LeaveRecursiveCall();
- return result;
-}
-#endif
-#endif
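-
-/* The fast-call paths above avoid building an argument tuple: for plain
- * Python functions a frame is filled in directly (FastCallNoKw), and only
- * the general keyword-argument case pays for PyEval_EvalCodeEx. */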
-
-/* PyCFunctionFastCall */
-#if CYTHON_FAST_PYCCALL
-static CYTHON_INLINE PyObject * __Pyx_PyCFunction_FastCall(PyObject *func_obj, PyObject **args, Py_ssize_t nargs) {
- PyCFunctionObject *func = (PyCFunctionObject*)func_obj;
- PyCFunction meth = PyCFunction_GET_FUNCTION(func);
- PyObject *self = PyCFunction_GET_SELF(func);
- int flags = PyCFunction_GET_FLAGS(func);
- assert(PyCFunction_Check(func));
- assert(METH_FASTCALL == (flags & ~(METH_CLASS | METH_STATIC | METH_COEXIST | METH_KEYWORDS | METH_STACKLESS)));
- assert(nargs >= 0);
- assert(nargs == 0 || args != NULL);
- /* _PyCFunction_FastCallDict() must not be called with an exception set,
- because it may clear it (directly or indirectly) and so the
- caller loses its exception */
- assert(!PyErr_Occurred());
- if ((PY_VERSION_HEX < 0x030700A0) || unlikely(flags & METH_KEYWORDS)) {
- return (*((__Pyx_PyCFunctionFastWithKeywords)(void*)meth)) (self, args, nargs, NULL);
- } else {
- return (*((__Pyx_PyCFunctionFast)(void*)meth)) (self, args, nargs);
- }
-}
-#endif
-
-/* PyObjectCall */
-#if CYTHON_COMPILING_IN_CPYTHON
-static CYTHON_INLINE PyObject* __Pyx_PyObject_Call(PyObject *func, PyObject *arg, PyObject *kw) {
- PyObject *result;
- ternaryfunc call = Py_TYPE(func)->tp_call;
- if (unlikely(!call))
- return PyObject_Call(func, arg, kw);
- if (unlikely(Py_EnterRecursiveCall((char*)" while calling a Python object")))
- return NULL;
- result = (*call)(func, arg, kw);
- Py_LeaveRecursiveCall();
- if (unlikely(!result) && unlikely(!PyErr_Occurred())) {
- PyErr_SetString(
- PyExc_SystemError,
- "NULL result without error in PyObject_Call");
- }
- return result;
-}
-#endif
-
-/* PyObjectSetAttrStr */
-#if CYTHON_USE_TYPE_SLOTS
-static CYTHON_INLINE int __Pyx_PyObject_SetAttrStr(PyObject* obj, PyObject* attr_name, PyObject* value) {
- PyTypeObject* tp = Py_TYPE(obj);
- if (likely(tp->tp_setattro))
- return tp->tp_setattro(obj, attr_name, value);
-#if PY_MAJOR_VERSION < 3
- if (likely(tp->tp_setattr))
- return tp->tp_setattr(obj, PyString_AS_STRING(attr_name), value);
-#endif
- return PyObject_SetAttr(obj, attr_name, value);
-}
-#endif
-
-/* PyObjectCallMethO */
-#if CYTHON_COMPILING_IN_CPYTHON
-static CYTHON_INLINE PyObject* __Pyx_PyObject_CallMethO(PyObject *func, PyObject *arg) {
- PyObject *self, *result;
- PyCFunction cfunc;
- cfunc = PyCFunction_GET_FUNCTION(func);
- self = PyCFunction_GET_SELF(func);
- if (unlikely(Py_EnterRecursiveCall((char*)" while calling a Python object")))
- return NULL;
- result = cfunc(self, arg);
- Py_LeaveRecursiveCall();
- if (unlikely(!result) && unlikely(!PyErr_Occurred())) {
- PyErr_SetString(
- PyExc_SystemError,
- "NULL result without error in PyObject_Call");
- }
- return result;
-}
-#endif
-
-/* PyObjectCallNoArg */
-#if CYTHON_COMPILING_IN_CPYTHON
-static CYTHON_INLINE PyObject* __Pyx_PyObject_CallNoArg(PyObject *func) {
-#if CYTHON_FAST_PYCALL
- if (PyFunction_Check(func)) {
- return __Pyx_PyFunction_FastCall(func, NULL, 0);
- }
-#endif
-#if defined(__Pyx_CyFunction_USED) && defined(NDEBUG)
- if (likely(PyCFunction_Check(func) || __Pyx_CyFunction_Check(func)))
-#else
- if (likely(PyCFunction_Check(func)))
-#endif
- {
- if (likely(PyCFunction_GET_FLAGS(func) & METH_NOARGS)) {
- return __Pyx_PyObject_CallMethO(func, NULL);
- }
- }
- return __Pyx_PyObject_Call(func, __pyx_empty_tuple, NULL);
-}
-#endif
-
-/* PyObjectCallOneArg */
-#if CYTHON_COMPILING_IN_CPYTHON
-static PyObject* __Pyx__PyObject_CallOneArg(PyObject *func, PyObject *arg) {
- PyObject *result;
- PyObject *args = PyTuple_New(1);
- if (unlikely(!args)) return NULL;
- Py_INCREF(arg);
- PyTuple_SET_ITEM(args, 0, arg);
- result = __Pyx_PyObject_Call(func, args, NULL);
- Py_DECREF(args);
- return result;
-}
-static CYTHON_INLINE PyObject* __Pyx_PyObject_CallOneArg(PyObject *func, PyObject *arg) {
-#if CYTHON_FAST_PYCALL
- if (PyFunction_Check(func)) {
- return __Pyx_PyFunction_FastCall(func, &arg, 1);
- }
-#endif
- if (likely(PyCFunction_Check(func))) {
- if (likely(PyCFunction_GET_FLAGS(func) & METH_O)) {
- return __Pyx_PyObject_CallMethO(func, arg);
-#if CYTHON_FAST_PYCCALL
- } else if (__Pyx_PyFastCFunction_Check(func)) {
- return __Pyx_PyCFunction_FastCall(func, &arg, 1);
-#endif
- }
- }
- return __Pyx__PyObject_CallOneArg(func, arg);
-}
-#else
-static CYTHON_INLINE PyObject* __Pyx_PyObject_CallOneArg(PyObject *func, PyObject *arg) {
- PyObject *result;
- PyObject *args = PyTuple_Pack(1, arg);
- if (unlikely(!args)) return NULL;
- result = __Pyx_PyObject_Call(func, args, NULL);
- Py_DECREF(args);
- return result;
-}
-#endif
-
-/* PyObjectCall2Args */
-static CYTHON_UNUSED PyObject* __Pyx_PyObject_Call2Args(PyObject* function, PyObject* arg1, PyObject* arg2) {
- PyObject *args, *result = NULL;
- #if CYTHON_FAST_PYCALL
- if (PyFunction_Check(function)) {
- PyObject *args[2] = {arg1, arg2};
- return __Pyx_PyFunction_FastCall(function, args, 2);
- }
- #endif
- #if CYTHON_FAST_PYCCALL
- if (__Pyx_PyFastCFunction_Check(function)) {
- PyObject *args[2] = {arg1, arg2};
- return __Pyx_PyCFunction_FastCall(function, args, 2);
- }
- #endif
- args = PyTuple_New(2);
- if (unlikely(!args)) goto done;
- Py_INCREF(arg1);
- PyTuple_SET_ITEM(args, 0, arg1);
- Py_INCREF(arg2);
- PyTuple_SET_ITEM(args, 1, arg2);
- Py_INCREF(function);
- result = __Pyx_PyObject_Call(function, args, NULL);
- Py_DECREF(args);
- Py_DECREF(function);
-done:
- return result;
-}
-
-/* PyErrFetchRestore */
-#if CYTHON_FAST_THREAD_STATE
-static CYTHON_INLINE void __Pyx_ErrRestoreInState(PyThreadState *tstate, PyObject *type, PyObject *value, PyObject *tb) {
- PyObject *tmp_type, *tmp_value, *tmp_tb;
- tmp_type = tstate->curexc_type;
- tmp_value = tstate->curexc_value;
- tmp_tb = tstate->curexc_traceback;
- tstate->curexc_type = type;
- tstate->curexc_value = value;
- tstate->curexc_traceback = tb;
- Py_XDECREF(tmp_type);
- Py_XDECREF(tmp_value);
- Py_XDECREF(tmp_tb);
-}
-static CYTHON_INLINE void __Pyx_ErrFetchInState(PyThreadState *tstate, PyObject **type, PyObject **value, PyObject **tb) {
- *type = tstate->curexc_type;
- *value = tstate->curexc_value;
- *tb = tstate->curexc_traceback;
- tstate->curexc_type = 0;
- tstate->curexc_value = 0;
- tstate->curexc_traceback = 0;
-}
-#endif
-
-/* RaiseException */
-#if PY_MAJOR_VERSION < 3
-static void __Pyx_Raise(PyObject *type, PyObject *value, PyObject *tb,
- CYTHON_UNUSED PyObject *cause) {
- __Pyx_PyThreadState_declare
- Py_XINCREF(type);
- if (!value || value == Py_None)
- value = NULL;
- else
- Py_INCREF(value);
- if (!tb || tb == Py_None)
- tb = NULL;
- else {
- Py_INCREF(tb);
- if (!PyTraceBack_Check(tb)) {
- PyErr_SetString(PyExc_TypeError,
- "raise: arg 3 must be a traceback or None");
- goto raise_error;
- }
- }
- if (PyType_Check(type)) {
-#if CYTHON_COMPILING_IN_PYPY
- if (!value) {
- Py_INCREF(Py_None);
- value = Py_None;
- }
-#endif
- PyErr_NormalizeException(&type, &value, &tb);
- } else {
- if (value) {
- PyErr_SetString(PyExc_TypeError,
- "instance exception may not have a separate value");
- goto raise_error;
- }
- value = type;
- type = (PyObject*) Py_TYPE(type);
- Py_INCREF(type);
- if (!PyType_IsSubtype((PyTypeObject *)type, (PyTypeObject *)PyExc_BaseException)) {
- PyErr_SetString(PyExc_TypeError,
- "raise: exception class must be a subclass of BaseException");
- goto raise_error;
- }
- }
- __Pyx_PyThreadState_assign
- __Pyx_ErrRestore(type, value, tb);
- return;
-raise_error:
- Py_XDECREF(value);
- Py_XDECREF(type);
- Py_XDECREF(tb);
- return;
-}
-#else
-static void __Pyx_Raise(PyObject *type, PyObject *value, PyObject *tb, PyObject *cause) {
- PyObject* owned_instance = NULL;
- if (tb == Py_None) {
- tb = 0;
- } else if (tb && !PyTraceBack_Check(tb)) {
- PyErr_SetString(PyExc_TypeError,
- "raise: arg 3 must be a traceback or None");
- goto bad;
- }
- if (value == Py_None)
- value = 0;
- if (PyExceptionInstance_Check(type)) {
- if (value) {
- PyErr_SetString(PyExc_TypeError,
- "instance exception may not have a separate value");
- goto bad;
- }
- value = type;
- type = (PyObject*) Py_TYPE(value);
- } else if (PyExceptionClass_Check(type)) {
- PyObject *instance_class = NULL;
- if (value && PyExceptionInstance_Check(value)) {
- instance_class = (PyObject*) Py_TYPE(value);
- if (instance_class != type) {
- int is_subclass = PyObject_IsSubclass(instance_class, type);
- if (!is_subclass) {
- instance_class = NULL;
- } else if (unlikely(is_subclass == -1)) {
- goto bad;
- } else {
- type = instance_class;
- }
- }
- }
- if (!instance_class) {
- PyObject *args;
- if (!value)
- args = PyTuple_New(0);
- else if (PyTuple_Check(value)) {
- Py_INCREF(value);
- args = value;
- } else
- args = PyTuple_Pack(1, value);
- if (!args)
- goto bad;
- owned_instance = PyObject_Call(type, args, NULL);
- Py_DECREF(args);
- if (!owned_instance)
- goto bad;
- value = owned_instance;
- if (!PyExceptionInstance_Check(value)) {
- PyErr_Format(PyExc_TypeError,
- "calling %R should have returned an instance of "
- "BaseException, not %R",
- type, Py_TYPE(value));
- goto bad;
- }
- }
- } else {
- PyErr_SetString(PyExc_TypeError,
- "raise: exception class must be a subclass of BaseException");
- goto bad;
- }
- if (cause) {
- PyObject *fixed_cause;
- if (cause == Py_None) {
- fixed_cause = NULL;
- } else if (PyExceptionClass_Check(cause)) {
- fixed_cause = PyObject_CallObject(cause, NULL);
- if (fixed_cause == NULL)
- goto bad;
- } else if (PyExceptionInstance_Check(cause)) {
- fixed_cause = cause;
- Py_INCREF(fixed_cause);
- } else {
- PyErr_SetString(PyExc_TypeError,
- "exception causes must derive from "
- "BaseException");
- goto bad;
- }
- PyException_SetCause(value, fixed_cause);
- }
- PyErr_SetObject(type, value);
- if (tb) {
-#if CYTHON_FAST_THREAD_STATE
- PyThreadState *tstate = __Pyx_PyThreadState_Current;
- PyObject* tmp_tb = tstate->curexc_traceback;
- if (tb != tmp_tb) {
- Py_INCREF(tb);
- tstate->curexc_traceback = tb;
- Py_XDECREF(tmp_tb);
- }
-#else
- PyObject *tmp_type, *tmp_value, *tmp_tb;
- PyErr_Fetch(&tmp_type, &tmp_value, &tmp_tb);
- Py_INCREF(tb);
- PyErr_Restore(tmp_type, tmp_value, tb);
- Py_XDECREF(tmp_tb);
-#endif
- }
-bad:
- Py_XDECREF(owned_instance);
- return;
-}
-#endif
-
-/* RaiseTooManyValuesToUnpack */
-static CYTHON_INLINE void __Pyx_RaiseTooManyValuesError(Py_ssize_t expected) {
- PyErr_Format(PyExc_ValueError,
- "too many values to unpack (expected %" CYTHON_FORMAT_SSIZE_T "d)", expected);
-}
-
-/* RaiseNeedMoreValuesToUnpack */
-static CYTHON_INLINE void __Pyx_RaiseNeedMoreValuesError(Py_ssize_t index) {
- PyErr_Format(PyExc_ValueError,
- "need more than %" CYTHON_FORMAT_SSIZE_T "d value%.1s to unpack",
- index, (index == 1) ? "" : "s");
-}
-
-/* IterFinish */
-static CYTHON_INLINE int __Pyx_IterFinish(void) {
-#if CYTHON_FAST_THREAD_STATE
- PyThreadState *tstate = __Pyx_PyThreadState_Current;
- PyObject* exc_type = tstate->curexc_type;
- if (unlikely(exc_type)) {
- if (likely(__Pyx_PyErr_GivenExceptionMatches(exc_type, PyExc_StopIteration))) {
- PyObject *exc_value, *exc_tb;
- exc_value = tstate->curexc_value;
- exc_tb = tstate->curexc_traceback;
- tstate->curexc_type = 0;
- tstate->curexc_value = 0;
- tstate->curexc_traceback = 0;
- Py_DECREF(exc_type);
- Py_XDECREF(exc_value);
- Py_XDECREF(exc_tb);
- return 0;
- } else {
- return -1;
- }
- }
- return 0;
-#else
- if (unlikely(PyErr_Occurred())) {
- if (likely(PyErr_ExceptionMatches(PyExc_StopIteration))) {
- PyErr_Clear();
- return 0;
- } else {
- return -1;
- }
- }
- return 0;
-#endif
-}
-
-/* UnpackItemEndCheck */
-static int __Pyx_IternextUnpackEndCheck(PyObject *retval, Py_ssize_t expected) {
- if (unlikely(retval)) {
- Py_DECREF(retval);
- __Pyx_RaiseTooManyValuesError(expected);
- return -1;
- }
- return __Pyx_IterFinish();
-}
-
-/* Import */
-static PyObject *__Pyx_Import(PyObject *name, PyObject *from_list, int level) {
- PyObject *empty_list = 0;
- PyObject *module = 0;
- PyObject *global_dict = 0;
- PyObject *empty_dict = 0;
- PyObject *list;
- #if PY_MAJOR_VERSION < 3
- PyObject *py_import;
- py_import = __Pyx_PyObject_GetAttrStr(__pyx_b, __pyx_n_s_import);
- if (!py_import)
- goto bad;
- #endif
- if (from_list)
- list = from_list;
- else {
- empty_list = PyList_New(0);
- if (!empty_list)
- goto bad;
- list = empty_list;
- }
- global_dict = PyModule_GetDict(__pyx_m);
- if (!global_dict)
- goto bad;
- empty_dict = PyDict_New();
- if (!empty_dict)
- goto bad;
- {
- #if PY_MAJOR_VERSION >= 3
- if (level == -1) {
- if ((1) && (strchr(__Pyx_MODULE_NAME, '.'))) {
- module = PyImport_ImportModuleLevelObject(
- name, global_dict, empty_dict, list, 1);
- if (!module) {
- if (!PyErr_ExceptionMatches(PyExc_ImportError))
- goto bad;
- PyErr_Clear();
- }
- }
- level = 0;
- }
- #endif
- if (!module) {
- #if PY_MAJOR_VERSION < 3
- PyObject *py_level = PyInt_FromLong(level);
- if (!py_level)
- goto bad;
- module = PyObject_CallFunctionObjArgs(py_import,
- name, global_dict, empty_dict, list, py_level, (PyObject *)NULL);
- Py_DECREF(py_level);
- #else
- module = PyImport_ImportModuleLevelObject(
- name, global_dict, empty_dict, list, level);
- #endif
- }
- }
-bad:
- #if PY_MAJOR_VERSION < 3
- Py_XDECREF(py_import);
- #endif
- Py_XDECREF(empty_list);
- Py_XDECREF(empty_dict);
- return module;
-}
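-
-/* On Python 3, level == -1 emulates Python 2's relative-then-absolute import
- * semantics: try a package-relative import first (level 1), swallow the
- * ImportError, then retry as an absolute import (level 0). */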
-
-/* ImportFrom */
-static PyObject* __Pyx_ImportFrom(PyObject* module, PyObject* name) {
- PyObject* value = __Pyx_PyObject_GetAttrStr(module, name);
- if (unlikely(!value) && PyErr_ExceptionMatches(PyExc_AttributeError)) {
- PyErr_Format(PyExc_ImportError,
- #if PY_MAJOR_VERSION < 3
- "cannot import name %.230s", PyString_AS_STRING(name));
- #else
- "cannot import name %S", name);
- #endif
- }
- return value;
-}
-
-/* GetTopmostException */
-#if CYTHON_USE_EXC_INFO_STACK
-static _PyErr_StackItem *
-__Pyx_PyErr_GetTopmostException(PyThreadState *tstate)
-{
- _PyErr_StackItem *exc_info = tstate->exc_info;
- while ((exc_info->exc_type == NULL || exc_info->exc_type == Py_None) &&
- exc_info->previous_item != NULL)
- {
- exc_info = exc_info->previous_item;
- }
- return exc_info;
-}
-#endif
-
-/* SaveResetException */
-#if CYTHON_FAST_THREAD_STATE
-static CYTHON_INLINE void __Pyx__ExceptionSave(PyThreadState *tstate, PyObject **type, PyObject **value, PyObject **tb) {
- #if CYTHON_USE_EXC_INFO_STACK
- _PyErr_StackItem *exc_info = __Pyx_PyErr_GetTopmostException(tstate);
- *type = exc_info->exc_type;
- *value = exc_info->exc_value;
- *tb = exc_info->exc_traceback;
- #else
- *type = tstate->exc_type;
- *value = tstate->exc_value;
- *tb = tstate->exc_traceback;
- #endif
- Py_XINCREF(*type);
- Py_XINCREF(*value);
- Py_XINCREF(*tb);
-}
-static CYTHON_INLINE void __Pyx__ExceptionReset(PyThreadState *tstate, PyObject *type, PyObject *value, PyObject *tb) {
- PyObject *tmp_type, *tmp_value, *tmp_tb;
- #if CYTHON_USE_EXC_INFO_STACK
- _PyErr_StackItem *exc_info = tstate->exc_info;
- tmp_type = exc_info->exc_type;
- tmp_value = exc_info->exc_value;
- tmp_tb = exc_info->exc_traceback;
- exc_info->exc_type = type;
- exc_info->exc_value = value;
- exc_info->exc_traceback = tb;
- #else
- tmp_type = tstate->exc_type;
- tmp_value = tstate->exc_value;
- tmp_tb = tstate->exc_traceback;
- tstate->exc_type = type;
- tstate->exc_value = value;
- tstate->exc_traceback = tb;
- #endif
- Py_XDECREF(tmp_type);
- Py_XDECREF(tmp_value);
- Py_XDECREF(tmp_tb);
-}
-#endif
-
-/* PyErrExceptionMatches */
-#if CYTHON_FAST_THREAD_STATE
-static int __Pyx_PyErr_ExceptionMatchesTuple(PyObject *exc_type, PyObject *tuple) {
- Py_ssize_t i, n;
- n = PyTuple_GET_SIZE(tuple);
-#if PY_MAJOR_VERSION >= 3
- for (i=0; i<n; i++) {
- if (exc_type == PyTuple_GET_ITEM(tuple, i)) return 1;
- }
-#endif
- for (i=0; i<n; i++) {
- if (__Pyx_PyErr_GivenExceptionMatches(exc_type, PyTuple_GET_ITEM(tuple, i))) return 1;
- }
- return 0;
-}
-static CYTHON_INLINE int __Pyx_PyErr_ExceptionMatchesInState(PyThreadState* tstate, PyObject* err) {
- PyObject *exc_type = tstate->curexc_type;
- if (exc_type == err) return 1;
- if (unlikely(!exc_type)) return 0;
- if (unlikely(PyTuple_Check(err)))
- return __Pyx_PyErr_ExceptionMatchesTuple(exc_type, err);
- return __Pyx_PyErr_GivenExceptionMatches(exc_type, err);
-}
-#endif
-
-/* GetException */
-#if CYTHON_FAST_THREAD_STATE
-static int __Pyx__GetException(PyThreadState *tstate, PyObject **type, PyObject **value, PyObject **tb)
-#else
-static int __Pyx_GetException(PyObject **type, PyObject **value, PyObject **tb)
-#endif
-{
- PyObject *local_type, *local_value, *local_tb;
-#if CYTHON_FAST_THREAD_STATE
- PyObject *tmp_type, *tmp_value, *tmp_tb;
- local_type = tstate->curexc_type;
- local_value = tstate->curexc_value;
- local_tb = tstate->curexc_traceback;
- tstate->curexc_type = 0;
- tstate->curexc_value = 0;
- tstate->curexc_traceback = 0;
-#else
- PyErr_Fetch(&local_type, &local_value, &local_tb);
-#endif
- PyErr_NormalizeException(&local_type, &local_value, &local_tb);
-#if CYTHON_FAST_THREAD_STATE
- if (unlikely(tstate->curexc_type))
-#else
- if (unlikely(PyErr_Occurred()))
-#endif
- goto bad;
- #if PY_MAJOR_VERSION >= 3
- if (local_tb) {
- if (unlikely(PyException_SetTraceback(local_value, local_tb) < 0))
- goto bad;
- }
- #endif
- Py_XINCREF(local_tb);
- Py_XINCREF(local_type);
- Py_XINCREF(local_value);
- *type = local_type;
- *value = local_value;
- *tb = local_tb;
-#if CYTHON_FAST_THREAD_STATE
- #if CYTHON_USE_EXC_INFO_STACK
- {
- _PyErr_StackItem *exc_info = tstate->exc_info;
- tmp_type = exc_info->exc_type;
- tmp_value = exc_info->exc_value;
- tmp_tb = exc_info->exc_traceback;
- exc_info->exc_type = local_type;
- exc_info->exc_value = local_value;
- exc_info->exc_traceback = local_tb;
- }
- #else
- tmp_type = tstate->exc_type;
- tmp_value = tstate->exc_value;
- tmp_tb = tstate->exc_traceback;
- tstate->exc_type = local_type;
- tstate->exc_value = local_value;
- tstate->exc_traceback = local_tb;
- #endif
- Py_XDECREF(tmp_type);
- Py_XDECREF(tmp_value);
- Py_XDECREF(tmp_tb);
-#else
- PyErr_SetExcInfo(local_type, local_value, local_tb);
-#endif
- return 0;
-bad:
- *type = 0;
- *value = 0;
- *tb = 0;
- Py_XDECREF(local_type);
- Py_XDECREF(local_value);
- Py_XDECREF(local_tb);
- return -1;
-}
-
-/* CalculateMetaclass */
-static PyObject *__Pyx_CalculateMetaclass(PyTypeObject *metaclass, PyObject *bases) {
- Py_ssize_t i, nbases = PyTuple_GET_SIZE(bases);
- for (i=0; i < nbases; i++) {
- PyTypeObject *tmptype;
- PyObject *tmp = PyTuple_GET_ITEM(bases, i);
- tmptype = Py_TYPE(tmp);
-#if PY_MAJOR_VERSION < 3
- if (tmptype == &PyClass_Type)
- continue;
-#endif
- if (!metaclass) {
- metaclass = tmptype;
- continue;
- }
- if (PyType_IsSubtype(metaclass, tmptype))
- continue;
- if (PyType_IsSubtype(tmptype, metaclass)) {
- metaclass = tmptype;
- continue;
- }
- PyErr_SetString(PyExc_TypeError,
- "metaclass conflict: "
- "the metaclass of a derived class "
- "must be a (non-strict) subclass "
- "of the metaclasses of all its bases");
- return NULL;
- }
- if (!metaclass) {
-#if PY_MAJOR_VERSION < 3
- metaclass = &PyClass_Type;
-#else
- metaclass = &PyType_Type;
-#endif
- }
- Py_INCREF((PyObject*) metaclass);
- return (PyObject*) metaclass;
-}
-
-/* FetchCommonType */
-static PyTypeObject* __Pyx_FetchCommonType(PyTypeObject* type) {
- PyObject* fake_module;
- PyTypeObject* cached_type = NULL;
- fake_module = PyImport_AddModule((char*) "_cython_" CYTHON_ABI);
- if (!fake_module) return NULL;
- Py_INCREF(fake_module);
- cached_type = (PyTypeObject*) PyObject_GetAttrString(fake_module, type->tp_name);
- if (cached_type) {
- if (!PyType_Check((PyObject*)cached_type)) {
- PyErr_Format(PyExc_TypeError,
- "Shared Cython type %.200s is not a type object",
- type->tp_name);
- goto bad;
- }
- if (cached_type->tp_basicsize != type->tp_basicsize) {
- PyErr_Format(PyExc_TypeError,
- "Shared Cython type %.200s has the wrong size, try recompiling",
- type->tp_name);
- goto bad;
- }
- } else {
- if (!PyErr_ExceptionMatches(PyExc_AttributeError)) goto bad;
- PyErr_Clear();
- if (PyType_Ready(type) < 0) goto bad;
- if (PyObject_SetAttrString(fake_module, type->tp_name, (PyObject*) type) < 0)
- goto bad;
- Py_INCREF(type);
- cached_type = type;
- }
-done:
- Py_DECREF(fake_module);
- return cached_type;
-bad:
- Py_XDECREF(cached_type);
- cached_type = NULL;
- goto done;
-}
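-
-/* __Pyx_FetchCommonType interns shared helper types in a fake module named
- * "_cython_" CYTHON_ABI, so multiple Cython-compiled modules built with the
- * same ABI reuse a single type object (tp_basicsize is checked to match). */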
-
-/* CythonFunctionShared */
-#include <structmember.h>
-static PyObject *
-__Pyx_CyFunction_get_doc(__pyx_CyFunctionObject *op, CYTHON_UNUSED void *closure)
-{
- if (unlikely(op->func_doc == NULL)) {
- if (op->func.m_ml->ml_doc) {
-#if PY_MAJOR_VERSION >= 3
- op->func_doc = PyUnicode_FromString(op->func.m_ml->ml_doc);
-#else
- op->func_doc = PyString_FromString(op->func.m_ml->ml_doc);
-#endif
- if (unlikely(op->func_doc == NULL))
- return NULL;
- } else {
- Py_INCREF(Py_None);
- return Py_None;
- }
- }
- Py_INCREF(op->func_doc);
- return op->func_doc;
-}
-static int
-__Pyx_CyFunction_set_doc(__pyx_CyFunctionObject *op, PyObject *value, CYTHON_UNUSED void *context)
-{
- PyObject *tmp = op->func_doc;
- if (value == NULL) {
- value = Py_None;
- }
- Py_INCREF(value);
- op->func_doc = value;
- Py_XDECREF(tmp);
- return 0;
-}
-static PyObject *
-__Pyx_CyFunction_get_name(__pyx_CyFunctionObject *op, CYTHON_UNUSED void *context)
-{
- if (unlikely(op->func_name == NULL)) {
-#if PY_MAJOR_VERSION >= 3
- op->func_name = PyUnicode_InternFromString(op->func.m_ml->ml_name);
-#else
- op->func_name = PyString_InternFromString(op->func.m_ml->ml_name);
-#endif
- if (unlikely(op->func_name == NULL))
- return NULL;
- }
- Py_INCREF(op->func_name);
- return op->func_name;
-}
-static int
-__Pyx_CyFunction_set_name(__pyx_CyFunctionObject *op, PyObject *value, CYTHON_UNUSED void *context)
-{
- PyObject *tmp;
-#if PY_MAJOR_VERSION >= 3
- if (unlikely(value == NULL || !PyUnicode_Check(value)))
-#else
- if (unlikely(value == NULL || !PyString_Check(value)))
-#endif
- {
- PyErr_SetString(PyExc_TypeError,
- "__name__ must be set to a string object");
- return -1;
- }
- tmp = op->func_name;
- Py_INCREF(value);
- op->func_name = value;
- Py_XDECREF(tmp);
- return 0;
-}
-static PyObject *
-__Pyx_CyFunction_get_qualname(__pyx_CyFunctionObject *op, CYTHON_UNUSED void *context)
-{
- Py_INCREF(op->func_qualname);
- return op->func_qualname;
-}
-static int
-__Pyx_CyFunction_set_qualname(__pyx_CyFunctionObject *op, PyObject *value, CYTHON_UNUSED void *context)
-{
- PyObject *tmp;
-#if PY_MAJOR_VERSION >= 3
- if (unlikely(value == NULL || !PyUnicode_Check(value)))
-#else
- if (unlikely(value == NULL || !PyString_Check(value)))
-#endif
- {
- PyErr_SetString(PyExc_TypeError,
- "__qualname__ must be set to a string object");
- return -1;
- }
- tmp = op->func_qualname;
- Py_INCREF(value);
- op->func_qualname = value;
- Py_XDECREF(tmp);
- return 0;
-}
-static PyObject *
-__Pyx_CyFunction_get_self(__pyx_CyFunctionObject *m, CYTHON_UNUSED void *closure)
-{
- PyObject *self;
- self = m->func_closure;
- if (self == NULL)
- self = Py_None;
- Py_INCREF(self);
- return self;
-}
-static PyObject *
-__Pyx_CyFunction_get_dict(__pyx_CyFunctionObject *op, CYTHON_UNUSED void *context)
-{
- if (unlikely(op->func_dict == NULL)) {
- op->func_dict = PyDict_New();
- if (unlikely(op->func_dict == NULL))
- return NULL;
- }
- Py_INCREF(op->func_dict);
- return op->func_dict;
-}
-static int
-__Pyx_CyFunction_set_dict(__pyx_CyFunctionObject *op, PyObject *value, CYTHON_UNUSED void *context)
-{
- PyObject *tmp;
- if (unlikely(value == NULL)) {
- PyErr_SetString(PyExc_TypeError,
- "function's dictionary may not be deleted");
- return -1;
- }
- if (unlikely(!PyDict_Check(value))) {
- PyErr_SetString(PyExc_TypeError,
- "setting function's dictionary to a non-dict");
- return -1;
- }
- tmp = op->func_dict;
- Py_INCREF(value);
- op->func_dict = value;
- Py_XDECREF(tmp);
- return 0;
-}
-static PyObject *
-__Pyx_CyFunction_get_globals(__pyx_CyFunctionObject *op, CYTHON_UNUSED void *context)
-{
- Py_INCREF(op->func_globals);
- return op->func_globals;
-}
-static PyObject *
-__Pyx_CyFunction_get_closure(CYTHON_UNUSED __pyx_CyFunctionObject *op, CYTHON_UNUSED void *context)
-{
- Py_INCREF(Py_None);
- return Py_None;
-}
-static PyObject *
-__Pyx_CyFunction_get_code(__pyx_CyFunctionObject *op, CYTHON_UNUSED void *context)
-{
- PyObject* result = (op->func_code) ? op->func_code : Py_None;
- Py_INCREF(result);
- return result;
-}
-static int
-__Pyx_CyFunction_init_defaults(__pyx_CyFunctionObject *op) {
- int result = 0;
- PyObject *res = op->defaults_getter((PyObject *) op);
- if (unlikely(!res))
- return -1;
- #if CYTHON_ASSUME_SAFE_MACROS && !CYTHON_AVOID_BORROWED_REFS
- op->defaults_tuple = PyTuple_GET_ITEM(res, 0);
- Py_INCREF(op->defaults_tuple);
- op->defaults_kwdict = PyTuple_GET_ITEM(res, 1);
- Py_INCREF(op->defaults_kwdict);
- #else
- op->defaults_tuple = PySequence_ITEM(res, 0);
- if (unlikely(!op->defaults_tuple)) result = -1;
- else {
- op->defaults_kwdict = PySequence_ITEM(res, 1);
- if (unlikely(!op->defaults_kwdict)) result = -1;
- }
- #endif
- Py_DECREF(res);
- return result;
-}
-static int
-__Pyx_CyFunction_set_defaults(__pyx_CyFunctionObject *op, PyObject* value, CYTHON_UNUSED void *context) {
- PyObject* tmp;
- if (!value) {
- value = Py_None;
- } else if (value != Py_None && !PyTuple_Check(value)) {
- PyErr_SetString(PyExc_TypeError,
- "__defaults__ must be set to a tuple object");
- return -1;
- }
- Py_INCREF(value);
- tmp = op->defaults_tuple;
- op->defaults_tuple = value;
- Py_XDECREF(tmp);
- return 0;
-}
-static PyObject *
-__Pyx_CyFunction_get_defaults(__pyx_CyFunctionObject *op, CYTHON_UNUSED void *context) {
- PyObject* result = op->defaults_tuple;
- if (unlikely(!result)) {
- if (op->defaults_getter) {
- if (__Pyx_CyFunction_init_defaults(op) < 0) return NULL;
- result = op->defaults_tuple;
- } else {
- result = Py_None;
- }
- }
- Py_INCREF(result);
- return result;
-}
-static int
-__Pyx_CyFunction_set_kwdefaults(__pyx_CyFunctionObject *op, PyObject* value, CYTHON_UNUSED void *context) {
- PyObject* tmp;
- if (!value) {
- value = Py_None;
- } else if (value != Py_None && !PyDict_Check(value)) {
- PyErr_SetString(PyExc_TypeError,
- "__kwdefaults__ must be set to a dict object");
- return -1;
- }
- Py_INCREF(value);
- tmp = op->defaults_kwdict;
- op->defaults_kwdict = value;
- Py_XDECREF(tmp);
- return 0;
-}
-static PyObject *
-__Pyx_CyFunction_get_kwdefaults(__pyx_CyFunctionObject *op, CYTHON_UNUSED void *context) {
- PyObject* result = op->defaults_kwdict;
- if (unlikely(!result)) {
- if (op->defaults_getter) {
- if (__Pyx_CyFunction_init_defaults(op) < 0) return NULL;
- result = op->defaults_kwdict;
- } else {
- result = Py_None;
- }
- }
- Py_INCREF(result);
- return result;
-}
-static int
-__Pyx_CyFunction_set_annotations(__pyx_CyFunctionObject *op, PyObject* value, CYTHON_UNUSED void *context) {
- PyObject* tmp;
- if (!value || value == Py_None) {
- value = NULL;
- } else if (!PyDict_Check(value)) {
- PyErr_SetString(PyExc_TypeError,
- "__annotations__ must be set to a dict object");
- return -1;
- }
- Py_XINCREF(value);
- tmp = op->func_annotations;
- op->func_annotations = value;
- Py_XDECREF(tmp);
- return 0;
-}
-static PyObject *
-__Pyx_CyFunction_get_annotations(__pyx_CyFunctionObject *op, CYTHON_UNUSED void *context) {
- PyObject* result = op->func_annotations;
- if (unlikely(!result)) {
- result = PyDict_New();
- if (unlikely(!result)) return NULL;
- op->func_annotations = result;
- }
- Py_INCREF(result);
- return result;
-}
-static PyGetSetDef __pyx_CyFunction_getsets[] = {
- {(char *) "func_doc", (getter)__Pyx_CyFunction_get_doc, (setter)__Pyx_CyFunction_set_doc, 0, 0},
- {(char *) "__doc__", (getter)__Pyx_CyFunction_get_doc, (setter)__Pyx_CyFunction_set_doc, 0, 0},
- {(char *) "func_name", (getter)__Pyx_CyFunction_get_name, (setter)__Pyx_CyFunction_set_name, 0, 0},
- {(char *) "__name__", (getter)__Pyx_CyFunction_get_name, (setter)__Pyx_CyFunction_set_name, 0, 0},
- {(char *) "__qualname__", (getter)__Pyx_CyFunction_get_qualname, (setter)__Pyx_CyFunction_set_qualname, 0, 0},
- {(char *) "__self__", (getter)__Pyx_CyFunction_get_self, 0, 0, 0},
- {(char *) "func_dict", (getter)__Pyx_CyFunction_get_dict, (setter)__Pyx_CyFunction_set_dict, 0, 0},
- {(char *) "__dict__", (getter)__Pyx_CyFunction_get_dict, (setter)__Pyx_CyFunction_set_dict, 0, 0},
- {(char *) "func_globals", (getter)__Pyx_CyFunction_get_globals, 0, 0, 0},
- {(char *) "__globals__", (getter)__Pyx_CyFunction_get_globals, 0, 0, 0},
- {(char *) "func_closure", (getter)__Pyx_CyFunction_get_closure, 0, 0, 0},
- {(char *) "__closure__", (getter)__Pyx_CyFunction_get_closure, 0, 0, 0},
- {(char *) "func_code", (getter)__Pyx_CyFunction_get_code, 0, 0, 0},
- {(char *) "__code__", (getter)__Pyx_CyFunction_get_code, 0, 0, 0},
- {(char *) "func_defaults", (getter)__Pyx_CyFunction_get_defaults, (setter)__Pyx_CyFunction_set_defaults, 0, 0},
- {(char *) "__defaults__", (getter)__Pyx_CyFunction_get_defaults, (setter)__Pyx_CyFunction_set_defaults, 0, 0},
- {(char *) "__kwdefaults__", (getter)__Pyx_CyFunction_get_kwdefaults, (setter)__Pyx_CyFunction_set_kwdefaults, 0, 0},
- {(char *) "__annotations__", (getter)__Pyx_CyFunction_get_annotations, (setter)__Pyx_CyFunction_set_annotations, 0, 0},
- {0, 0, 0, 0, 0}
-};
-static PyMemberDef __pyx_CyFunction_members[] = {
- {(char *) "__module__", T_OBJECT, offsetof(PyCFunctionObject, m_module), PY_WRITE_RESTRICTED, 0},
- {0, 0, 0, 0, 0}
-};
-static PyObject *
-__Pyx_CyFunction_reduce(__pyx_CyFunctionObject *m, CYTHON_UNUSED PyObject *args)
-{
-#if PY_MAJOR_VERSION >= 3
- Py_INCREF(m->func_qualname);
- return m->func_qualname;
-#else
- return PyString_FromString(m->func.m_ml->ml_name);
-#endif
-}
-static PyMethodDef __pyx_CyFunction_methods[] = {
- {"__reduce__", (PyCFunction)__Pyx_CyFunction_reduce, METH_VARARGS, 0},
- {0, 0, 0, 0}
-};
-#if PY_VERSION_HEX < 0x030500A0
-#define __Pyx_CyFunction_weakreflist(cyfunc) ((cyfunc)->func_weakreflist)
-#else
-#define __Pyx_CyFunction_weakreflist(cyfunc) ((cyfunc)->func.m_weakreflist)
-#endif
-static PyObject *__Pyx_CyFunction_Init(__pyx_CyFunctionObject *op, PyMethodDef *ml, int flags, PyObject* qualname,
- PyObject *closure, PyObject *module, PyObject* globals, PyObject* code) {
- if (unlikely(op == NULL))
- return NULL;
- op->flags = flags;
- __Pyx_CyFunction_weakreflist(op) = NULL;
- op->func.m_ml = ml;
- op->func.m_self = (PyObject *) op;
- Py_XINCREF(closure);
- op->func_closure = closure;
- Py_XINCREF(module);
- op->func.m_module = module;
- op->func_dict = NULL;
- op->func_name = NULL;
- Py_INCREF(qualname);
- op->func_qualname = qualname;
- op->func_doc = NULL;
- op->func_classobj = NULL;
- op->func_globals = globals;
- Py_INCREF(op->func_globals);
- Py_XINCREF(code);
- op->func_code = code;
- op->defaults_pyobjects = 0;
- op->defaults_size = 0;
- op->defaults = NULL;
- op->defaults_tuple = NULL;
- op->defaults_kwdict = NULL;
- op->defaults_getter = NULL;
- op->func_annotations = NULL;
- return (PyObject *) op;
-}
-static int
-__Pyx_CyFunction_clear(__pyx_CyFunctionObject *m)
-{
- Py_CLEAR(m->func_closure);
- Py_CLEAR(m->func.m_module);
- Py_CLEAR(m->func_dict);
- Py_CLEAR(m->func_name);
- Py_CLEAR(m->func_qualname);
- Py_CLEAR(m->func_doc);
- Py_CLEAR(m->func_globals);
- Py_CLEAR(m->func_code);
- Py_CLEAR(m->func_classobj);
- Py_CLEAR(m->defaults_tuple);
- Py_CLEAR(m->defaults_kwdict);
- Py_CLEAR(m->func_annotations);
- if (m->defaults) {
- PyObject **pydefaults = __Pyx_CyFunction_Defaults(PyObject *, m);
- int i;
- for (i = 0; i < m->defaults_pyobjects; i++)
- Py_XDECREF(pydefaults[i]);
- PyObject_Free(m->defaults);
- m->defaults = NULL;
- }
- return 0;
-}
-static void __Pyx__CyFunction_dealloc(__pyx_CyFunctionObject *m)
-{
- if (__Pyx_CyFunction_weakreflist(m) != NULL)
- PyObject_ClearWeakRefs((PyObject *) m);
- __Pyx_CyFunction_clear(m);
- PyObject_GC_Del(m);
-}
-static void __Pyx_CyFunction_dealloc(__pyx_CyFunctionObject *m)
-{
- PyObject_GC_UnTrack(m);
- __Pyx__CyFunction_dealloc(m);
-}
-static int __Pyx_CyFunction_traverse(__pyx_CyFunctionObject *m, visitproc visit, void *arg)
-{
- Py_VISIT(m->func_closure);
- Py_VISIT(m->func.m_module);
- Py_VISIT(m->func_dict);
- Py_VISIT(m->func_name);
- Py_VISIT(m->func_qualname);
- Py_VISIT(m->func_doc);
- Py_VISIT(m->func_globals);
- Py_VISIT(m->func_code);
- Py_VISIT(m->func_classobj);
- Py_VISIT(m->defaults_tuple);
- Py_VISIT(m->defaults_kwdict);
- if (m->defaults) {
- PyObject **pydefaults = __Pyx_CyFunction_Defaults(PyObject *, m);
- int i;
- for (i = 0; i < m->defaults_pyobjects; i++)
- Py_VISIT(pydefaults[i]);
- }
- return 0;
-}
-static PyObject *__Pyx_CyFunction_descr_get(PyObject *func, PyObject *obj, PyObject *type)
-{
-#if PY_MAJOR_VERSION < 3
- __pyx_CyFunctionObject *m = (__pyx_CyFunctionObject *) func;
- if (m->flags & __Pyx_CYFUNCTION_STATICMETHOD) {
- Py_INCREF(func);
- return func;
- }
- if (m->flags & __Pyx_CYFUNCTION_CLASSMETHOD) {
- if (type == NULL)
- type = (PyObject *)(Py_TYPE(obj));
- return __Pyx_PyMethod_New(func, type, (PyObject *)(Py_TYPE(type)));
- }
- if (obj == Py_None)
- obj = NULL;
-#endif
- return __Pyx_PyMethod_New(func, obj, type);
-}
-static PyObject*
-__Pyx_CyFunction_repr(__pyx_CyFunctionObject *op)
-{
-#if PY_MAJOR_VERSION >= 3
- return PyUnicode_FromFormat("<cyfunction %U at %p>",
- op->func_qualname, (void *)op);
-#else
- return PyString_FromFormat("<cyfunction %s at %p>",
- PyString_AsString(op->func_qualname), (void *)op);
-#endif
-}
-static PyObject * __Pyx_CyFunction_CallMethod(PyObject *func, PyObject *self, PyObject *arg, PyObject *kw) {
- PyCFunctionObject* f = (PyCFunctionObject*)func;
- PyCFunction meth = f->m_ml->ml_meth;
- Py_ssize_t size;
- switch (f->m_ml->ml_flags & (METH_VARARGS | METH_KEYWORDS | METH_NOARGS | METH_O)) {
- case METH_VARARGS:
- if (likely(kw == NULL || PyDict_Size(kw) == 0))
- return (*meth)(self, arg);
- break;
- case METH_VARARGS | METH_KEYWORDS:
- return (*(PyCFunctionWithKeywords)(void*)meth)(self, arg, kw);
- case METH_NOARGS:
- if (likely(kw == NULL || PyDict_Size(kw) == 0)) {
- size = PyTuple_GET_SIZE(arg);
- if (likely(size == 0))
- return (*meth)(self, NULL);
- PyErr_Format(PyExc_TypeError,
- "%.200s() takes no arguments (%" CYTHON_FORMAT_SSIZE_T "d given)",
- f->m_ml->ml_name, size);
- return NULL;
- }
- break;
- case METH_O:
- if (likely(kw == NULL || PyDict_Size(kw) == 0)) {
- size = PyTuple_GET_SIZE(arg);
- if (likely(size == 1)) {
- PyObject *result, *arg0;
- #if CYTHON_ASSUME_SAFE_MACROS && !CYTHON_AVOID_BORROWED_REFS
- arg0 = PyTuple_GET_ITEM(arg, 0);
- #else
- arg0 = PySequence_ITEM(arg, 0); if (unlikely(!arg0)) return NULL;
- #endif
- result = (*meth)(self, arg0);
- #if !(CYTHON_ASSUME_SAFE_MACROS && !CYTHON_AVOID_BORROWED_REFS)
- Py_DECREF(arg0);
- #endif
- return result;
- }
- PyErr_Format(PyExc_TypeError,
- "%.200s() takes exactly one argument (%" CYTHON_FORMAT_SSIZE_T "d given)",
- f->m_ml->ml_name, size);
- return NULL;
- }
- break;
- default:
- PyErr_SetString(PyExc_SystemError, "Bad call flags in "
- "__Pyx_CyFunction_Call. METH_OLDARGS is no "
- "longer supported!");
- return NULL;
- }
- PyErr_Format(PyExc_TypeError, "%.200s() takes no keyword arguments",
- f->m_ml->ml_name);
- return NULL;
-}
-static CYTHON_INLINE PyObject *__Pyx_CyFunction_Call(PyObject *func, PyObject *arg, PyObject *kw) {
- return __Pyx_CyFunction_CallMethod(func, ((PyCFunctionObject*)func)->m_self, arg, kw);
-}
-static PyObject *__Pyx_CyFunction_CallAsMethod(PyObject *func, PyObject *args, PyObject *kw) {
- PyObject *result;
- __pyx_CyFunctionObject *cyfunc = (__pyx_CyFunctionObject *) func;
- if ((cyfunc->flags & __Pyx_CYFUNCTION_CCLASS) && !(cyfunc->flags & __Pyx_CYFUNCTION_STATICMETHOD)) {
- Py_ssize_t argc;
- PyObject *new_args;
- PyObject *self;
- argc = PyTuple_GET_SIZE(args);
- new_args = PyTuple_GetSlice(args, 1, argc);
- if (unlikely(!new_args))
- return NULL;
- self = PyTuple_GetItem(args, 0);
- if (unlikely(!self)) {
- Py_DECREF(new_args);
-#if PY_MAJOR_VERSION > 2
- PyErr_Format(PyExc_TypeError,
- "unbound method %.200S() needs an argument",
- cyfunc->func_qualname);
-#else
- PyErr_SetString(PyExc_TypeError,
- "unbound method needs an argument");
-#endif
- return NULL;
- }
- result = __Pyx_CyFunction_CallMethod(func, self, new_args, kw);
- Py_DECREF(new_args);
- } else {
- result = __Pyx_CyFunction_Call(func, args, kw);
- }
- return result;
-}
-static PyTypeObject __pyx_CyFunctionType_type = {
- PyVarObject_HEAD_INIT(0, 0)
- "cython_function_or_method",
- sizeof(__pyx_CyFunctionObject),
- 0,
- (destructor) __Pyx_CyFunction_dealloc,
- 0,
- 0,
- 0,
-#if PY_MAJOR_VERSION < 3
- 0,
-#else
- 0,
-#endif
- (reprfunc) __Pyx_CyFunction_repr,
- 0,
- 0,
- 0,
- 0,
- __Pyx_CyFunction_CallAsMethod,
- 0,
- 0,
- 0,
- 0,
- Py_TPFLAGS_DEFAULT | Py_TPFLAGS_HAVE_GC,
- 0,
- (traverseproc) __Pyx_CyFunction_traverse,
- (inquiry) __Pyx_CyFunction_clear,
- 0,
-#if PY_VERSION_HEX < 0x030500A0
- offsetof(__pyx_CyFunctionObject, func_weakreflist),
-#else
- offsetof(PyCFunctionObject, m_weakreflist),
-#endif
- 0,
- 0,
- __pyx_CyFunction_methods,
- __pyx_CyFunction_members,
- __pyx_CyFunction_getsets,
- 0,
- 0,
- __Pyx_CyFunction_descr_get,
- 0,
- offsetof(__pyx_CyFunctionObject, func_dict),
- 0,
- 0,
- 0,
- 0,
- 0,
- 0,
- 0,
- 0,
- 0,
- 0,
- 0,
- 0,
-#if PY_VERSION_HEX >= 0x030400a1
- 0,
-#endif
-#if PY_VERSION_HEX >= 0x030800b1 && (!CYTHON_COMPILING_IN_PYPY || PYPY_VERSION_NUM >= 0x07030800)
- 0,
-#endif
-#if PY_VERSION_HEX >= 0x030800b4 && PY_VERSION_HEX < 0x03090000
- 0,
-#endif
-#if PY_VERSION_HEX >= 0x030C0000
- 0,
-#endif
-#if CYTHON_COMPILING_IN_PYPY && PY_VERSION_HEX >= 0x03090000 && PY_VERSION_HEX < 0x030a0000
- 0,
-#endif
-};
-static int __pyx_CyFunction_init(void) {
- __pyx_CyFunctionType = __Pyx_FetchCommonType(&__pyx_CyFunctionType_type);
- if (unlikely(__pyx_CyFunctionType == NULL)) {
- return -1;
- }
- return 0;
-}
-static CYTHON_INLINE void *__Pyx_CyFunction_InitDefaults(PyObject *func, size_t size, int pyobjects) {
- __pyx_CyFunctionObject *m = (__pyx_CyFunctionObject *) func;
- m->defaults = PyObject_Malloc(size);
- if (unlikely(!m->defaults))
- return PyErr_NoMemory();
- memset(m->defaults, 0, size);
- m->defaults_pyobjects = pyobjects;
- m->defaults_size = size;
- return m->defaults;
-}
-static CYTHON_INLINE void __Pyx_CyFunction_SetDefaultsTuple(PyObject *func, PyObject *tuple) {
- __pyx_CyFunctionObject *m = (__pyx_CyFunctionObject *) func;
- m->defaults_tuple = tuple;
- Py_INCREF(tuple);
-}
-static CYTHON_INLINE void __Pyx_CyFunction_SetDefaultsKwDict(PyObject *func, PyObject *dict) {
- __pyx_CyFunctionObject *m = (__pyx_CyFunctionObject *) func;
- m->defaults_kwdict = dict;
- Py_INCREF(dict);
-}
-static CYTHON_INLINE void __Pyx_CyFunction_SetAnnotationsDict(PyObject *func, PyObject *dict) {
- __pyx_CyFunctionObject *m = (__pyx_CyFunctionObject *) func;
- m->func_annotations = dict;
- Py_INCREF(dict);
-}
-
-/* CythonFunction */
-static PyObject *__Pyx_CyFunction_New(PyMethodDef *ml, int flags, PyObject* qualname,
- PyObject *closure, PyObject *module, PyObject* globals, PyObject* code) {
- PyObject *op = __Pyx_CyFunction_Init(
- PyObject_GC_New(__pyx_CyFunctionObject, __pyx_CyFunctionType),
- ml, flags, qualname, closure, module, globals, code
- );
- if (likely(op)) {
- PyObject_GC_Track(op);
- }
- return op;
-}
-
-/* Py3ClassCreate */
-static PyObject *__Pyx_Py3MetaclassPrepare(PyObject *metaclass, PyObject *bases, PyObject *name,
- PyObject *qualname, PyObject *mkw, PyObject *modname, PyObject *doc) {
- PyObject *ns;
- if (metaclass) {
- PyObject *prep = __Pyx_PyObject_GetAttrStr(metaclass, __pyx_n_s_prepare);
- if (prep) {
- PyObject *pargs = PyTuple_Pack(2, name, bases);
- if (unlikely(!pargs)) {
- Py_DECREF(prep);
- return NULL;
- }
- ns = PyObject_Call(prep, pargs, mkw);
- Py_DECREF(prep);
- Py_DECREF(pargs);
- } else {
- if (unlikely(!PyErr_ExceptionMatches(PyExc_AttributeError)))
- return NULL;
- PyErr_Clear();
- ns = PyDict_New();
- }
- } else {
- ns = PyDict_New();
- }
- if (unlikely(!ns))
- return NULL;
- if (unlikely(PyObject_SetItem(ns, __pyx_n_s_module, modname) < 0)) goto bad;
- if (unlikely(PyObject_SetItem(ns, __pyx_n_s_qualname, qualname) < 0)) goto bad;
- if (unlikely(doc && PyObject_SetItem(ns, __pyx_n_s_doc, doc) < 0)) goto bad;
- return ns;
-bad:
- Py_DECREF(ns);
- return NULL;
-}
-static PyObject *__Pyx_Py3ClassCreate(PyObject *metaclass, PyObject *name, PyObject *bases,
- PyObject *dict, PyObject *mkw,
- int calculate_metaclass, int allow_py2_metaclass) {
- PyObject *result, *margs;
- PyObject *owned_metaclass = NULL;
- if (allow_py2_metaclass) {
- owned_metaclass = PyObject_GetItem(dict, __pyx_n_s_metaclass);
- if (owned_metaclass) {
- metaclass = owned_metaclass;
- } else if (likely(PyErr_ExceptionMatches(PyExc_KeyError))) {
- PyErr_Clear();
- } else {
- return NULL;
- }
- }
- if (calculate_metaclass && (!metaclass || PyType_Check(metaclass))) {
- metaclass = __Pyx_CalculateMetaclass((PyTypeObject*) metaclass, bases);
- Py_XDECREF(owned_metaclass);
- if (unlikely(!metaclass))
- return NULL;
- owned_metaclass = metaclass;
- }
- margs = PyTuple_Pack(3, name, bases, dict);
- if (unlikely(!margs)) {
- result = NULL;
- } else {
- result = PyObject_Call(metaclass, margs, mkw);
- Py_DECREF(margs);
- }
- Py_XDECREF(owned_metaclass);
- return result;
-}
-
-/* BytesEquals */
-static CYTHON_INLINE int __Pyx_PyBytes_Equals(PyObject* s1, PyObject* s2, int equals) {
-#if CYTHON_COMPILING_IN_PYPY
- return PyObject_RichCompareBool(s1, s2, equals);
-#else
- if (s1 == s2) {
- return (equals == Py_EQ);
- } else if (PyBytes_CheckExact(s1) & PyBytes_CheckExact(s2)) {
- const char *ps1, *ps2;
- Py_ssize_t length = PyBytes_GET_SIZE(s1);
- if (length != PyBytes_GET_SIZE(s2))
- return (equals == Py_NE);
- ps1 = PyBytes_AS_STRING(s1);
- ps2 = PyBytes_AS_STRING(s2);
- if (ps1[0] != ps2[0]) {
- return (equals == Py_NE);
- } else if (length == 1) {
- return (equals == Py_EQ);
- } else {
- int result;
-#if CYTHON_USE_UNICODE_INTERNALS && (PY_VERSION_HEX < 0x030B0000)
- Py_hash_t hash1, hash2;
- hash1 = ((PyBytesObject*)s1)->ob_shash;
- hash2 = ((PyBytesObject*)s2)->ob_shash;
- if (hash1 != hash2 && hash1 != -1 && hash2 != -1) {
- return (equals == Py_NE);
- }
-#endif
- result = memcmp(ps1, ps2, (size_t)length);
- return (equals == Py_EQ) ? (result == 0) : (result != 0);
- }
- } else if ((s1 == Py_None) & PyBytes_CheckExact(s2)) {
- return (equals == Py_NE);
- } else if ((s2 == Py_None) & PyBytes_CheckExact(s1)) {
- return (equals == Py_NE);
- } else {
- int result;
- PyObject* py_result = PyObject_RichCompare(s1, s2, equals);
- if (!py_result)
- return -1;
- result = __Pyx_PyObject_IsTrue(py_result);
- Py_DECREF(py_result);
- return result;
- }
-#endif
-}
-
-/* UnicodeEquals */
-static CYTHON_INLINE int __Pyx_PyUnicode_Equals(PyObject* s1, PyObject* s2, int equals) {
-#if CYTHON_COMPILING_IN_PYPY
- return PyObject_RichCompareBool(s1, s2, equals);
-#else
-#if PY_MAJOR_VERSION < 3
- PyObject* owned_ref = NULL;
-#endif
- int s1_is_unicode, s2_is_unicode;
- if (s1 == s2) {
- goto return_eq;
- }
- s1_is_unicode = PyUnicode_CheckExact(s1);
- s2_is_unicode = PyUnicode_CheckExact(s2);
-#if PY_MAJOR_VERSION < 3
- if ((s1_is_unicode & (!s2_is_unicode)) && PyString_CheckExact(s2)) {
- owned_ref = PyUnicode_FromObject(s2);
- if (unlikely(!owned_ref))
- return -1;
- s2 = owned_ref;
- s2_is_unicode = 1;
- } else if ((s2_is_unicode & (!s1_is_unicode)) && PyString_CheckExact(s1)) {
- owned_ref = PyUnicode_FromObject(s1);
- if (unlikely(!owned_ref))
- return -1;
- s1 = owned_ref;
- s1_is_unicode = 1;
- } else if (((!s2_is_unicode) & (!s1_is_unicode))) {
- return __Pyx_PyBytes_Equals(s1, s2, equals);
- }
-#endif
- if (s1_is_unicode & s2_is_unicode) {
- Py_ssize_t length;
- int kind;
- void *data1, *data2;
- if (unlikely(__Pyx_PyUnicode_READY(s1) < 0) || unlikely(__Pyx_PyUnicode_READY(s2) < 0))
- return -1;
- length = __Pyx_PyUnicode_GET_LENGTH(s1);
- if (length != __Pyx_PyUnicode_GET_LENGTH(s2)) {
- goto return_ne;
- }
-#if CYTHON_USE_UNICODE_INTERNALS
- {
- Py_hash_t hash1, hash2;
- #if CYTHON_PEP393_ENABLED
- hash1 = ((PyASCIIObject*)s1)->hash;
- hash2 = ((PyASCIIObject*)s2)->hash;
- #else
- hash1 = ((PyUnicodeObject*)s1)->hash;
- hash2 = ((PyUnicodeObject*)s2)->hash;
- #endif
- if (hash1 != hash2 && hash1 != -1 && hash2 != -1) {
- goto return_ne;
- }
- }
-#endif
- kind = __Pyx_PyUnicode_KIND(s1);
- if (kind != __Pyx_PyUnicode_KIND(s2)) {
- goto return_ne;
- }
- data1 = __Pyx_PyUnicode_DATA(s1);
- data2 = __Pyx_PyUnicode_DATA(s2);
- if (__Pyx_PyUnicode_READ(kind, data1, 0) != __Pyx_PyUnicode_READ(kind, data2, 0)) {
- goto return_ne;
- } else if (length == 1) {
- goto return_eq;
- } else {
- int result = memcmp(data1, data2, (size_t)(length * kind));
- #if PY_MAJOR_VERSION < 3
- Py_XDECREF(owned_ref);
- #endif
- return (equals == Py_EQ) ? (result == 0) : (result != 0);
- }
- } else if ((s1 == Py_None) & s2_is_unicode) {
- goto return_ne;
- } else if ((s2 == Py_None) & s1_is_unicode) {
- goto return_ne;
- } else {
- int result;
- PyObject* py_result = PyObject_RichCompare(s1, s2, equals);
- #if PY_MAJOR_VERSION < 3
- Py_XDECREF(owned_ref);
- #endif
- if (!py_result)
- return -1;
- result = __Pyx_PyObject_IsTrue(py_result);
- Py_DECREF(py_result);
- return result;
- }
-return_eq:
- #if PY_MAJOR_VERSION < 3
- Py_XDECREF(owned_ref);
- #endif
- return (equals == Py_EQ);
-return_ne:
- #if PY_MAJOR_VERSION < 3
- Py_XDECREF(owned_ref);
- #endif
- return (equals == Py_NE);
-#endif
-}
-
-/* CLineInTraceback */
-#ifndef CYTHON_CLINE_IN_TRACEBACK
-static int __Pyx_CLineForTraceback(CYTHON_UNUSED PyThreadState *tstate, int c_line) {
- PyObject *use_cline;
- PyObject *ptype, *pvalue, *ptraceback;
-#if CYTHON_COMPILING_IN_CPYTHON
- PyObject **cython_runtime_dict;
-#endif
- if (unlikely(!__pyx_cython_runtime)) {
- return c_line;
- }
- __Pyx_ErrFetchInState(tstate, &ptype, &pvalue, &ptraceback);
-#if CYTHON_COMPILING_IN_CPYTHON
- cython_runtime_dict = _PyObject_GetDictPtr(__pyx_cython_runtime);
- if (likely(cython_runtime_dict)) {
- __PYX_PY_DICT_LOOKUP_IF_MODIFIED(
- use_cline, *cython_runtime_dict,
- __Pyx_PyDict_GetItemStr(*cython_runtime_dict, __pyx_n_s_cline_in_traceback))
- } else
-#endif
- {
- PyObject *use_cline_obj = __Pyx_PyObject_GetAttrStr(__pyx_cython_runtime, __pyx_n_s_cline_in_traceback);
- if (use_cline_obj) {
- use_cline = PyObject_Not(use_cline_obj) ? Py_False : Py_True;
- Py_DECREF(use_cline_obj);
- } else {
- PyErr_Clear();
- use_cline = NULL;
- }
- }
- if (!use_cline) {
- c_line = 0;
- (void) PyObject_SetAttr(__pyx_cython_runtime, __pyx_n_s_cline_in_traceback, Py_False);
- }
- else if (use_cline == Py_False || (use_cline != Py_True && PyObject_Not(use_cline) != 0)) {
- c_line = 0;
- }
- __Pyx_ErrRestoreInState(tstate, ptype, pvalue, ptraceback);
- return c_line;
-}
-#endif
-
-/* CodeObjectCache */
-static int __pyx_bisect_code_objects(__Pyx_CodeObjectCacheEntry* entries, int count, int code_line) {
- int start = 0, mid = 0, end = count - 1;
- if (end >= 0 && code_line > entries[end].code_line) {
- return count;
- }
- while (start < end) {
- mid = start + (end - start) / 2;
- if (code_line < entries[mid].code_line) {
- end = mid;
- } else if (code_line > entries[mid].code_line) {
- start = mid + 1;
- } else {
- return mid;
- }
- }
- if (code_line <= entries[mid].code_line) {
- return mid;
- } else {
- return mid + 1;
- }
-}
-static PyCodeObject *__pyx_find_code_object(int code_line) {
- PyCodeObject* code_object;
- int pos;
- if (unlikely(!code_line) || unlikely(!__pyx_code_cache.entries)) {
- return NULL;
- }
- pos = __pyx_bisect_code_objects(__pyx_code_cache.entries, __pyx_code_cache.count, code_line);
- if (unlikely(pos >= __pyx_code_cache.count) || unlikely(__pyx_code_cache.entries[pos].code_line != code_line)) {
- return NULL;
- }
- code_object = __pyx_code_cache.entries[pos].code_object;
- Py_INCREF(code_object);
- return code_object;
-}
-static void __pyx_insert_code_object(int code_line, PyCodeObject* code_object) {
- int pos, i;
- __Pyx_CodeObjectCacheEntry* entries = __pyx_code_cache.entries;
- if (unlikely(!code_line)) {
- return;
- }
- if (unlikely(!entries)) {
- entries = (__Pyx_CodeObjectCacheEntry*)PyMem_Malloc(64*sizeof(__Pyx_CodeObjectCacheEntry));
- if (likely(entries)) {
- __pyx_code_cache.entries = entries;
- __pyx_code_cache.max_count = 64;
- __pyx_code_cache.count = 1;
- entries[0].code_line = code_line;
- entries[0].code_object = code_object;
- Py_INCREF(code_object);
- }
- return;
- }
- pos = __pyx_bisect_code_objects(__pyx_code_cache.entries, __pyx_code_cache.count, code_line);
- if ((pos < __pyx_code_cache.count) && unlikely(__pyx_code_cache.entries[pos].code_line == code_line)) {
- PyCodeObject* tmp = entries[pos].code_object;
- entries[pos].code_object = code_object;
- Py_DECREF(tmp);
- return;
- }
- if (__pyx_code_cache.count == __pyx_code_cache.max_count) {
- int new_max = __pyx_code_cache.max_count + 64;
- entries = (__Pyx_CodeObjectCacheEntry*)PyMem_Realloc(
- __pyx_code_cache.entries, ((size_t)new_max) * sizeof(__Pyx_CodeObjectCacheEntry));
- if (unlikely(!entries)) {
- return;
- }
- __pyx_code_cache.entries = entries;
- __pyx_code_cache.max_count = new_max;
- }
- for (i=__pyx_code_cache.count; i>pos; i--) {
- entries[i] = entries[i-1];
- }
- entries[pos].code_line = code_line;
- entries[pos].code_object = code_object;
- __pyx_code_cache.count++;
- Py_INCREF(code_object);
-}
-
-/* AddTraceback */
-#include "compile.h"
-#include "frameobject.h"
-#include "traceback.h"
-#if PY_VERSION_HEX >= 0x030b00a6
- #ifndef Py_BUILD_CORE
- #define Py_BUILD_CORE 1
- #endif
- #include "internal/pycore_frame.h"
-#endif
-static PyCodeObject* __Pyx_CreateCodeObjectForTraceback(
- const char *funcname, int c_line,
- int py_line, const char *filename) {
- PyCodeObject *py_code = NULL;
- PyObject *py_funcname = NULL;
- #if PY_MAJOR_VERSION < 3
- PyObject *py_srcfile = NULL;
- py_srcfile = PyString_FromString(filename);
- if (!py_srcfile) goto bad;
- #endif
- if (c_line) {
- #if PY_MAJOR_VERSION < 3
- py_funcname = PyString_FromFormat( "%s (%s:%d)", funcname, __pyx_cfilenm, c_line);
- if (!py_funcname) goto bad;
- #else
- py_funcname = PyUnicode_FromFormat( "%s (%s:%d)", funcname, __pyx_cfilenm, c_line);
- if (!py_funcname) goto bad;
- funcname = PyUnicode_AsUTF8(py_funcname);
- if (!funcname) goto bad;
- #endif
- }
- else {
- #if PY_MAJOR_VERSION < 3
- py_funcname = PyString_FromString(funcname);
- if (!py_funcname) goto bad;
- #endif
- }
- #if PY_MAJOR_VERSION < 3
- py_code = __Pyx_PyCode_New(
- 0,
- 0,
- 0,
- 0,
- 0,
- __pyx_empty_bytes, /*PyObject *code,*/
- __pyx_empty_tuple, /*PyObject *consts,*/
- __pyx_empty_tuple, /*PyObject *names,*/
- __pyx_empty_tuple, /*PyObject *varnames,*/
- __pyx_empty_tuple, /*PyObject *freevars,*/
- __pyx_empty_tuple, /*PyObject *cellvars,*/
- py_srcfile, /*PyObject *filename,*/
- py_funcname, /*PyObject *name,*/
- py_line,
- __pyx_empty_bytes /*PyObject *lnotab*/
- );
- Py_DECREF(py_srcfile);
- #else
- py_code = PyCode_NewEmpty(filename, funcname, py_line);
- #endif
- Py_XDECREF(py_funcname); // XDECREF since it's only set on Py3 if cline
- return py_code;
-bad:
- Py_XDECREF(py_funcname);
- #if PY_MAJOR_VERSION < 3
- Py_XDECREF(py_srcfile);
- #endif
- return NULL;
-}
-static void __Pyx_AddTraceback(const char *funcname, int c_line,
- int py_line, const char *filename) {
- PyCodeObject *py_code = 0;
- PyFrameObject *py_frame = 0;
- PyThreadState *tstate = __Pyx_PyThreadState_Current;
- PyObject *ptype, *pvalue, *ptraceback;
- if (c_line) {
- c_line = __Pyx_CLineForTraceback(tstate, c_line);
- }
- py_code = __pyx_find_code_object(c_line ? -c_line : py_line);
- if (!py_code) {
- __Pyx_ErrFetchInState(tstate, &ptype, &pvalue, &ptraceback);
- py_code = __Pyx_CreateCodeObjectForTraceback(
- funcname, c_line, py_line, filename);
- if (!py_code) {
- /* If the code object creation fails, then we should clear the
- fetched exception references and propagate the new exception */
- Py_XDECREF(ptype);
- Py_XDECREF(pvalue);
- Py_XDECREF(ptraceback);
- goto bad;
- }
- __Pyx_ErrRestoreInState(tstate, ptype, pvalue, ptraceback);
- __pyx_insert_code_object(c_line ? -c_line : py_line, py_code);
- }
- py_frame = PyFrame_New(
- tstate, /*PyThreadState *tstate,*/
- py_code, /*PyCodeObject *code,*/
- __pyx_d, /*PyObject *globals,*/
- 0 /*PyObject *locals*/
- );
- if (!py_frame) goto bad;
- __Pyx_PyFrame_SetLineNumber(py_frame, py_line);
- PyTraceBack_Here(py_frame);
-bad:
- Py_XDECREF(py_code);
- Py_XDECREF(py_frame);
-}
-
-/* CIntToPy */
-static CYTHON_INLINE PyObject* __Pyx_PyInt_From_long(long value) {
-#ifdef __Pyx_HAS_GCC_DIAGNOSTIC
-#pragma GCC diagnostic push
-#pragma GCC diagnostic ignored "-Wconversion"
-#endif
- const long neg_one = (long) -1, const_zero = (long) 0;
-#ifdef __Pyx_HAS_GCC_DIAGNOSTIC
-#pragma GCC diagnostic pop
-#endif
- const int is_unsigned = neg_one > const_zero;
- if (is_unsigned) {
- if (sizeof(long) < sizeof(long)) {
- return PyInt_FromLong((long) value);
- } else if (sizeof(long) <= sizeof(unsigned long)) {
- return PyLong_FromUnsignedLong((unsigned long) value);
-#ifdef HAVE_LONG_LONG
- } else if (sizeof(long) <= sizeof(unsigned PY_LONG_LONG)) {
- return PyLong_FromUnsignedLongLong((unsigned PY_LONG_LONG) value);
-#endif
- }
- } else {
- if (sizeof(long) <= sizeof(long)) {
- return PyInt_FromLong((long) value);
-#ifdef HAVE_LONG_LONG
- } else if (sizeof(long) <= sizeof(PY_LONG_LONG)) {
- return PyLong_FromLongLong((PY_LONG_LONG) value);
-#endif
- }
- }
- {
- int one = 1; int little = (int)*(unsigned char *)&one;
- unsigned char *bytes = (unsigned char *)&value;
- return _PyLong_FromByteArray(bytes, sizeof(long),
- little, !is_unsigned);
- }
-}
-
-/* CIntFromPyVerify */
-#define __PYX_VERIFY_RETURN_INT(target_type, func_type, func_value)\
- __PYX__VERIFY_RETURN_INT(target_type, func_type, func_value, 0)
-#define __PYX_VERIFY_RETURN_INT_EXC(target_type, func_type, func_value)\
- __PYX__VERIFY_RETURN_INT(target_type, func_type, func_value, 1)
-#define __PYX__VERIFY_RETURN_INT(target_type, func_type, func_value, exc)\
- {\
- func_type value = func_value;\
- if (sizeof(target_type) < sizeof(func_type)) {\
- if (unlikely(value != (func_type) (target_type) value)) {\
- func_type zero = 0;\
- if (exc && unlikely(value == (func_type)-1 && PyErr_Occurred()))\
- return (target_type) -1;\
- if (is_unsigned && unlikely(value < zero))\
- goto raise_neg_overflow;\
- else\
- goto raise_overflow;\
- }\
- }\
- return (target_type) value;\
- }
-
-/* CIntFromPy */
-static CYTHON_INLINE long __Pyx_PyInt_As_long(PyObject *x) {
-#ifdef __Pyx_HAS_GCC_DIAGNOSTIC
-#pragma GCC diagnostic push
-#pragma GCC diagnostic ignored "-Wconversion"
-#endif
- const long neg_one = (long) -1, const_zero = (long) 0;
-#ifdef __Pyx_HAS_GCC_DIAGNOSTIC
-#pragma GCC diagnostic pop
-#endif
- const int is_unsigned = neg_one > const_zero;
-#if PY_MAJOR_VERSION < 3
- if (likely(PyInt_Check(x))) {
- if (sizeof(long) < sizeof(long)) {
- __PYX_VERIFY_RETURN_INT(long, long, PyInt_AS_LONG(x))
- } else {
- long val = PyInt_AS_LONG(x);
- if (is_unsigned && unlikely(val < 0)) {
- goto raise_neg_overflow;
- }
- return (long) val;
- }
- } else
-#endif
- if (likely(PyLong_Check(x))) {
- if (is_unsigned) {
-#if CYTHON_USE_PYLONG_INTERNALS
- const digit* digits = ((PyLongObject*)x)->ob_digit;
- switch (Py_SIZE(x)) {
- case 0: return (long) 0;
- case 1: __PYX_VERIFY_RETURN_INT(long, digit, digits[0])
- case 2:
- if (8 * sizeof(long) > 1 * PyLong_SHIFT) {
- if (8 * sizeof(unsigned long) > 2 * PyLong_SHIFT) {
- __PYX_VERIFY_RETURN_INT(long, unsigned long, (((((unsigned long)digits[1]) << PyLong_SHIFT) | (unsigned long)digits[0])))
- } else if (8 * sizeof(long) >= 2 * PyLong_SHIFT) {
- return (long) (((((long)digits[1]) << PyLong_SHIFT) | (long)digits[0]));
- }
- }
- break;
- case 3:
- if (8 * sizeof(long) > 2 * PyLong_SHIFT) {
- if (8 * sizeof(unsigned long) > 3 * PyLong_SHIFT) {
- __PYX_VERIFY_RETURN_INT(long, unsigned long, (((((((unsigned long)digits[2]) << PyLong_SHIFT) | (unsigned long)digits[1]) << PyLong_SHIFT) | (unsigned long)digits[0])))
- } else if (8 * sizeof(long) >= 3 * PyLong_SHIFT) {
- return (long) (((((((long)digits[2]) << PyLong_SHIFT) | (long)digits[1]) << PyLong_SHIFT) | (long)digits[0]));
- }
- }
- break;
- case 4:
- if (8 * sizeof(long) > 3 * PyLong_SHIFT) {
- if (8 * sizeof(unsigned long) > 4 * PyLong_SHIFT) {
- __PYX_VERIFY_RETURN_INT(long, unsigned long, (((((((((unsigned long)digits[3]) << PyLong_SHIFT) | (unsigned long)digits[2]) << PyLong_SHIFT) | (unsigned long)digits[1]) << PyLong_SHIFT) | (unsigned long)digits[0])))
- } else if (8 * sizeof(long) >= 4 * PyLong_SHIFT) {
- return (long) (((((((((long)digits[3]) << PyLong_SHIFT) | (long)digits[2]) << PyLong_SHIFT) | (long)digits[1]) << PyLong_SHIFT) | (long)digits[0]));
- }
- }
- break;
- }
-#endif
-#if CYTHON_COMPILING_IN_CPYTHON && PY_VERSION_HEX < 0x030C00A7
- if (unlikely(Py_SIZE(x) < 0)) {
- goto raise_neg_overflow;
- }
-#else
- {
- int result = PyObject_RichCompareBool(x, Py_False, Py_LT);
- if (unlikely(result < 0))
- return (long) -1;
- if (unlikely(result == 1))
- goto raise_neg_overflow;
- }
-#endif
- if (sizeof(long) <= sizeof(unsigned long)) {
- __PYX_VERIFY_RETURN_INT_EXC(long, unsigned long, PyLong_AsUnsignedLong(x))
-#ifdef HAVE_LONG_LONG
- } else if (sizeof(long) <= sizeof(unsigned PY_LONG_LONG)) {
- __PYX_VERIFY_RETURN_INT_EXC(long, unsigned PY_LONG_LONG, PyLong_AsUnsignedLongLong(x))
-#endif
- }
- } else {
-#if CYTHON_USE_PYLONG_INTERNALS
- const digit* digits = ((PyLongObject*)x)->ob_digit;
- switch (Py_SIZE(x)) {
- case 0: return (long) 0;
- case -1: __PYX_VERIFY_RETURN_INT(long, sdigit, (sdigit) (-(sdigit)digits[0]))
- case 1: __PYX_VERIFY_RETURN_INT(long, digit, +digits[0])
- case -2:
- if (8 * sizeof(long) - 1 > 1 * PyLong_SHIFT) {
- if (8 * sizeof(unsigned long) > 2 * PyLong_SHIFT) {
- __PYX_VERIFY_RETURN_INT(long, long, -(long) (((((unsigned long)digits[1]) << PyLong_SHIFT) | (unsigned long)digits[0])))
- } else if (8 * sizeof(long) - 1 > 2 * PyLong_SHIFT) {
- return (long) (((long)-1)*(((((long)digits[1]) << PyLong_SHIFT) | (long)digits[0])));
- }
- }
- break;
- case 2:
- if (8 * sizeof(long) > 1 * PyLong_SHIFT) {
- if (8 * sizeof(unsigned long) > 2 * PyLong_SHIFT) {
- __PYX_VERIFY_RETURN_INT(long, unsigned long, (((((unsigned long)digits[1]) << PyLong_SHIFT) | (unsigned long)digits[0])))
- } else if (8 * sizeof(long) - 1 > 2 * PyLong_SHIFT) {
- return (long) ((((((long)digits[1]) << PyLong_SHIFT) | (long)digits[0])));
- }
- }
- break;
- case -3:
- if (8 * sizeof(long) - 1 > 2 * PyLong_SHIFT) {
- if (8 * sizeof(unsigned long) > 3 * PyLong_SHIFT) {
- __PYX_VERIFY_RETURN_INT(long, long, -(long) (((((((unsigned long)digits[2]) << PyLong_SHIFT) | (unsigned long)digits[1]) << PyLong_SHIFT) | (unsigned long)digits[0])))
- } else if (8 * sizeof(long) - 1 > 3 * PyLong_SHIFT) {
- return (long) (((long)-1)*(((((((long)digits[2]) << PyLong_SHIFT) | (long)digits[1]) << PyLong_SHIFT) | (long)digits[0])));
- }
- }
- break;
- case 3:
- if (8 * sizeof(long) > 2 * PyLong_SHIFT) {
- if (8 * sizeof(unsigned long) > 3 * PyLong_SHIFT) {
- __PYX_VERIFY_RETURN_INT(long, unsigned long, (((((((unsigned long)digits[2]) << PyLong_SHIFT) | (unsigned long)digits[1]) << PyLong_SHIFT) | (unsigned long)digits[0])))
- } else if (8 * sizeof(long) - 1 > 3 * PyLong_SHIFT) {
- return (long) ((((((((long)digits[2]) << PyLong_SHIFT) | (long)digits[1]) << PyLong_SHIFT) | (long)digits[0])));
- }
- }
- break;
- case -4:
- if (8 * sizeof(long) - 1 > 3 * PyLong_SHIFT) {
- if (8 * sizeof(unsigned long) > 4 * PyLong_SHIFT) {
- __PYX_VERIFY_RETURN_INT(long, long, -(long) (((((((((unsigned long)digits[3]) << PyLong_SHIFT) | (unsigned long)digits[2]) << PyLong_SHIFT) | (unsigned long)digits[1]) << PyLong_SHIFT) | (unsigned long)digits[0])))
- } else if (8 * sizeof(long) - 1 > 4 * PyLong_SHIFT) {
- return (long) (((long)-1)*(((((((((long)digits[3]) << PyLong_SHIFT) | (long)digits[2]) << PyLong_SHIFT) | (long)digits[1]) << PyLong_SHIFT) | (long)digits[0])));
- }
- }
- break;
- case 4:
- if (8 * sizeof(long) > 3 * PyLong_SHIFT) {
- if (8 * sizeof(unsigned long) > 4 * PyLong_SHIFT) {
- __PYX_VERIFY_RETURN_INT(long, unsigned long, (((((((((unsigned long)digits[3]) << PyLong_SHIFT) | (unsigned long)digits[2]) << PyLong_SHIFT) | (unsigned long)digits[1]) << PyLong_SHIFT) | (unsigned long)digits[0])))
- } else if (8 * sizeof(long) - 1 > 4 * PyLong_SHIFT) {
- return (long) ((((((((((long)digits[3]) << PyLong_SHIFT) | (long)digits[2]) << PyLong_SHIFT) | (long)digits[1]) << PyLong_SHIFT) | (long)digits[0])));
- }
- }
- break;
- }
-#endif
- if (sizeof(long) <= sizeof(long)) {
- __PYX_VERIFY_RETURN_INT_EXC(long, long, PyLong_AsLong(x))
-#ifdef HAVE_LONG_LONG
- } else if (sizeof(long) <= sizeof(PY_LONG_LONG)) {
- __PYX_VERIFY_RETURN_INT_EXC(long, PY_LONG_LONG, PyLong_AsLongLong(x))
-#endif
- }
- }
- {
-#if CYTHON_COMPILING_IN_PYPY && !defined(_PyLong_AsByteArray)
- PyErr_SetString(PyExc_RuntimeError,
- "_PyLong_AsByteArray() not available in PyPy, cannot convert large numbers");
-#else
- long val;
- PyObject *v = __Pyx_PyNumber_IntOrLong(x);
- #if PY_MAJOR_VERSION < 3
- if (likely(v) && !PyLong_Check(v)) {
- PyObject *tmp = v;
- v = PyNumber_Long(tmp);
- Py_DECREF(tmp);
- }
- #endif
- if (likely(v)) {
- int one = 1; int is_little = (int)*(unsigned char *)&one;
- unsigned char *bytes = (unsigned char *)&val;
- int ret = _PyLong_AsByteArray((PyLongObject *)v,
- bytes, sizeof(val),
- is_little, !is_unsigned);
- Py_DECREF(v);
- if (likely(!ret))
- return val;
- }
-#endif
- return (long) -1;
- }
- } else {
- long val;
- PyObject *tmp = __Pyx_PyNumber_IntOrLong(x);
- if (!tmp) return (long) -1;
- val = __Pyx_PyInt_As_long(tmp);
- Py_DECREF(tmp);
- return val;
- }
-raise_overflow:
- PyErr_SetString(PyExc_OverflowError,
- "value too large to convert to long");
- return (long) -1;
-raise_neg_overflow:
- PyErr_SetString(PyExc_OverflowError,
- "can't convert negative value to long");
- return (long) -1;
-}
-
-/* CIntFromPy */
-static CYTHON_INLINE int __Pyx_PyInt_As_int(PyObject *x) {
-#ifdef __Pyx_HAS_GCC_DIAGNOSTIC
-#pragma GCC diagnostic push
-#pragma GCC diagnostic ignored "-Wconversion"
-#endif
- const int neg_one = (int) -1, const_zero = (int) 0;
-#ifdef __Pyx_HAS_GCC_DIAGNOSTIC
-#pragma GCC diagnostic pop
-#endif
- const int is_unsigned = neg_one > const_zero;
-#if PY_MAJOR_VERSION < 3
- if (likely(PyInt_Check(x))) {
- if (sizeof(int) < sizeof(long)) {
- __PYX_VERIFY_RETURN_INT(int, long, PyInt_AS_LONG(x))
- } else {
- long val = PyInt_AS_LONG(x);
- if (is_unsigned && unlikely(val < 0)) {
- goto raise_neg_overflow;
- }
- return (int) val;
- }
- } else
-#endif
- if (likely(PyLong_Check(x))) {
- if (is_unsigned) {
-#if CYTHON_USE_PYLONG_INTERNALS
- const digit* digits = ((PyLongObject*)x)->ob_digit;
- switch (Py_SIZE(x)) {
- case 0: return (int) 0;
- case 1: __PYX_VERIFY_RETURN_INT(int, digit, digits[0])
- case 2:
- if (8 * sizeof(int) > 1 * PyLong_SHIFT) {
- if (8 * sizeof(unsigned long) > 2 * PyLong_SHIFT) {
- __PYX_VERIFY_RETURN_INT(int, unsigned long, (((((unsigned long)digits[1]) << PyLong_SHIFT) | (unsigned long)digits[0])))
- } else if (8 * sizeof(int) >= 2 * PyLong_SHIFT) {
- return (int) (((((int)digits[1]) << PyLong_SHIFT) | (int)digits[0]));
- }
- }
- break;
- case 3:
- if (8 * sizeof(int) > 2 * PyLong_SHIFT) {
- if (8 * sizeof(unsigned long) > 3 * PyLong_SHIFT) {
- __PYX_VERIFY_RETURN_INT(int, unsigned long, (((((((unsigned long)digits[2]) << PyLong_SHIFT) | (unsigned long)digits[1]) << PyLong_SHIFT) | (unsigned long)digits[0])))
- } else if (8 * sizeof(int) >= 3 * PyLong_SHIFT) {
- return (int) (((((((int)digits[2]) << PyLong_SHIFT) | (int)digits[1]) << PyLong_SHIFT) | (int)digits[0]));
- }
- }
- break;
- case 4:
- if (8 * sizeof(int) > 3 * PyLong_SHIFT) {
- if (8 * sizeof(unsigned long) > 4 * PyLong_SHIFT) {
- __PYX_VERIFY_RETURN_INT(int, unsigned long, (((((((((unsigned long)digits[3]) << PyLong_SHIFT) | (unsigned long)digits[2]) << PyLong_SHIFT) | (unsigned long)digits[1]) << PyLong_SHIFT) | (unsigned long)digits[0])))
- } else if (8 * sizeof(int) >= 4 * PyLong_SHIFT) {
- return (int) (((((((((int)digits[3]) << PyLong_SHIFT) | (int)digits[2]) << PyLong_SHIFT) | (int)digits[1]) << PyLong_SHIFT) | (int)digits[0]));
- }
- }
- break;
- }
-#endif
-#if CYTHON_COMPILING_IN_CPYTHON && PY_VERSION_HEX < 0x030C00A7
- if (unlikely(Py_SIZE(x) < 0)) {
- goto raise_neg_overflow;
- }
-#else
- {
- int result = PyObject_RichCompareBool(x, Py_False, Py_LT);
- if (unlikely(result < 0))
- return (int) -1;
- if (unlikely(result == 1))
- goto raise_neg_overflow;
- }
-#endif
- if (sizeof(int) <= sizeof(unsigned long)) {
- __PYX_VERIFY_RETURN_INT_EXC(int, unsigned long, PyLong_AsUnsignedLong(x))
-#ifdef HAVE_LONG_LONG
- } else if (sizeof(int) <= sizeof(unsigned PY_LONG_LONG)) {
- __PYX_VERIFY_RETURN_INT_EXC(int, unsigned PY_LONG_LONG, PyLong_AsUnsignedLongLong(x))
-#endif
- }
- } else {
-#if CYTHON_USE_PYLONG_INTERNALS
- const digit* digits = ((PyLongObject*)x)->ob_digit;
- switch (Py_SIZE(x)) {
- case 0: return (int) 0;
- case -1: __PYX_VERIFY_RETURN_INT(int, sdigit, (sdigit) (-(sdigit)digits[0]))
- case 1: __PYX_VERIFY_RETURN_INT(int, digit, +digits[0])
- case -2:
- if (8 * sizeof(int) - 1 > 1 * PyLong_SHIFT) {
- if (8 * sizeof(unsigned long) > 2 * PyLong_SHIFT) {
- __PYX_VERIFY_RETURN_INT(int, long, -(long) (((((unsigned long)digits[1]) << PyLong_SHIFT) | (unsigned long)digits[0])))
- } else if (8 * sizeof(int) - 1 > 2 * PyLong_SHIFT) {
- return (int) (((int)-1)*(((((int)digits[1]) << PyLong_SHIFT) | (int)digits[0])));
- }
- }
- break;
- case 2:
- if (8 * sizeof(int) > 1 * PyLong_SHIFT) {
- if (8 * sizeof(unsigned long) > 2 * PyLong_SHIFT) {
- __PYX_VERIFY_RETURN_INT(int, unsigned long, (((((unsigned long)digits[1]) << PyLong_SHIFT) | (unsigned long)digits[0])))
- } else if (8 * sizeof(int) - 1 > 2 * PyLong_SHIFT) {
- return (int) ((((((int)digits[1]) << PyLong_SHIFT) | (int)digits[0])));
- }
- }
- break;
- case -3:
- if (8 * sizeof(int) - 1 > 2 * PyLong_SHIFT) {
- if (8 * sizeof(unsigned long) > 3 * PyLong_SHIFT) {
- __PYX_VERIFY_RETURN_INT(int, long, -(long) (((((((unsigned long)digits[2]) << PyLong_SHIFT) | (unsigned long)digits[1]) << PyLong_SHIFT) | (unsigned long)digits[0])))
- } else if (8 * sizeof(int) - 1 > 3 * PyLong_SHIFT) {
- return (int) (((int)-1)*(((((((int)digits[2]) << PyLong_SHIFT) | (int)digits[1]) << PyLong_SHIFT) | (int)digits[0])));
- }
- }
- break;
- case 3:
- if (8 * sizeof(int) > 2 * PyLong_SHIFT) {
- if (8 * sizeof(unsigned long) > 3 * PyLong_SHIFT) {
- __PYX_VERIFY_RETURN_INT(int, unsigned long, (((((((unsigned long)digits[2]) << PyLong_SHIFT) | (unsigned long)digits[1]) << PyLong_SHIFT) | (unsigned long)digits[0])))
- } else if (8 * sizeof(int) - 1 > 3 * PyLong_SHIFT) {
- return (int) ((((((((int)digits[2]) << PyLong_SHIFT) | (int)digits[1]) << PyLong_SHIFT) | (int)digits[0])));
- }
- }
- break;
- case -4:
- if (8 * sizeof(int) - 1 > 3 * PyLong_SHIFT) {
- if (8 * sizeof(unsigned long) > 4 * PyLong_SHIFT) {
- __PYX_VERIFY_RETURN_INT(int, long, -(long) (((((((((unsigned long)digits[3]) << PyLong_SHIFT) | (unsigned long)digits[2]) << PyLong_SHIFT) | (unsigned long)digits[1]) << PyLong_SHIFT) | (unsigned long)digits[0])))
- } else if (8 * sizeof(int) - 1 > 4 * PyLong_SHIFT) {
- return (int) (((int)-1)*(((((((((int)digits[3]) << PyLong_SHIFT) | (int)digits[2]) << PyLong_SHIFT) | (int)digits[1]) << PyLong_SHIFT) | (int)digits[0])));
- }
- }
- break;
- case 4:
- if (8 * sizeof(int) > 3 * PyLong_SHIFT) {
- if (8 * sizeof(unsigned long) > 4 * PyLong_SHIFT) {
- __PYX_VERIFY_RETURN_INT(int, unsigned long, (((((((((unsigned long)digits[3]) << PyLong_SHIFT) | (unsigned long)digits[2]) << PyLong_SHIFT) | (unsigned long)digits[1]) << PyLong_SHIFT) | (unsigned long)digits[0])))
- } else if (8 * sizeof(int) - 1 > 4 * PyLong_SHIFT) {
- return (int) ((((((((((int)digits[3]) << PyLong_SHIFT) | (int)digits[2]) << PyLong_SHIFT) | (int)digits[1]) << PyLong_SHIFT) | (int)digits[0])));
- }
- }
- break;
- }
-#endif
- if (sizeof(int) <= sizeof(long)) {
- __PYX_VERIFY_RETURN_INT_EXC(int, long, PyLong_AsLong(x))
-#ifdef HAVE_LONG_LONG
- } else if (sizeof(int) <= sizeof(PY_LONG_LONG)) {
- __PYX_VERIFY_RETURN_INT_EXC(int, PY_LONG_LONG, PyLong_AsLongLong(x))
-#endif
- }
- }
- {
-#if CYTHON_COMPILING_IN_PYPY && !defined(_PyLong_AsByteArray)
- PyErr_SetString(PyExc_RuntimeError,
- "_PyLong_AsByteArray() not available in PyPy, cannot convert large numbers");
-#else
- int val;
- PyObject *v = __Pyx_PyNumber_IntOrLong(x);
- #if PY_MAJOR_VERSION < 3
- if (likely(v) && !PyLong_Check(v)) {
- PyObject *tmp = v;
- v = PyNumber_Long(tmp);
- Py_DECREF(tmp);
- }
- #endif
- if (likely(v)) {
- int one = 1; int is_little = (int)*(unsigned char *)&one;
- unsigned char *bytes = (unsigned char *)&val;
- int ret = _PyLong_AsByteArray((PyLongObject *)v,
- bytes, sizeof(val),
- is_little, !is_unsigned);
- Py_DECREF(v);
- if (likely(!ret))
- return val;
- }
-#endif
- return (int) -1;
- }
- } else {
- int val;
- PyObject *tmp = __Pyx_PyNumber_IntOrLong(x);
- if (!tmp) return (int) -1;
- val = __Pyx_PyInt_As_int(tmp);
- Py_DECREF(tmp);
- return val;
- }
-raise_overflow:
- PyErr_SetString(PyExc_OverflowError,
- "value too large to convert to int");
- return (int) -1;
-raise_neg_overflow:
- PyErr_SetString(PyExc_OverflowError,
- "can't convert negative value to int");
- return (int) -1;
-}
-
-/* FastTypeChecks */
-#if CYTHON_COMPILING_IN_CPYTHON
-static int __Pyx_InBases(PyTypeObject *a, PyTypeObject *b) {
- while (a) {
- a = a->tp_base;
- if (a == b)
- return 1;
- }
- return b == &PyBaseObject_Type;
-}
-static CYTHON_INLINE int __Pyx_IsSubtype(PyTypeObject *a, PyTypeObject *b) {
- PyObject *mro;
- if (a == b) return 1;
- mro = a->tp_mro;
- if (likely(mro)) {
- Py_ssize_t i, n;
- n = PyTuple_GET_SIZE(mro);
- for (i = 0; i < n; i++) {
- if (PyTuple_GET_ITEM(mro, i) == (PyObject *)b)
- return 1;
- }
- return 0;
- }
- return __Pyx_InBases(a, b);
-}
-#if PY_MAJOR_VERSION == 2
-static int __Pyx_inner_PyErr_GivenExceptionMatches2(PyObject *err, PyObject* exc_type1, PyObject* exc_type2) {
- PyObject *exception, *value, *tb;
- int res;
- __Pyx_PyThreadState_declare
- __Pyx_PyThreadState_assign
- __Pyx_ErrFetch(&exception, &value, &tb);
- res = exc_type1 ? PyObject_IsSubclass(err, exc_type1) : 0;
- if (unlikely(res == -1)) {
- PyErr_WriteUnraisable(err);
- res = 0;
- }
- if (!res) {
- res = PyObject_IsSubclass(err, exc_type2);
- if (unlikely(res == -1)) {
- PyErr_WriteUnraisable(err);
- res = 0;
- }
- }
- __Pyx_ErrRestore(exception, value, tb);
- return res;
-}
-#else
-static CYTHON_INLINE int __Pyx_inner_PyErr_GivenExceptionMatches2(PyObject *err, PyObject* exc_type1, PyObject *exc_type2) {
- int res = exc_type1 ? __Pyx_IsSubtype((PyTypeObject*)err, (PyTypeObject*)exc_type1) : 0;
- if (!res) {
- res = __Pyx_IsSubtype((PyTypeObject*)err, (PyTypeObject*)exc_type2);
- }
- return res;
-}
-#endif
-static int __Pyx_PyErr_GivenExceptionMatchesTuple(PyObject *exc_type, PyObject *tuple) {
- Py_ssize_t i, n;
- assert(PyExceptionClass_Check(exc_type));
- n = PyTuple_GET_SIZE(tuple);
-#if PY_MAJOR_VERSION >= 3
- for (i=0; i<n; i++) {
- if (exc_type == PyTuple_GET_ITEM(tuple, i)) return 1;
- }
-#endif
- for (i=0; i<n; i++) {
- PyObject *t = PyTuple_GET_ITEM(tuple, i);
- #if PY_MAJOR_VERSION < 3
- if (likely(exc_type == t)) return 1;
- #endif
- if (likely(PyExceptionClass_Check(t))) {
- if (__Pyx_inner_PyErr_GivenExceptionMatches2(exc_type, NULL, t)) return 1;
- }
- }
- return 0;
-}
-static CYTHON_INLINE int __Pyx_PyErr_GivenExceptionMatches(PyObject *err, PyObject* exc_type) {
- if (likely(err == exc_type)) return 1;
- if (likely(PyExceptionClass_Check(err))) {
- if (likely(PyExceptionClass_Check(exc_type))) {
- return __Pyx_inner_PyErr_GivenExceptionMatches2(err, NULL, exc_type);
- } else if (likely(PyTuple_Check(exc_type))) {
- return __Pyx_PyErr_GivenExceptionMatchesTuple(err, exc_type);
- }
- }
- return PyErr_GivenExceptionMatches(err, exc_type);
-}
-static CYTHON_INLINE int __Pyx_PyErr_GivenExceptionMatches2(PyObject *err, PyObject *exc_type1, PyObject *exc_type2) {
- assert(PyExceptionClass_Check(exc_type1));
- assert(PyExceptionClass_Check(exc_type2));
- if (likely(err == exc_type1 || err == exc_type2)) return 1;
- if (likely(PyExceptionClass_Check(err))) {
- return __Pyx_inner_PyErr_GivenExceptionMatches2(err, exc_type1, exc_type2);
- }
- return (PyErr_GivenExceptionMatches(err, exc_type1) || PyErr_GivenExceptionMatches(err, exc_type2));
-}
-#endif
-
-/* CheckBinaryVersion */
-static int __Pyx_check_binary_version(void) {
- char ctversion[5];
- int same=1, i, found_dot;
- const char* rt_from_call = Py_GetVersion();
- PyOS_snprintf(ctversion, 5, "%d.%d", PY_MAJOR_VERSION, PY_MINOR_VERSION);
- found_dot = 0;
- for (i = 0; i < 4; i++) {
- if (!ctversion[i]) {
- same = (rt_from_call[i] < '0' || rt_from_call[i] > '9');
- break;
- }
- if (rt_from_call[i] != ctversion[i]) {
- same = 0;
- break;
- }
- }
- if (!same) {
- char rtversion[5] = {'\0'};
- char message[200];
- for (i=0; i<4; ++i) {
- if (rt_from_call[i] == '.') {
- if (found_dot) break;
- found_dot = 1;
- } else if (rt_from_call[i] < '0' || rt_from_call[i] > '9') {
- break;
- }
- rtversion[i] = rt_from_call[i];
- }
- PyOS_snprintf(message, sizeof(message),
- "compiletime version %s of module '%.100s' "
- "does not match runtime version %s",
- ctversion, __Pyx_MODULE_NAME, rtversion);
- return PyErr_WarnEx(NULL, message, 1);
- }
- return 0;
-}
-
-/* InitStrings */
-static int __Pyx_InitStrings(__Pyx_StringTabEntry *t) {
- while (t->p) {
- #if PY_MAJOR_VERSION < 3
- if (t->is_unicode) {
- *t->p = PyUnicode_DecodeUTF8(t->s, t->n - 1, NULL);
- } else if (t->intern) {
- *t->p = PyString_InternFromString(t->s);
- } else {
- *t->p = PyString_FromStringAndSize(t->s, t->n - 1);
- }
- #else
- if (t->is_unicode | t->is_str) {
- if (t->intern) {
- *t->p = PyUnicode_InternFromString(t->s);
- } else if (t->encoding) {
- *t->p = PyUnicode_Decode(t->s, t->n - 1, t->encoding, NULL);
- } else {
- *t->p = PyUnicode_FromStringAndSize(t->s, t->n - 1);
- }
- } else {
- *t->p = PyBytes_FromStringAndSize(t->s, t->n - 1);
- }
- #endif
- if (!*t->p)
- return -1;
- if (PyObject_Hash(*t->p) == -1)
- return -1;
- ++t;
- }
- return 0;
-}
-
-static CYTHON_INLINE PyObject* __Pyx_PyUnicode_FromString(const char* c_str) {
- return __Pyx_PyUnicode_FromStringAndSize(c_str, (Py_ssize_t)strlen(c_str));
-}
-static CYTHON_INLINE const char* __Pyx_PyObject_AsString(PyObject* o) {
- Py_ssize_t ignore;
- return __Pyx_PyObject_AsStringAndSize(o, &ignore);
-}
-#if __PYX_DEFAULT_STRING_ENCODING_IS_ASCII || __PYX_DEFAULT_STRING_ENCODING_IS_DEFAULT
-#if !CYTHON_PEP393_ENABLED
-static const char* __Pyx_PyUnicode_AsStringAndSize(PyObject* o, Py_ssize_t *length) {
- char* defenc_c;
- PyObject* defenc = _PyUnicode_AsDefaultEncodedString(o, NULL);
- if (!defenc) return NULL;
- defenc_c = PyBytes_AS_STRING(defenc);
-#if __PYX_DEFAULT_STRING_ENCODING_IS_ASCII
- {
- char* end = defenc_c + PyBytes_GET_SIZE(defenc);
- char* c;
- for (c = defenc_c; c < end; c++) {
- if ((unsigned char) (*c) >= 128) {
- PyUnicode_AsASCIIString(o);
- return NULL;
- }
- }
- }
-#endif
- *length = PyBytes_GET_SIZE(defenc);
- return defenc_c;
-}
-#else
-static CYTHON_INLINE const char* __Pyx_PyUnicode_AsStringAndSize(PyObject* o, Py_ssize_t *length) {
- if (unlikely(__Pyx_PyUnicode_READY(o) == -1)) return NULL;
-#if __PYX_DEFAULT_STRING_ENCODING_IS_ASCII
- if (likely(PyUnicode_IS_ASCII(o))) {
- *length = PyUnicode_GET_LENGTH(o);
- return PyUnicode_AsUTF8(o);
- } else {
- PyUnicode_AsASCIIString(o);
- return NULL;
- }
-#else
- return PyUnicode_AsUTF8AndSize(o, length);
-#endif
-}
-#endif
-#endif
-static CYTHON_INLINE const char* __Pyx_PyObject_AsStringAndSize(PyObject* o, Py_ssize_t *length) {
-#if __PYX_DEFAULT_STRING_ENCODING_IS_ASCII || __PYX_DEFAULT_STRING_ENCODING_IS_DEFAULT
- if (
-#if PY_MAJOR_VERSION < 3 && __PYX_DEFAULT_STRING_ENCODING_IS_ASCII
- __Pyx_sys_getdefaultencoding_not_ascii &&
-#endif
- PyUnicode_Check(o)) {
- return __Pyx_PyUnicode_AsStringAndSize(o, length);
- } else
-#endif
-#if (!CYTHON_COMPILING_IN_PYPY) || (defined(PyByteArray_AS_STRING) && defined(PyByteArray_GET_SIZE))
- if (PyByteArray_Check(o)) {
- *length = PyByteArray_GET_SIZE(o);
- return PyByteArray_AS_STRING(o);
- } else
-#endif
- {
- char* result;
- int r = PyBytes_AsStringAndSize(o, &result, length);
- if (unlikely(r < 0)) {
- return NULL;
- } else {
- return result;
- }
- }
-}
-static CYTHON_INLINE int __Pyx_PyObject_IsTrue(PyObject* x) {
- int is_true = x == Py_True;
- if (is_true | (x == Py_False) | (x == Py_None)) return is_true;
- else return PyObject_IsTrue(x);
-}
-static CYTHON_INLINE int __Pyx_PyObject_IsTrueAndDecref(PyObject* x) {
- int retval;
- if (unlikely(!x)) return -1;
- retval = __Pyx_PyObject_IsTrue(x);
- Py_DECREF(x);
- return retval;
-}
-static PyObject* __Pyx_PyNumber_IntOrLongWrongResultType(PyObject* result, const char* type_name) {
-#if PY_MAJOR_VERSION >= 3
- if (PyLong_Check(result)) {
- if (PyErr_WarnFormat(PyExc_DeprecationWarning, 1,
- "__int__ returned non-int (type %.200s). "
- "The ability to return an instance of a strict subclass of int "
- "is deprecated, and may be removed in a future version of Python.",
- Py_TYPE(result)->tp_name)) {
- Py_DECREF(result);
- return NULL;
- }
- return result;
- }
-#endif
- PyErr_Format(PyExc_TypeError,
- "__%.4s__ returned non-%.4s (type %.200s)",
- type_name, type_name, Py_TYPE(result)->tp_name);
- Py_DECREF(result);
- return NULL;
-}
-static CYTHON_INLINE PyObject* __Pyx_PyNumber_IntOrLong(PyObject* x) {
-#if CYTHON_USE_TYPE_SLOTS
- PyNumberMethods *m;
-#endif
- const char *name = NULL;
- PyObject *res = NULL;
-#if PY_MAJOR_VERSION < 3
- if (likely(PyInt_Check(x) || PyLong_Check(x)))
-#else
- if (likely(PyLong_Check(x)))
-#endif
- return __Pyx_NewRef(x);
-#if CYTHON_USE_TYPE_SLOTS
- m = Py_TYPE(x)->tp_as_number;
- #if PY_MAJOR_VERSION < 3
- if (m && m->nb_int) {
- name = "int";
- res = m->nb_int(x);
- }
- else if (m && m->nb_long) {
- name = "long";
- res = m->nb_long(x);
- }
- #else
- if (likely(m && m->nb_int)) {
- name = "int";
- res = m->nb_int(x);
- }
- #endif
-#else
- if (!PyBytes_CheckExact(x) && !PyUnicode_CheckExact(x)) {
- res = PyNumber_Int(x);
- }
-#endif
- if (likely(res)) {
-#if PY_MAJOR_VERSION < 3
- if (unlikely(!PyInt_Check(res) && !PyLong_Check(res))) {
-#else
- if (unlikely(!PyLong_CheckExact(res))) {
-#endif
- return __Pyx_PyNumber_IntOrLongWrongResultType(res, name);
- }
- }
- else if (!PyErr_Occurred()) {
- PyErr_SetString(PyExc_TypeError,
- "an integer is required");
- }
- return res;
-}
-static CYTHON_INLINE Py_ssize_t __Pyx_PyIndex_AsSsize_t(PyObject* b) {
- Py_ssize_t ival;
- PyObject *x;
-#if PY_MAJOR_VERSION < 3
- if (likely(PyInt_CheckExact(b))) {
- if (sizeof(Py_ssize_t) >= sizeof(long))
- return PyInt_AS_LONG(b);
- else
- return PyInt_AsSsize_t(b);
- }
-#endif
- if (likely(PyLong_CheckExact(b))) {
- #if CYTHON_USE_PYLONG_INTERNALS
- const digit* digits = ((PyLongObject*)b)->ob_digit;
- const Py_ssize_t size = Py_SIZE(b);
- if (likely(__Pyx_sst_abs(size) <= 1)) {
- ival = likely(size) ? digits[0] : 0;
- if (size == -1) ival = -ival;
- return ival;
- } else {
- switch (size) {
- case 2:
- if (8 * sizeof(Py_ssize_t) > 2 * PyLong_SHIFT) {
- return (Py_ssize_t) (((((size_t)digits[1]) << PyLong_SHIFT) | (size_t)digits[0]));
- }
- break;
- case -2:
- if (8 * sizeof(Py_ssize_t) > 2 * PyLong_SHIFT) {
- return -(Py_ssize_t) (((((size_t)digits[1]) << PyLong_SHIFT) | (size_t)digits[0]));
- }
- break;
- case 3:
- if (8 * sizeof(Py_ssize_t) > 3 * PyLong_SHIFT) {
- return (Py_ssize_t) (((((((size_t)digits[2]) << PyLong_SHIFT) | (size_t)digits[1]) << PyLong_SHIFT) | (size_t)digits[0]));
- }
- break;
- case -3:
- if (8 * sizeof(Py_ssize_t) > 3 * PyLong_SHIFT) {
- return -(Py_ssize_t) (((((((size_t)digits[2]) << PyLong_SHIFT) | (size_t)digits[1]) << PyLong_SHIFT) | (size_t)digits[0]));
- }
- break;
- case 4:
- if (8 * sizeof(Py_ssize_t) > 4 * PyLong_SHIFT) {
- return (Py_ssize_t) (((((((((size_t)digits[3]) << PyLong_SHIFT) | (size_t)digits[2]) << PyLong_SHIFT) | (size_t)digits[1]) << PyLong_SHIFT) | (size_t)digits[0]));
- }
- break;
- case -4:
- if (8 * sizeof(Py_ssize_t) > 4 * PyLong_SHIFT) {
- return -(Py_ssize_t) (((((((((size_t)digits[3]) << PyLong_SHIFT) | (size_t)digits[2]) << PyLong_SHIFT) | (size_t)digits[1]) << PyLong_SHIFT) | (size_t)digits[0]));
- }
- break;
- }
- }
- #endif
- return PyLong_AsSsize_t(b);
- }
- x = PyNumber_Index(b);
- if (!x) return -1;
- ival = PyInt_AsSsize_t(x);
- Py_DECREF(x);
- return ival;
-}
-static CYTHON_INLINE Py_hash_t __Pyx_PyIndex_AsHash_t(PyObject* o) {
- if (sizeof(Py_hash_t) == sizeof(Py_ssize_t)) {
- return (Py_hash_t) __Pyx_PyIndex_AsSsize_t(o);
-#if PY_MAJOR_VERSION < 3
- } else if (likely(PyInt_CheckExact(o))) {
- return PyInt_AS_LONG(o);
-#endif
- } else {
- Py_ssize_t ival;
- PyObject *x;
- x = PyNumber_Index(o);
- if (!x) return -1;
- ival = PyInt_AsLong(x);
- Py_DECREF(x);
- return ival;
- }
-}
-static CYTHON_INLINE PyObject * __Pyx_PyBool_FromLong(long b) {
- return b ? __Pyx_NewRef(Py_True) : __Pyx_NewRef(Py_False);
-}
-static CYTHON_INLINE PyObject * __Pyx_PyInt_FromSize_t(size_t ival) {
- return PyInt_FromSize_t(ival);
-}
-
-
-#endif /* Py_PYTHON_H */
diff --git a/spaces/DQChoi/gpt-demo/venv/lib/python3.11/site-packages/fontTools/ttLib/tables/_p_r_e_p.py b/spaces/DQChoi/gpt-demo/venv/lib/python3.11/site-packages/fontTools/ttLib/tables/_p_r_e_p.py
deleted file mode 100644
index b4b92f3e924ba2f20ade9a6cca45ce78284ffe21..0000000000000000000000000000000000000000
--- a/spaces/DQChoi/gpt-demo/venv/lib/python3.11/site-packages/fontTools/ttLib/tables/_p_r_e_p.py
+++ /dev/null
@@ -1,7 +0,0 @@
-from fontTools import ttLib
-
-superclass = ttLib.getTableClass("fpgm")
-
-
-class table__p_r_e_p(superclass):
- pass
diff --git a/spaces/DQChoi/gpt-demo/venv/lib/python3.11/site-packages/fsspec/archive.py b/spaces/DQChoi/gpt-demo/venv/lib/python3.11/site-packages/fsspec/archive.py
deleted file mode 100644
index dc5c1490b972c592fd3eb9aaeb30b589e384ccb7..0000000000000000000000000000000000000000
--- a/spaces/DQChoi/gpt-demo/venv/lib/python3.11/site-packages/fsspec/archive.py
+++ /dev/null
@@ -1,73 +0,0 @@
-from fsspec import AbstractFileSystem
-from fsspec.utils import tokenize
-
-
-class AbstractArchiveFileSystem(AbstractFileSystem):
- """
- A generic superclass for implementing Archive-based filesystems.
-
- Currently, it is shared amongst
- :class:`~fsspec.implementations.zip.ZipFileSystem`,
- :class:`~fsspec.implementations.libarchive.LibArchiveFileSystem` and
- :class:`~fsspec.implementations.tar.TarFileSystem`.
- """
-
- def __str__(self):
- return "" % (type(self).__name__, id(self))
-
- __repr__ = __str__
-
- def ukey(self, path):
- return tokenize(path, self.fo, self.protocol)
-
- def _all_dirnames(self, paths):
- """Returns *all* directory names for each path in paths, including intermediate
- ones.
-
- Parameters
- ----------
- paths: Iterable of path strings
- """
- if len(paths) == 0:
- return set()
-
- dirnames = {self._parent(path) for path in paths} - {self.root_marker}
- return dirnames | self._all_dirnames(dirnames)
-
- def info(self, path, **kwargs):
- self._get_dirs()
- path = self._strip_protocol(path)
- if path in {"", "/"} and self.dir_cache:
- return {"name": "/", "type": "directory", "size": 0}
- if path in self.dir_cache:
- return self.dir_cache[path]
- elif path + "/" in self.dir_cache:
- return self.dir_cache[path + "/"]
- else:
- raise FileNotFoundError(path)
-
- def ls(self, path, detail=True, **kwargs):
- self._get_dirs()
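- # Direct children of `path` are returned as-is; deeper cached entries
- # surface as implicit top-level directory entries (e.g. for root listings).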
- paths = {}
- for p, f in self.dir_cache.items():
- p = p.rstrip("/")
- if "/" in p:
- root = p.rsplit("/", 1)[0]
- else:
- root = ""
- if root == path.rstrip("/"):
- paths[p] = f
- elif all(
- (a == b)
- for a, b in zip(path.split("/"), [""] + p.strip("/").split("/"))
- ):
- # root directory entry
- ppath = p.rstrip("/").split("/", 1)[0]
- if ppath not in paths:
- out = {"name": ppath + "/", "size": 0, "type": "directory"}
- paths[ppath] = out
- out = sorted(paths.values(), key=lambda _: _["name"])
- if detail:
- return out
- else:
- return [f["name"] for f in out]
diff --git a/spaces/DQChoi/gpt-demo/venv/lib/python3.11/site-packages/gradio/templates/cdn/assets/index-3ca142e0.css b/spaces/DQChoi/gpt-demo/venv/lib/python3.11/site-packages/gradio/templates/cdn/assets/index-3ca142e0.css
deleted file mode 100644
index 77ebe6c1fea2e3557f76088bb9f5c30e2cfdb72a..0000000000000000000000000000000000000000
--- a/spaces/DQChoi/gpt-demo/venv/lib/python3.11/site-packages/gradio/templates/cdn/assets/index-3ca142e0.css
+++ /dev/null
@@ -1 +0,0 @@
-.spacer.svelte-1kspdo{display:inline-block;width:0;height:0}.json-node.svelte-1kspdo{display:inline;color:var(--body-text-color);line-height:var(--line-sm);font-family:var(--font-mono)}.expand-array.svelte-1kspdo{border:1px solid var(--border-color-primary);border-radius:var(--radius-sm);background:var(--background-fill-secondary);padding:0 var(--size-1);color:var(--body-text-color)}.expand-array.svelte-1kspdo:hover{background:var(--background-fill-primary)}.children.svelte-1kspdo{padding-left:var(--size-4)}.json-item.svelte-1kspdo{display:inline}.null.svelte-1kspdo{color:var(--body-text-color-subdued)}.string.svelte-1kspdo{color:var(--color-green-500)}.number.svelte-1kspdo{color:var(--color-blue-500)}.bool.svelte-1kspdo{color:var(--color-red-500)}.json-holder.svelte-1trjy9a{padding:var(--size-2)}button.svelte-1trjy9a{display:flex;position:absolute;top:var(--block-label-margin);right:var(--block-label-margin);align-items:center;box-shadow:var(--shadow-drop);border:1px solid var(--border-color-primary);border-top:none;border-right:none;border-radius:var(--block-label-right-radius);background:var(--block-label-background-fill);padding:5px;width:22px;height:22px;overflow:hidden;color:var(--block-label-text-color);font:var(--font);font-size:var(--button-small-text-size)}
diff --git a/spaces/DQChoi/gpt-demo/venv/lib/python3.11/site-packages/gradio/templates/frontend/assets/index-ff630227.js b/spaces/DQChoi/gpt-demo/venv/lib/python3.11/site-packages/gradio/templates/frontend/assets/index-ff630227.js
deleted file mode 100644
index b751b8d21ad15166f14450721149a6e971887d91..0000000000000000000000000000000000000000
--- a/spaces/DQChoi/gpt-demo/venv/lib/python3.11/site-packages/gradio/templates/frontend/assets/index-ff630227.js
+++ /dev/null
@@ -1,3 +0,0 @@
-import{S as I,e as J,s as K,J as U,K as u,p as j,M as y,n as P,A as E,N as R,O as V,P as D,L as F,Z as Le,ar as je,R as G,G as T,m as Z,V as Y,B as be,C as Ee,av as Q,aj as Ae,X as Ce,k as O,o as X,z as B,v as S,x as q,E as Me,ae as ze,q as Te,r as Be,u as pe,y as ke}from"./index-3370be2a.js";import{U as Se}from"./Upload-f29b2460.js";import{M as Ue}from"./ModifyUpload-d8fc50ab.js";import{B as Ne}from"./Button-89624748.js";import{B as Fe}from"./BlockLabel-56db415e.js";import{E as Oe}from"./Empty-585389a4.js";import{g as Xe}from"./color-baaf9df5.js";import{a as qe}from"./csv-b0b7514a.js";import{Z as x,_ as $,l as ee}from"./linear-58a44b5e.js";import{U as He}from"./UploadText-28892309.js";import"./Blocks-f0129fcd.js";import"./ModifyUpload.svelte_svelte_type_style_lang-d2acacf0.js";import"./IconButton-abe5ede9.js";import"./dsv-576afacd.js";function Pe(l){let e,n,t;return{c(){e=U("svg"),n=U("path"),t=U("path"),u(n,"d","M28.828 3.172a4.094 4.094 0 0 0-5.656 0L4.05 22.292A6.954 6.954 0 0 0 2 27.242V30h2.756a6.952 6.952 0 0 0 4.95-2.05L28.828 8.829a3.999 3.999 0 0 0 0-5.657zM10.91 18.26l2.829 2.829l-2.122 2.121l-2.828-2.828zm-2.619 8.276A4.966 4.966 0 0 1 4.756 28H4v-.759a4.967 4.967 0 0 1 1.464-3.535l1.91-1.91l2.829 2.828zM27.415 7.414l-12.261 12.26l-2.829-2.828l12.262-12.26a2.047 2.047 0 0 1 2.828 0a2 2 0 0 1 0 2.828z"),u(n,"fill","currentColor"),u(t,"d","M6.5 15a3.5 3.5 0 0 1-2.475-5.974l3.5-3.5a1.502 1.502 0 0 0 0-2.121a1.537 1.537 0 0 0-2.121 0L3.415 5.394L2 3.98l1.99-1.988a3.585 3.585 0 0 1 4.95 0a3.504 3.504 0 0 1 0 4.949L5.439 10.44a1.502 1.502 0 0 0 0 2.121a1.537 1.537 0 0 0 2.122 0l4.024-4.024L13 9.95l-4.025 4.024A3.475 3.475 0 0 1 6.5 15z"),u(t,"fill","currentColor"),u(e,"width","1em"),u(e,"height","1em"),u(e,"viewBox","0 0 32 32")},m(a,s){j(a,e,s),y(e,n),y(e,t)},p:P,i:P,o:P,d(a){a&&E(e)}}}let ye=class extends I{constructor(e){super(),J(this,e,null,Pe,K,{})}};function le(l){let e;return Array.isArray(l)?e=l.reduce((n,{values:t})=>[...n,...t.map(({y:a})=>a)],[]):e=l.values,[Math.min(...e),Math.max(...e)]}function te(l,e,n){const t=Object.entries(l[0]).reduce((a,s,o)=>(!e&&o===0||e&&s[0]===e?a.x.name=s[0]:(!n||n&&n.includes(s[0]))&&a.y.push({name:s[0],values:[]}),a),{x:{name:"",values:[]},y:[]});for(let a=0;al[6].call(e))},m(o,_){j(o,e,_),y(e,n),y(e,t),y(e,a),s=je(e,l[6].bind(e))},p(o,[_]){_&8&&F(n,"background",o[3]),_&1&&G(a,o[0]),_&36&&F(e,"top",o[2]-o[5]/2+"px"),_&18&&F(e,"left",o[1]-o[4]-7+"px")},i:P,o:P,d(o){o&&E(e),s()}}}function Ve(l,e,n){let{text:t}=e,{x:a}=e,{y:s}=e,{color:o}=e,_,i;function v(){_=this.offsetWidth,i=this.offsetHeight,n(4,_),n(5,i)}return l.$$set=g=>{"text"in g&&n(0,t=g.text),"x"in g&&n(1,a=g.x),"y"in g&&n(2,s=g.y),"color"in g&&n(3,o=g.color)},[t,a,s,o,_,i,v]}class Ye extends I{constructor(e){super(),J(this,e,Ve,Re,K,{text:0,x:1,y:2,color:3})}}function Ze(l,{color:e,text:n}){let t;function a(i){return t=new Ye({props:{text:n,x:i.pageX,y:i.pageY,color:e},target:document.body}),i}function s(i){t.$set({x:i.pageX,y:i.pageY})}function o(){t.$destroy()}const _=l;return _.addEventListener("mouseover",a),_.addEventListener("mouseleave",o),_.addEventListener("mousemove",s),{destroy(){_.removeEventListener("mouseover",a),_.removeEventListener("mouseleave",o),_.removeEventListener("mousemove",s)}}}function ne(l,e,n){const t=l.slice();t[16]=e[n].name,t[17]=e[n].values;const a=t[8][t[16]];return t[18]=a,t}function ae(l,e,n){const t=l.slice();return t[0]=e[n].x,t[1]=e[n].y,t}function oe(l,e,n){const t=l.slice();t[16]=e[n].name,t[17]=e[n].values;const a=t[8][t[16]];return 
t[18]=a,t}function se(l,e,n){const t=l.slice();return t[0]=e[n].x,t[1]=e[n].y,t}function re(l,e,n){const t=l.slice();return t[27]=e[n],t}function ie(l,e,n){const t=l.slice();return t[27]=e[n],t}function fe(l,e,n){const t=l.slice();return t[16]=e[n].name,t}function _e(l){let e,n,t,a=l[16]+"",s,o;return{c(){e=R("div"),n=R("span"),t=V(),s=D(a),o=V(),u(n,"class","legend-box svelte-1mjxput"),F(n,"background-color",l[8][l[16]]),u(e,"class","legend-item svelte-1mjxput")},m(_,i){j(_,e,i),y(e,n),y(e,t),y(e,s),y(e,o)},p(_,i){i[0]&260&&F(n,"background-color",_[8][_[16]]),i[0]&4&&a!==(a=_[16]+"")&&G(s,a)},d(_){_&&E(e)}}}function ue(l){let e,n,t,a,s,o,_=l[27]+"",i,v,g;return{c(){e=U("line"),o=U("text"),i=D(_),u(e,"stroke-width","0.5"),u(e,"x1",n=l[5](l[27])),u(e,"x2",t=l[5](l[27])),u(e,"y1",a=l[4](l[9][0]l[9][l[9].length-1]?l[6][1]:l[9][l[9].length-1])),u(e,"stroke","#aaa"),u(o,"class","label-text svelte-1mjxput"),u(o,"text-anchor","middle"),u(o,"x",v=l[5](l[27])),u(o,"y",g=l[4](l[9][0])+30)},m(f,h){j(f,e,h),j(f,o,h),y(o,i)},p(f,h){h[0]&1056&&n!==(n=f[5](f[27]))&&u(e,"x1",n),h[0]&1056&&t!==(t=f[5](f[27]))&&u(e,"x2",t),h[0]&592&&a!==(a=f[4](f[9][0]f[9][f[9].length-1]?f[6][1]:f[9][f[9].length-1]))&&u(e,"y2",s),h[0]&1024&&_!==(_=f[27]+"")&&G(i,_),h[0]&1056&&v!==(v=f[5](f[27]))&&u(o,"x",v),h[0]&528&&g!==(g=f[4](f[9][0])+30)&&u(o,"y",g)},d(f){f&&(E(e),E(o))}}}function ce(l){let e,n,t,a,s,o,_=l[27]+"",i,v,g;return{c(){e=U("line"),o=U("text"),i=D(_),u(e,"stroke-width","0.5"),u(e,"y1",n=l[4](l[27])),u(e,"y2",t=l[4](l[27])),u(e,"x1",a=l[5](l[10][0]l[10][l[10].length-1]?l[7][1]:l[10][l[10].length-1])),u(e,"stroke","#aaa"),u(o,"class","label-text svelte-1mjxput"),u(o,"text-anchor","end"),u(o,"y",v=l[4](l[27])+4),u(o,"x",g=l[5](l[10][0])-20)},m(f,h){j(f,e,h),j(f,o,h),y(o,i)},p(f,h){h[0]&528&&n!==(n=f[4](f[27]))&&u(e,"y1",n),h[0]&528&&t!==(t=f[4](f[27]))&&u(e,"y2",t),h[0]&1184&&a!==(a=f[5](f[10][0]f[10][f[10].length-1]?f[7][1]:f[10][f[10].length-1]))&&u(e,"x2",s),h[0]&512&&_!==(_=f[27]+"")&&G(i,_),h[0]&528&&v!==(v=f[4](f[27])+4)&&u(o,"y",v),h[0]&1056&&g!==(g=f[5](f[10][0])-20)&&u(o,"x",g)},d(f){f&&(E(e),E(o))}}}function me(l){let e,n,t,a,s,o,_=l[6][1]+"",i,v,g;return{c(){e=U("line"),o=U("text"),i=D(_),u(e,"stroke-width","0.5"),u(e,"y1",n=l[4](l[6][1])),u(e,"y2",t=l[4](l[6][1])),u(e,"x1",a=l[5](l[10][0])),u(e,"x2",s=l[5](l[7][1])),u(e,"stroke","#aaa"),u(o,"class","label-text svelte-1mjxput"),u(o,"text-anchor","end"),u(o,"y",v=l[4](l[6][1])+4),u(o,"x",g=l[5](l[10][0])-20)},m(f,h){j(f,e,h),j(f,o,h),y(o,i)},p(f,h){h[0]&80&&n!==(n=f[4](f[6][1]))&&u(e,"y1",n),h[0]&80&&t!==(t=f[4](f[6][1]))&&u(e,"y2",t),h[0]&1056&&a!==(a=f[5](f[10][0]))&&u(e,"x1",a),h[0]&160&&s!==(s=f[5](f[7][1]))&&u(e,"x2",s),h[0]&64&&_!==(_=f[6][1]+"")&&G(i,_),h[0]&80&&v!==(v=f[4](f[6][1])+4)&&u(o,"y",v),h[0]&1056&&g!==(g=f[5](f[10][0])-20)&&u(o,"x",g)},d(f){f&&(E(e),E(o))}}}function he(l){let e,n,t,a;return{c(){e=U("circle"),u(e,"r","3.5"),u(e,"cx",n=l[5](l[0])),u(e,"cy",t=l[4](l[1])),u(e,"stroke-width","1.5"),u(e,"stroke",a=l[18]),u(e,"fill","none")},m(s,o){j(s,e,o)},p(s,o){o[0]&36&&n!==(n=s[5](s[0]))&&u(e,"cx",n),o[0]&20&&t!==(t=s[4](s[1]))&&u(e,"cy",t),o[0]&260&&a!==(a=s[18])&&u(e,"stroke",a)},d(s){s&&E(e)}}}function ge(l){let e,n,t,a=T(l[17]),s=[];for(let o=0;ol[9][l[9].length-1]&&me(l),C=T(l[2]),L=[];for(let c=0;cc[9][c[9].length-1]?d?d.p(c,z):(d=me(c),d.c(),d.m(s,null)):d&&(d.d(1),d=null),z[0]&308){C=T(c[2]);let r;for(r=0;r{b("process",{x:t,y:a})});const k=({x:d,y:C})=>[_(d),i(C)];return l.$$set=d=>{"value"in d&&n(11,f=d.value),"x"in 
d&&n(0,h=d.x),"y"in d&&n(1,A=d.y),"colors"in d&&n(12,m=d.colors)},l.$$.update=()=>{l.$$.dirty[0]&2051&&n(3,{x:t,y:a}=te(typeof f=="string"?qe(f):f,h,A),t,(n(2,a),n(11,f),n(0,h),n(1,A))),l.$$.dirty[0]&8&&n(7,s=le(t)),l.$$.dirty[0]&4&&n(6,o=le(a)),l.$$.dirty[0]&128&&n(5,_=x(s,[0,600]).nice()),l.$$.dirty[0]&64&&n(4,i=x(o,[350,0]).nice()),l.$$.dirty[0]&32&&n(10,v=_.ticks(8)),l.$$.dirty[0]&16&&n(9,g=i.ticks(8)),l.$$.dirty[0]&4&&n(8,p=a.reduce((d,C,L)=>({...d,[C.name]:N(L)}),{}))},[h,A,a,t,i,_,o,s,p,g,v,f,m,k]}class we extends I{constructor(e){super(),J(this,e,Ge,De,K,{value:11,x:0,y:1,colors:12},null,[-1,-1])}}function Ie(l){let e,n;return e=new Se({props:{filetype:"text/csv",include_file_metadata:!1,$$slots:{default:[We]},$$scope:{ctx:l}}}),e.$on("load",l[19]),{c(){O(e.$$.fragment)},m(t,a){X(e,t,a),n=!0},p(t,a){const s={};a&8388608&&(s.$$scope={dirty:a,ctx:t}),e.$set(s)},i(t){n||(B(e.$$.fragment,t),n=!0)},o(t){S(e.$$.fragment,t),n=!1},d(t){q(e,t)}}}function Je(l){let e,n,t,a,s;return n=new Ue({}),n.$on("clear",l[17]),a=new we({props:{value:l[14],y:l[4],x:l[5],colors:l[9]}}),a.$on("process",l[18]),{c(){e=R("div"),O(n.$$.fragment),t=V(),O(a.$$.fragment),u(e,"class","chart svelte-etmurc")},m(o,_){j(o,e,_),X(n,e,null),y(e,t),X(a,e,null),s=!0},p(o,_){const i={};_&16384&&(i.value=o[14]),_&16&&(i.y=o[4]),_&32&&(i.x=o[5]),_&512&&(i.colors=o[9]),a.$set(i)},i(o){s||(B(n.$$.fragment,o),B(a.$$.fragment,o),s=!0)},o(o){S(n.$$.fragment,o),S(a.$$.fragment,o),s=!1},d(o){o&&E(e),q(n),q(a)}}}function Ke(l){let e,n,t,a;const s=[xe,Qe],o=[];function _(i,v){return i[15]?0:1}return e=_(l),n=o[e]=s[e](l),{c(){n.c(),t=Z()},m(i,v){o[e].m(i,v),j(i,t,v),a=!0},p(i,v){let g=e;e=_(i),e===g?o[e].p(i,v):(pe(),S(o[g],1,1,()=>{o[g]=null}),ke(),n=o[e],n?n.p(i,v):(n=o[e]=s[e](i),n.c()),B(n,1),n.m(t.parentNode,t))},i(i){a||(B(n),a=!0)},o(i){S(n),a=!1},d(i){i&&E(t),o[e].d(i)}}}function We(l){let e,n;return e=new He({props:{type:"csv"}}),{c(){O(e.$$.fragment)},m(t,a){X(e,t,a),n=!0},p:P,i(t){n||(B(e.$$.fragment,t),n=!0)},o(t){S(e.$$.fragment,t),n=!1},d(t){q(e,t)}}}function Qe(l){let e,n;return e=new Oe({props:{unpadded_box:!0,size:"large",$$slots:{default:[$e]},$$scope:{ctx:l}}}),{c(){O(e.$$.fragment)},m(t,a){X(e,t,a),n=!0},p(t,a){const s={};a&8388608&&(s.$$scope={dirty:a,ctx:t}),e.$set(s)},i(t){n||(B(e.$$.fragment,t),n=!0)},o(t){S(e.$$.fragment,t),n=!1},d(t){q(e,t)}}}function xe(l){let e,n;return e=new we({props:{value:l[15],colors:l[9]}}),{c(){O(e.$$.fragment)},m(t,a){X(e,t,a),n=!0},p(t,a){const s={};a&32768&&(s.value=t[15]),a&512&&(s.colors=t[9]),e.$set(s)},i(t){n||(B(e.$$.fragment,t),n=!0)},o(t){S(e.$$.fragment,t),n=!1},d(t){q(e,t)}}}function $e(l){let e,n;return e=new ye({}),{c(){O(e.$$.fragment)},m(t,a){X(e,t,a),n=!0},i(t){n||(B(e.$$.fragment,t),n=!0)},o(t){S(e.$$.fragment,t),n=!1},d(t){q(e,t)}}}function el(l){let e,n,t,a,s,o,_,i;e=new Fe({props:{show_label:l[8],Icon:ye,label:l[7]||"TimeSeries"}});const v=[l[13]];let g={};for(let m=0;m{h[k]=null}),ke()),~s?(o=h[s],o?o.p(m,b):(o=h[s]=f[s](m),o.c()),B(o,1),o.m(_.parentNode,_)):o=null)},i(m){i||(B(e.$$.fragment,m),B(t.$$.fragment,m),B(o),i=!0)},o(m){S(e.$$.fragment,m),S(t.$$.fragment,m),S(o),i=!1},d(m){m&&(E(n),E(a),E(_)),q(e,m),q(t,m),~s&&h[s].d(m)}}}function ll(l){let e,n;return e=new Ne({props:{visible:l[3],variant:l[6]==="dynamic"&&!l[14]?"dashed":"solid",padding:!1,elem_id:l[1],elem_classes:l[2],container:l[10],scale:l[11],min_width:l[12],$$slots:{default:[el]},$$scope:{ctx:l}}}),{c(){O(e.$$.fragment)},m(t,a){X(e,t,a),n=!0},p(t,[a]){const 
s={};a&8&&(s.visible=t[3]),a&16448&&(s.variant=t[6]==="dynamic"&&!t[14]?"dashed":"solid"),a&2&&(s.elem_id=t[1]),a&4&&(s.elem_classes=t[2]),a&1024&&(s.container=t[10]),a&2048&&(s.scale=t[11]),a&4096&&(s.min_width=t[12]),a&8446961&&(s.$$scope={dirty:a,ctx:t}),e.$set(s)},i(t){n||(B(e.$$.fragment,t),n=!0)},o(t){S(e.$$.fragment,t),n=!1},d(t){q(e,t)}}}function tl(l){return l.data.map(e=>e.reduce((n,t,a)=>({...n,[l.headers[a]]:t}),{}))}function nl(l){const e=atob(l.split(",")[1]),n=l.split(",")[0].split(":")[1].split(";")[0],t=new ArrayBuffer(e.length),a=new Uint8Array(t);for(let s=0;sn.push(a));for(let a=0;as.push(o[a].y)),t.push(s)}return{headers:n,data:t}}function ol(l,e,n){let t;const a=be();let{elem_id:s=""}=e,{elem_classes:o=[]}=e,{visible:_=!0}=e,{value:i}=e,{y:v}=e,{x:g}=e,{mode:f}=e,{label:h}=e,{show_label:A}=e,{colors:m}=e,{container:b=!0}=e,{scale:p=null}=e,{min_width:N=void 0}=e,{loading_status:k}=e,d;function C(r){const w=new FileReader;w.addEventListener("loadend",W=>{n(14,d=W.srcElement.result)}),w.readAsText(r)}function L(r){r.headers&&n(14,d=r.headers.join(",")),r.data.forEach(W=>{n(14,d=d+`
-`),n(14,d=d+W.join(","))})}function H(r){return n(0,i={data:r}),r}function M({detail:r}){n(0,i=null),a("change"),a("clear")}const c=({detail:{x:r,y:w}})=>n(0,i=al(r,w)),z=({detail:r})=>H(r);return l.$$set=r=>{"elem_id"in r&&n(1,s=r.elem_id),"elem_classes"in r&&n(2,o=r.elem_classes),"visible"in r&&n(3,_=r.visible),"value"in r&&n(0,i=r.value),"y"in r&&n(4,v=r.y),"x"in r&&n(5,g=r.x),"mode"in r&&n(6,f=r.mode),"label"in r&&n(7,h=r.label),"show_label"in r&&n(8,A=r.show_label),"colors"in r&&n(9,m=r.colors),"container"in r&&n(10,b=r.container),"scale"in r&&n(11,p=r.scale),"min_width"in r&&n(12,N=r.min_width),"loading_status"in r&&n(13,k=r.loading_status)},l.$$.update=()=>{l.$$.dirty&1&&(i&&i.data&&typeof i.data=="string"?i?C(nl(i.data)):n(14,d=null):i&&i.data&&typeof i.data!="string"&&(i||n(14,d=null),L(i))),l.$$.dirty&16385&&n(14,d=i==null?null:d),l.$$.dirty&65&&n(15,t=f==="static"&&i&&tl(i)),l.$$.dirty&1&&a("change")},[i,s,o,_,v,g,f,h,A,m,b,p,N,k,d,t,H,M,c,z]}class sl extends I{constructor(e){super(),J(this,e,ol,ll,K,{elem_id:1,elem_classes:2,visible:3,value:0,y:4,x:5,mode:6,label:7,show_label:8,colors:9,container:10,scale:11,min_width:12,loading_status:13})}}const wl=sl,Ll=["static","dynamic"],jl=l=>({type:{payload:"{data: Array> | string; headers?: Array;}"},description:{payload:"dataset of series"}});export{wl as Component,jl as document,Ll as modes};
-//# sourceMappingURL=index-ff630227.js.map
diff --git a/spaces/DataScienceGuild/ARIMA_test/README.md b/spaces/DataScienceGuild/ARIMA_test/README.md
deleted file mode 100644
index 4f31bf941a3542f1c0fb652b9b3bd22e3c64b4c4..0000000000000000000000000000000000000000
--- a/spaces/DataScienceGuild/ARIMA_test/README.md
+++ /dev/null
@@ -1,13 +0,0 @@
----
-title: ARIMA Test
-emoji: 🌖
-colorFrom: green
-colorTo: pink
-sdk: streamlit
-sdk_version: 1.25.0
-app_file: app.py
-pinned: false
-license: mit
----
-
-Check out the configuration reference at https://huggingface.co/docs/hub/spaces-config-reference
diff --git a/spaces/Datasculptor/3D-Room-Layout-Estimation_LGT-Net/models/modules/swg_transformer.py b/spaces/Datasculptor/3D-Room-Layout-Estimation_LGT-Net/models/modules/swg_transformer.py
deleted file mode 100644
index aa368e3616058b30419cc6249862a816f7252fed..0000000000000000000000000000000000000000
--- a/spaces/Datasculptor/3D-Room-Layout-Estimation_LGT-Net/models/modules/swg_transformer.py
+++ /dev/null
@@ -1,49 +0,0 @@
-from models.modules.transformer_modules import *
-
-
-class SWG_Transformer(nn.Module):
- def __init__(self, dim, depth, heads, win_size, dim_head, mlp_dim,
- dropout=0., patch_num=None, ape=None, rpe=None, rpe_pos=1):
- super().__init__()
- self.absolute_pos_embed = None if patch_num is None or ape is None else AbsolutePosition(dim, dropout,
- patch_num, ape)
- self.pos_dropout = nn.Dropout(dropout)
- self.layers = nn.ModuleList([])
- for i in range(depth):
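- # Even layers use window attention (shifted unless the layer index is a
- # multiple of 3); odd layers use global attention over all patches.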
- if i % 2 == 0:
- attention = WinAttention(dim, win_size=win_size, shift=0 if (i % 3 == 0) else win_size // 2,
- heads=heads, dim_head=dim_head, dropout=dropout, rpe=rpe, rpe_pos=rpe_pos)
- else:
- attention = Attention(dim, heads=heads, dim_head=dim_head, dropout=dropout,
- patch_num=patch_num, rpe=rpe, rpe_pos=rpe_pos)
-
- self.layers.append(nn.ModuleList([
- PreNorm(dim, attention),
- PreNorm(dim, FeedForward(dim, mlp_dim, dropout=dropout)),
- ]))
-
- def forward(self, x):
- if self.absolute_pos_embed is not None:
- x = self.absolute_pos_embed(x)
- x = self.pos_dropout(x)
- for attn, ff in self.layers:
- x = attn(x) + x
- x = ff(x) + x
- return x
-
-
-if __name__ == '__main__':
- token_dim = 1024
- token_len = 256
-
- transformer = SWG_Transformer(dim=token_dim,
- depth=6,
- heads=16,
- win_size=8,
- dim_head=64,
- mlp_dim=2048,
- dropout=0.1)
-
- input = torch.randn(1, token_len, token_dim)
- output = transformer(input)
- print(output.shape)
diff --git a/spaces/Dorado607/ChuanhuChatGPT/run_Linux.sh b/spaces/Dorado607/ChuanhuChatGPT/run_Linux.sh
deleted file mode 100644
index 2d26597ae47519f42336ccffc16646713a192ae1..0000000000000000000000000000000000000000
--- a/spaces/Dorado607/ChuanhuChatGPT/run_Linux.sh
+++ /dev/null
@@ -1,31 +0,0 @@
-#!/bin/bash
-
-# Get the directory this script lives in
-script_dir=$(dirname "$(readlink -f "$0")")
-
-# Change the working directory to the script's directory
-cd "$script_dir" || exit
-
-# Check whether the Git repository has updates
-git remote update
-pwd
-
-if ! git status -uno | grep 'up to date' > /dev/null; then
- # If there are updates, stop the currently running server
- pkill -f ChuanhuChatbot.py
-
- # Pull the latest changes
- git pull
-
- # Install dependencies
- pip3 install -r requirements.txt
-
- # Restart the server
- nohup python3 ChuanhuChatbot.py &
-fi
-
-# Check whether ChuanhuChatbot.py is running
-if ! pgrep -f ChuanhuChatbot.py > /dev/null; then
- # If it is not running, start the server
- nohup python3 ChuanhuChatbot.py &
-fi
diff --git a/spaces/DragGan/DragGan-Inversion/PTI/torch_utils/__init__.py b/spaces/DragGan/DragGan-Inversion/PTI/torch_utils/__init__.py
deleted file mode 100644
index ece0ea08fe2e939cc260a1dafc0ab5b391b773d9..0000000000000000000000000000000000000000
--- a/spaces/DragGan/DragGan-Inversion/PTI/torch_utils/__init__.py
+++ /dev/null
@@ -1,9 +0,0 @@
-# Copyright (c) 2021, NVIDIA CORPORATION. All rights reserved.
-#
-# NVIDIA CORPORATION and its licensors retain all intellectual property
-# and proprietary rights in and to this software, related documentation
-# and any modifications thereto. Any use, reproduction, disclosure or
-# distribution of this software and related documentation without an express
-# license agreement from NVIDIA CORPORATION is strictly prohibited.
-
-# empty
diff --git a/spaces/DragGan/DragGan-Inversion/PTI/training/__init__.py b/spaces/DragGan/DragGan-Inversion/PTI/training/__init__.py
deleted file mode 100644
index e69de29bb2d1d6434b8b29ae775ad8c2e48c5391..0000000000000000000000000000000000000000
diff --git a/spaces/EAraid12/LoRA-DreamBooth-Training-UI/README.md b/spaces/EAraid12/LoRA-DreamBooth-Training-UI/README.md
deleted file mode 100644
index b61f96a3f0f5df541bd4e0dfba3a468ceb1c54e9..0000000000000000000000000000000000000000
--- a/spaces/EAraid12/LoRA-DreamBooth-Training-UI/README.md
+++ /dev/null
@@ -1,15 +0,0 @@
----
-title: LoRA DreamBooth Training UI
-emoji: ⚡
-colorFrom: red
-colorTo: purple
-sdk: gradio
-sdk_version: 3.16.2
-python_version: 3.10.9
-app_file: app.py
-pinned: false
-license: mit
-duplicated_from: lora-library/LoRA-DreamBooth-Training-UI
----
-
-Check out the configuration reference at https://huggingface.co/docs/hub/spaces-config-reference
diff --git a/spaces/EXPOSUREEE/Ai-Image-Enhancer/realesrgan/models/realesrgan_model.py b/spaces/EXPOSUREEE/Ai-Image-Enhancer/realesrgan/models/realesrgan_model.py
deleted file mode 100644
index c298a09c42433177f90001a0a31d029576072ccd..0000000000000000000000000000000000000000
--- a/spaces/EXPOSUREEE/Ai-Image-Enhancer/realesrgan/models/realesrgan_model.py
+++ /dev/null
@@ -1,258 +0,0 @@
-import numpy as np
-import random
-import torch
-from basicsr.data.degradations import random_add_gaussian_noise_pt, random_add_poisson_noise_pt
-from basicsr.data.transforms import paired_random_crop
-from basicsr.models.srgan_model import SRGANModel
-from basicsr.utils import DiffJPEG, USMSharp
-from basicsr.utils.img_process_util import filter2D
-from basicsr.utils.registry import MODEL_REGISTRY
-from collections import OrderedDict
-from torch.nn import functional as F
-
-
-@MODEL_REGISTRY.register()
-class RealESRGANModel(SRGANModel):
- """RealESRGAN Model for Real-ESRGAN: Training Real-World Blind Super-Resolution with Pure Synthetic Data.
-
- It mainly performs:
- 1. randomly synthesizing LQ images in GPU tensors
- 2. optimizing the networks with GAN training.
- """
-
- def __init__(self, opt):
- super(RealESRGANModel, self).__init__(opt)
- self.jpeger = DiffJPEG(differentiable=False).cuda() # simulate JPEG compression artifacts
- self.usm_sharpener = USMSharp().cuda() # do usm sharpening
- self.queue_size = opt.get('queue_size', 180)
-
- @torch.no_grad()
- def _dequeue_and_enqueue(self):
- """It is the training pair pool for increasing the diversity in a batch.
-
- Batch processing limits the diversity of synthetic degradations in a batch. For example, samples in a
- batch could not have different resize scaling factors. Therefore, we employ this training pair pool
- to increase the degradation diversity in a batch.
- """
- # initialize
- b, c, h, w = self.lq.size()
- if not hasattr(self, 'queue_lr'):
- assert self.queue_size % b == 0, f'queue size {self.queue_size} should be divisible by batch size {b}'
- self.queue_lr = torch.zeros(self.queue_size, c, h, w).cuda()
- _, c, h, w = self.gt.size()
- self.queue_gt = torch.zeros(self.queue_size, c, h, w).cuda()
- self.queue_ptr = 0
- if self.queue_ptr == self.queue_size: # the pool is full
- # do dequeue and enqueue
- # shuffle
- idx = torch.randperm(self.queue_size)
- self.queue_lr = self.queue_lr[idx]
- self.queue_gt = self.queue_gt[idx]
- # get first b samples
- lq_dequeue = self.queue_lr[0:b, :, :, :].clone()
- gt_dequeue = self.queue_gt[0:b, :, :, :].clone()
- # update the queue
- self.queue_lr[0:b, :, :, :] = self.lq.clone()
- self.queue_gt[0:b, :, :, :] = self.gt.clone()
-
- self.lq = lq_dequeue
- self.gt = gt_dequeue
- else:
- # only do enqueue
- self.queue_lr[self.queue_ptr:self.queue_ptr + b, :, :, :] = self.lq.clone()
- self.queue_gt[self.queue_ptr:self.queue_ptr + b, :, :, :] = self.gt.clone()
- self.queue_ptr = self.queue_ptr + b
-
- @torch.no_grad()
- def feed_data(self, data):
- """Accept data from dataloader, and then add two-order degradations to obtain LQ images.
- """
- if self.is_train and self.opt.get('high_order_degradation', True):
- # training data synthesis
- self.gt = data['gt'].to(self.device)
- self.gt_usm = self.usm_sharpener(self.gt)
-
- self.kernel1 = data['kernel1'].to(self.device)
- self.kernel2 = data['kernel2'].to(self.device)
- self.sinc_kernel = data['sinc_kernel'].to(self.device)
-
- ori_h, ori_w = self.gt.size()[2:4]
-
- # ----------------------- The first degradation process ----------------------- #
- # blur
- out = filter2D(self.gt_usm, self.kernel1)
- # random resize
- updown_type = random.choices(['up', 'down', 'keep'], self.opt['resize_prob'])[0]
- if updown_type == 'up':
- scale = np.random.uniform(1, self.opt['resize_range'][1])
- elif updown_type == 'down':
- scale = np.random.uniform(self.opt['resize_range'][0], 1)
- else:
- scale = 1
- mode = random.choice(['area', 'bilinear', 'bicubic'])
- out = F.interpolate(out, scale_factor=scale, mode=mode)
- # add noise
- gray_noise_prob = self.opt['gray_noise_prob']
- if np.random.uniform() < self.opt['gaussian_noise_prob']:
- out = random_add_gaussian_noise_pt(
- out, sigma_range=self.opt['noise_range'], clip=True, rounds=False, gray_prob=gray_noise_prob)
- else:
- out = random_add_poisson_noise_pt(
- out,
- scale_range=self.opt['poisson_scale_range'],
- gray_prob=gray_noise_prob,
- clip=True,
- rounds=False)
- # JPEG compression
- jpeg_p = out.new_zeros(out.size(0)).uniform_(*self.opt['jpeg_range'])
- out = torch.clamp(out, 0, 1) # clamp to [0, 1], otherwise JPEGer will result in unpleasant artifacts
- out = self.jpeger(out, quality=jpeg_p)
-
- # ----------------------- The second degradation process ----------------------- #
- # blur
- if np.random.uniform() < self.opt['second_blur_prob']:
- out = filter2D(out, self.kernel2)
- # random resize
- updown_type = random.choices(['up', 'down', 'keep'], self.opt['resize_prob2'])[0]
- if updown_type == 'up':
- scale = np.random.uniform(1, self.opt['resize_range2'][1])
- elif updown_type == 'down':
- scale = np.random.uniform(self.opt['resize_range2'][0], 1)
- else:
- scale = 1
- mode = random.choice(['area', 'bilinear', 'bicubic'])
- out = F.interpolate(
- out, size=(int(ori_h / self.opt['scale'] * scale), int(ori_w / self.opt['scale'] * scale)), mode=mode)
- # add noise
- gray_noise_prob = self.opt['gray_noise_prob2']
- if np.random.uniform() < self.opt['gaussian_noise_prob2']:
- out = random_add_gaussian_noise_pt(
- out, sigma_range=self.opt['noise_range2'], clip=True, rounds=False, gray_prob=gray_noise_prob)
- else:
- out = random_add_poisson_noise_pt(
- out,
- scale_range=self.opt['poisson_scale_range2'],
- gray_prob=gray_noise_prob,
- clip=True,
- rounds=False)
-
- # JPEG compression + the final sinc filter
- # We also need to resize images to desired sizes. We group [resize back + sinc filter] together
- # as one operation.
- # We consider two orders:
- # 1. [resize back + sinc filter] + JPEG compression
- # 2. JPEG compression + [resize back + sinc filter]
- # Empirically, we find other combinations (sinc + JPEG + Resize) will introduce twisted lines.
- if np.random.uniform() < 0.5:
- # resize back + the final sinc filter
- mode = random.choice(['area', 'bilinear', 'bicubic'])
- out = F.interpolate(out, size=(ori_h // self.opt['scale'], ori_w // self.opt['scale']), mode=mode)
- out = filter2D(out, self.sinc_kernel)
- # JPEG compression
- jpeg_p = out.new_zeros(out.size(0)).uniform_(*self.opt['jpeg_range2'])
- out = torch.clamp(out, 0, 1)
- out = self.jpeger(out, quality=jpeg_p)
- else:
- # JPEG compression
- jpeg_p = out.new_zeros(out.size(0)).uniform_(*self.opt['jpeg_range2'])
- out = torch.clamp(out, 0, 1)
- out = self.jpeger(out, quality=jpeg_p)
- # resize back + the final sinc filter
- mode = random.choice(['area', 'bilinear', 'bicubic'])
- out = F.interpolate(out, size=(ori_h // self.opt['scale'], ori_w // self.opt['scale']), mode=mode)
- out = filter2D(out, self.sinc_kernel)
-
- # clamp and round
- self.lq = torch.clamp((out * 255.0).round(), 0, 255) / 255.
-
- # random crop
- gt_size = self.opt['gt_size']
- (self.gt, self.gt_usm), self.lq = paired_random_crop([self.gt, self.gt_usm], self.lq, gt_size,
- self.opt['scale'])
-
- # training pair pool
- self._dequeue_and_enqueue()
- # sharpen self.gt again, as we have changed the self.gt with self._dequeue_and_enqueue
- self.gt_usm = self.usm_sharpener(self.gt)
- self.lq = self.lq.contiguous() # for the warning: grad and param do not obey the gradient layout contract
- else:
- # for paired training or validation
- self.lq = data['lq'].to(self.device)
- if 'gt' in data:
- self.gt = data['gt'].to(self.device)
- self.gt_usm = self.usm_sharpener(self.gt)
-
- def nondist_validation(self, dataloader, current_iter, tb_logger, save_img):
- # do not use the synthetic process during validation
- self.is_train = False
- super(RealESRGANModel, self).nondist_validation(dataloader, current_iter, tb_logger, save_img)
- self.is_train = True
-
- def optimize_parameters(self, current_iter):
- # usm sharpening
- l1_gt = self.gt_usm
- percep_gt = self.gt_usm
- gan_gt = self.gt_usm
- if self.opt['l1_gt_usm'] is False:
- l1_gt = self.gt
- if self.opt['percep_gt_usm'] is False:
- percep_gt = self.gt
- if self.opt['gan_gt_usm'] is False:
- gan_gt = self.gt
-
- # optimize net_g
- for p in self.net_d.parameters():
- p.requires_grad = False
-
- self.optimizer_g.zero_grad()
- self.output = self.net_g(self.lq)
-
- l_g_total = 0
- loss_dict = OrderedDict()
- if (current_iter % self.net_d_iters == 0 and current_iter > self.net_d_init_iters):
- # pixel loss
- if self.cri_pix:
- l_g_pix = self.cri_pix(self.output, l1_gt)
- l_g_total += l_g_pix
- loss_dict['l_g_pix'] = l_g_pix
- # perceptual loss
- if self.cri_perceptual:
- l_g_percep, l_g_style = self.cri_perceptual(self.output, percep_gt)
- if l_g_percep is not None:
- l_g_total += l_g_percep
- loss_dict['l_g_percep'] = l_g_percep
- if l_g_style is not None:
- l_g_total += l_g_style
- loss_dict['l_g_style'] = l_g_style
- # gan loss
- fake_g_pred = self.net_d(self.output)
- l_g_gan = self.cri_gan(fake_g_pred, True, is_disc=False)
- l_g_total += l_g_gan
- loss_dict['l_g_gan'] = l_g_gan
-
- l_g_total.backward()
- self.optimizer_g.step()
-
- # optimize net_d
- for p in self.net_d.parameters():
- p.requires_grad = True
-
- self.optimizer_d.zero_grad()
- # real
- real_d_pred = self.net_d(gan_gt)
- l_d_real = self.cri_gan(real_d_pred, True, is_disc=True)
- loss_dict['l_d_real'] = l_d_real
- loss_dict['out_d_real'] = torch.mean(real_d_pred.detach())
- l_d_real.backward()
- # fake
- fake_d_pred = self.net_d(self.output.detach().clone()) # clone for pt1.9
- l_d_fake = self.cri_gan(fake_d_pred, False, is_disc=True)
- loss_dict['l_d_fake'] = l_d_fake
- loss_dict['out_d_fake'] = torch.mean(fake_d_pred.detach())
- l_d_fake.backward()
- self.optimizer_d.step()
-
- if self.ema_decay > 0:
- self.model_ema(decay=self.ema_decay)
-
- self.log_dict = self.reduce_loss_dict(loss_dict)
diff --git a/spaces/Eddycrack864/Applio-Inference/infer/lib/uvr5_pack/lib_v5/layers_new.py b/spaces/Eddycrack864/Applio-Inference/infer/lib/uvr5_pack/lib_v5/layers_new.py
deleted file mode 100644
index 44153b6a23399c6938affc61c71919eaa172bcee..0000000000000000000000000000000000000000
--- a/spaces/Eddycrack864/Applio-Inference/infer/lib/uvr5_pack/lib_v5/layers_new.py
+++ /dev/null
@@ -1,125 +0,0 @@
-import torch
-import torch.nn.functional as F
-from torch import nn
-
-from . import spec_utils
-
-
-class Conv2DBNActiv(nn.Module):
- def __init__(self, nin, nout, ksize=3, stride=1, pad=1, dilation=1, activ=nn.ReLU):
- super(Conv2DBNActiv, self).__init__()
- self.conv = nn.Sequential(
- nn.Conv2d(
- nin,
- nout,
- kernel_size=ksize,
- stride=stride,
- padding=pad,
- dilation=dilation,
- bias=False,
- ),
- nn.BatchNorm2d(nout),
- activ(),
- )
-
- def __call__(self, x):
- return self.conv(x)
-
-
-class Encoder(nn.Module):
- def __init__(self, nin, nout, ksize=3, stride=1, pad=1, activ=nn.LeakyReLU):
- super(Encoder, self).__init__()
- self.conv1 = Conv2DBNActiv(nin, nout, ksize, stride, pad, activ=activ)
- self.conv2 = Conv2DBNActiv(nout, nout, ksize, 1, pad, activ=activ)
-
- def __call__(self, x):
- h = self.conv1(x)
- h = self.conv2(h)
-
- return h
-
-
-class Decoder(nn.Module):
- def __init__(
- self, nin, nout, ksize=3, stride=1, pad=1, activ=nn.ReLU, dropout=False
- ):
- super(Decoder, self).__init__()
- self.conv1 = Conv2DBNActiv(nin, nout, ksize, 1, pad, activ=activ)
- # self.conv2 = Conv2DBNActiv(nout, nout, ksize, 1, pad, activ=activ)
- self.dropout = nn.Dropout2d(0.1) if dropout else None
-
- def __call__(self, x, skip=None):
- x = F.interpolate(x, scale_factor=2, mode="bilinear", align_corners=True)
-
- if skip is not None:
- skip = spec_utils.crop_center(skip, x)
- x = torch.cat([x, skip], dim=1)
-
- h = self.conv1(x)
- # h = self.conv2(h)
-
- if self.dropout is not None:
- h = self.dropout(h)
-
- return h
-
-
-class ASPPModule(nn.Module):
- def __init__(self, nin, nout, dilations=(4, 8, 12), activ=nn.ReLU, dropout=False):
- super(ASPPModule, self).__init__()
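- # Atrous Spatial Pyramid Pooling: a pooled 1x1 branch, a plain 1x1 branch,
- # and three dilated 3x3 branches, concatenated and fused by a 1x1 bottleneck.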
- self.conv1 = nn.Sequential(
- nn.AdaptiveAvgPool2d((1, None)),
- Conv2DBNActiv(nin, nout, 1, 1, 0, activ=activ),
- )
- self.conv2 = Conv2DBNActiv(nin, nout, 1, 1, 0, activ=activ)
- self.conv3 = Conv2DBNActiv(
- nin, nout, 3, 1, dilations[0], dilations[0], activ=activ
- )
- self.conv4 = Conv2DBNActiv(
- nin, nout, 3, 1, dilations[1], dilations[1], activ=activ
- )
- self.conv5 = Conv2DBNActiv(
- nin, nout, 3, 1, dilations[2], dilations[2], activ=activ
- )
- self.bottleneck = Conv2DBNActiv(nout * 5, nout, 1, 1, 0, activ=activ)
- self.dropout = nn.Dropout2d(0.1) if dropout else None
-
- def forward(self, x):
- _, _, h, w = x.size()
- feat1 = F.interpolate(
- self.conv1(x), size=(h, w), mode="bilinear", align_corners=True
- )
- feat2 = self.conv2(x)
- feat3 = self.conv3(x)
- feat4 = self.conv4(x)
- feat5 = self.conv5(x)
- out = torch.cat((feat1, feat2, feat3, feat4, feat5), dim=1)
- out = self.bottleneck(out)
-
- if self.dropout is not None:
- out = self.dropout(out)
-
- return out
-
-
-class LSTMModule(nn.Module):
- def __init__(self, nin_conv, nin_lstm, nout_lstm):
- super(LSTMModule, self).__init__()
- self.conv = Conv2DBNActiv(nin_conv, 1, 1, 1, 0)
- self.lstm = nn.LSTM(
- input_size=nin_lstm, hidden_size=nout_lstm // 2, bidirectional=True
- )
- self.dense = nn.Sequential(
- nn.Linear(nout_lstm, nin_lstm), nn.BatchNorm1d(nin_lstm), nn.ReLU()
- )
-
- def forward(self, x):
- N, _, nbins, nframes = x.size()
- h = self.conv(x)[:, 0] # N, nbins, nframes
- h = h.permute(2, 0, 1) # nframes, N, nbins
- h, _ = self.lstm(h)
- h = self.dense(h.reshape(-1, h.size()[-1])) # nframes * N, nbins
- h = h.reshape(nframes, N, 1, nbins)
- h = h.permute(1, 2, 3, 0)
-
- return h
diff --git a/spaces/EronSamez/RVC_HFmeu/demucs/audio.py b/spaces/EronSamez/RVC_HFmeu/demucs/audio.py
deleted file mode 100644
index b29f156e4afb5fbda32c35777022caeadf50d711..0000000000000000000000000000000000000000
--- a/spaces/EronSamez/RVC_HFmeu/demucs/audio.py
+++ /dev/null
@@ -1,172 +0,0 @@
-# Copyright (c) Facebook, Inc. and its affiliates.
-# All rights reserved.
-#
-# This source code is licensed under the license found in the
-# LICENSE file in the root directory of this source tree.
-import json
-import subprocess as sp
-from pathlib import Path
-
-import julius
-import numpy as np
-import torch
-
-from .utils import temp_filenames
-
-
-def _read_info(path):
- stdout_data = sp.check_output([
- 'ffprobe', "-loglevel", "panic",
- str(path), '-print_format', 'json', '-show_format', '-show_streams'
- ])
- return json.loads(stdout_data.decode('utf-8'))
-
-
-class AudioFile:
- """
- Allows reading audio from any format supported by ffmpeg, as well as resampling or
- converting to mono on the fly. See :method:`read` for more details.
- """
- def __init__(self, path: Path):
- self.path = Path(path)
- self._info = None
-
- def __repr__(self):
- features = [("path", self.path)]
- features.append(("samplerate", self.samplerate()))
- features.append(("channels", self.channels()))
- features.append(("streams", len(self)))
- features_str = ", ".join(f"{name}={value}" for name, value in features)
- return f"AudioFile({features_str})"
-
- @property
- def info(self):
- if self._info is None:
- self._info = _read_info(self.path)
- return self._info
-
- @property
- def duration(self):
- return float(self.info['format']['duration'])
-
- @property
- def _audio_streams(self):
- return [
- index for index, stream in enumerate(self.info["streams"])
- if stream["codec_type"] == "audio"
- ]
-
- def __len__(self):
- return len(self._audio_streams)
-
- def channels(self, stream=0):
- return int(self.info['streams'][self._audio_streams[stream]]['channels'])
-
- def samplerate(self, stream=0):
- return int(self.info['streams'][self._audio_streams[stream]]['sample_rate'])
-
- def read(self,
- seek_time=None,
- duration=None,
- streams=slice(None),
- samplerate=None,
- channels=None,
- temp_folder=None):
- """
- Slightly more efficient implementation than stempeg;
- in particular, this extracts all stems at once
- rather than looping over the same file multiple times,
- once per stream.
-
- Args:
- seek_time (float): seek time in seconds or None if no seeking is needed.
- duration (float): duration in seconds to extract or None to extract until the end.
- streams (slice, int or list): streams to extract, can be a single int, a list or
- a slice. If it is a slice or list, the output will be of size [S, C, T]
- with S the number of streams, C the number of channels and T the number of samples.
- If it is an int, the output will be [C, T].
- samplerate (int): if provided, will resample on the fly. If None, no resampling will
- be done. Original sampling rate can be obtained with :method:`samplerate`.
- channels (int): if 1, will convert to mono. We do not rely on ffmpeg for that
- as ffmpeg automatically scales by +3dB to conserve volume when playing on speakers.
- See https://sound.stackexchange.com/a/42710.
- Our definition of mono is simply the average of the two channels. Any other
- value will be ignored.
- temp_folder (str or Path or None): temporary folder to use for decoding.
-
-
- """
- streams = np.array(range(len(self)))[streams]
- single = not isinstance(streams, np.ndarray)
- if single:
- streams = [streams]
-
- if duration is None:
- target_size = None
- query_duration = None
- else:
- target_size = int((samplerate or self.samplerate()) * duration)
- query_duration = float((target_size + 1) / (samplerate or self.samplerate()))
-
- with temp_filenames(len(streams)) as filenames:
- command = ['ffmpeg', '-y']
- command += ['-loglevel', 'panic']
- if seek_time:
- command += ['-ss', str(seek_time)]
- command += ['-i', str(self.path)]
- for stream, filename in zip(streams, filenames):
- command += ['-map', f'0:{self._audio_streams[stream]}']
- if query_duration is not None:
- command += ['-t', str(query_duration)]
- command += ['-threads', '1']
- command += ['-f', 'f32le']
- if samplerate is not None:
- command += ['-ar', str(samplerate)]
- command += [filename]
-
- sp.run(command, check=True)
- wavs = []
- for filename in filenames:
- wav = np.fromfile(filename, dtype=np.float32)
- wav = torch.from_numpy(wav)
- wav = wav.view(-1, self.channels()).t()
- if channels is not None:
- wav = convert_audio_channels(wav, channels)
- if target_size is not None:
- wav = wav[..., :target_size]
- wavs.append(wav)
- wav = torch.stack(wavs, dim=0)
- if single:
- wav = wav[0]
- return wav
-
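-# Usage sketch (illustrative only; the file name and parameters are assumptions):
-#
-# af = AudioFile("mix.wav")
-# wav = af.read(streams=0, samplerate=44100, channels=2) # tensor of shape [C, T]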
-
-def convert_audio_channels(wav, channels=2):
- """Convert audio to the given number of channels."""
- *shape, src_channels, length = wav.shape
- if src_channels == channels:
- pass
- elif channels == 1:
- # Case 1:
- # The caller asked for 1-channel audio, but the stream has multiple
- # channels; downmix all of them.
- wav = wav.mean(dim=-2, keepdim=True)
- elif src_channels == 1:
- # Case 2:
- # The caller asked for multiple channels, but the input file has
- # a single channel; replicate the audio over all channels.
- wav = wav.expand(*shape, channels, length)
- elif src_channels >= channels:
- # Case 3:
- # The caller asked for multiple channels, and the input file has
- # more channels than requested. In that case, return the first channels.
- wav = wav[..., :channels, :]
- else:
- # Case 4: What is a reasonable choice here?
- raise ValueError('The audio file has fewer channels than requested but is not mono.')
- return wav
-
-
-def convert_audio(wav, from_samplerate, to_samplerate, channels):
- wav = convert_audio_channels(wav, channels)
- return julius.resample_frac(wav, from_samplerate, to_samplerate)
diff --git a/spaces/EuroPython2022/pyro-vision/README.md b/spaces/EuroPython2022/pyro-vision/README.md
deleted file mode 100644
index c7e4d5bc1a24491d4d007000c316cb8fd77d14be..0000000000000000000000000000000000000000
--- a/spaces/EuroPython2022/pyro-vision/README.md
+++ /dev/null
@@ -1,13 +0,0 @@
----
-title: PyroVision
-emoji: 🔥
-colorFrom: green
-colorTo: brown
-sdk: gradio
-sdk_version: 3.0.24
-app_file: app.py
-pinned: false
-license: apache-2.0
----
-
-Check out the configuration reference at https://huggingface.co/docs/hub/spaces-config-reference
diff --git a/spaces/FYP-23-S1-21/Refineverse_Plugin/static/ProjectBreakdown.css b/spaces/FYP-23-S1-21/Refineverse_Plugin/static/ProjectBreakdown.css
deleted file mode 100644
index 25092d21a6ff0bf16f386ad715cd4b35b1542782..0000000000000000000000000000000000000000
--- a/spaces/FYP-23-S1-21/Refineverse_Plugin/static/ProjectBreakdown.css
+++ /dev/null
@@ -1,190 +0,0 @@
-body {
- background-image: url("../static/Images/Background.jpg");
- background-repeat: no-repeat;
- background-size: cover;
-}
-* {
- margin: 0;
- padding: 0;
- box-sizing: border-box;
-}
-
-header {
- display: flex;
- align-items: center;
- zoom: 130%;
- padding: 15px;
-}
-
-header img {
- /* Original width & height is 70px */
- width: 200px;
- height: 200px;
- margin-left: 300px;
-}
-
-header h1 {
- margin-left: 50px;
- font-size: 40px;
- color: rgb(26, 25, 25);
-}
-
-main {
- display: flex;
- justify-content: space-between;
- margin-top: 20px;
- margin-bottom: 20px;
-}
-
-.user-story {
- flex-basis: 30%;
- margin-left: 50px;
-}
-
-.user-story h2 {
- margin-bottom: 10px;
-}
-
-textarea {
- width: 900px;
- height: 400px;
- padding: 10px;
- border: 1px solid #ccc;
- border-radius: 5px;
- resize: none !important;
- margin-bottom: 10px;
-
-}
-
-table {
- flex-basis: 60%;
- margin-right: 20px;
- border: 1px solid #ccc;
- border-radius: 5px;
- overflow-y: scroll; /* To make the table scrollable */
- height: 200px; /* To match text area size */
- display: block;
- border-collapse: separate; /* Added to separate the border between the headers and the data cells */
- border-spacing: 0; /* Added to remove the extra space between the border */
-}
-
-#breakdown-table {
- width: 900px;
- height: 400px;
- margin-left: 20px;
-}
-
-#breakdown-table th,
-#breakdown-table td {
- border: 1px solid #ddd;
- padding: 8px;
- text-align: left;
-}
-
-#breakdown-table th:first-child {
- border-left: none;
-}
-
-#breakdown-table th:last-child {
- border-right: none;
-}
-
-#breakdown-table th:not(:first-child) {
- border-left: none;
- border-right: none;
-}
-
-#breakdown-table th div {
- border-bottom: 1px solid #ddd;
- padding: 8px;
-}
-
-#breakdown-table td div {
- padding: 8px;
-}
-
-#breakdown-table thead th {
- background-color: #f2f2f2;
-}
-
-#breakdown-table tbody tr:nth-child(even) {
- background-color: #f2f2f2;
-}
-
-#breakdown-table tbody tr:hover {
- background-color: #ddd;
-}
-
-
-#clear-btn {
- background-color: #d3d5d6;
- color: rgb(32, 31, 31);
- border: 2px;
- border-radius: 5px;
- padding: 10px;
- cursor: pointer;
-}
-
-#breakdown-btn {
- background-color: #2f3030;
- color: white;
- border: none;
- border-radius: 5px;
- padding: 10px;
- cursor: pointer;
- /* Added these 2 lines to make the button appear at the bottom-right of the
- user story contents. This may not be the cleanest approach, but it renders correctly. */
- position: absolute;
- left: 1750px;
-}
-
-.user-story-list {
- flex-basis: 60%;
- margin-right: 20px;
-}
-
-.user-story-list h2 {
- margin-bottom: 10px;
-}
-
-.scrollable-box {
- height: 200px;
- overflow-y: auto;
- border: 1px solid #ccc;
- border-radius: 5px;
- resize: none;
-}
-
-#user-story-ul {
- list-style: none;
- padding: 10px;
-}
-
-.back-Btn-Container {
- display: flex;
- justify-content: end;
- align-items: end;
- padding: 0 20px;
- margin-top: 20px;
-}
-
-.buttons-container {
- display: flex;
- justify-content: space-between;
- align-items: center;
- padding: 0 20px;
- margin-top: 20px;
-}
-
-.back-btn {
- background-color: #555;
- color: #fff;
- padding: 10px 20px;
- border: none;
- border-radius: 5px;
- font-size: 16px;
- cursor: pointer;
- margin-right: 150px;
- width: 110px;
- height: 40px;
-}
\ No newline at end of file
diff --git a/spaces/Faridmaruf/rvc-Blue-archives/lib/infer_pack/commons.py b/spaces/Faridmaruf/rvc-Blue-archives/lib/infer_pack/commons.py
deleted file mode 100644
index 54470986f37825b35d90d7efa7437d1c26b87215..0000000000000000000000000000000000000000
--- a/spaces/Faridmaruf/rvc-Blue-archives/lib/infer_pack/commons.py
+++ /dev/null
@@ -1,166 +0,0 @@
-import math
-import numpy as np
-import torch
-from torch import nn
-from torch.nn import functional as F
-
-
-def init_weights(m, mean=0.0, std=0.01):
- classname = m.__class__.__name__
- if classname.find("Conv") != -1:
- m.weight.data.normal_(mean, std)
-
-
-def get_padding(kernel_size, dilation=1):
- return int((kernel_size * dilation - dilation) / 2)
-
-
-def convert_pad_shape(pad_shape):
- l = pad_shape[::-1]
- pad_shape = [item for sublist in l for item in sublist]
- return pad_shape
-
-
-def kl_divergence(m_p, logs_p, m_q, logs_q):
- """KL(P||Q)"""
- kl = (logs_q - logs_p) - 0.5
- kl += (
- 0.5 * (torch.exp(2.0 * logs_p) + ((m_p - m_q) ** 2)) * torch.exp(-2.0 * logs_q)
- )
- return kl
-
-
-def rand_gumbel(shape):
- """Sample from the Gumbel distribution, protect from overflows."""
- uniform_samples = torch.rand(shape) * 0.99998 + 0.00001
- return -torch.log(-torch.log(uniform_samples))
-
-
-def rand_gumbel_like(x):
- g = rand_gumbel(x.size()).to(dtype=x.dtype, device=x.device)
- return g
-
-
-def slice_segments(x, ids_str, segment_size=4):
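- # Gather one slice of length `segment_size` per batch element along the
- # last dimension, each starting at its own offset ids_str[i].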
- ret = torch.zeros_like(x[:, :, :segment_size])
- for i in range(x.size(0)):
- idx_str = ids_str[i]
- idx_end = idx_str + segment_size
- ret[i] = x[i, :, idx_str:idx_end]
- return ret
-
-
-def slice_segments2(x, ids_str, segment_size=4):
- ret = torch.zeros_like(x[:, :segment_size])
- for i in range(x.size(0)):
- idx_str = ids_str[i]
- idx_end = idx_str + segment_size
- ret[i] = x[i, idx_str:idx_end]
- return ret
-
-
-def rand_slice_segments(x, x_lengths=None, segment_size=4):
- b, d, t = x.size()
- if x_lengths is None:
- x_lengths = t
- ids_str_max = x_lengths - segment_size + 1
- ids_str = (torch.rand([b]).to(device=x.device) * ids_str_max).to(dtype=torch.long)
- ret = slice_segments(x, ids_str, segment_size)
- return ret, ids_str
-
-
-def get_timing_signal_1d(length, channels, min_timescale=1.0, max_timescale=1.0e4):
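- # Sinusoidal positional encoding ("Attention Is All You Need" style),
- # returned with shape [1, channels, length].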
- position = torch.arange(length, dtype=torch.float)
- num_timescales = channels // 2
- log_timescale_increment = math.log(float(max_timescale) / float(min_timescale)) / (
- num_timescales - 1
- )
- inv_timescales = min_timescale * torch.exp(
- torch.arange(num_timescales, dtype=torch.float) * -log_timescale_increment
- )
- scaled_time = position.unsqueeze(0) * inv_timescales.unsqueeze(1)
- signal = torch.cat([torch.sin(scaled_time), torch.cos(scaled_time)], 0)
- signal = F.pad(signal, [0, 0, 0, channels % 2])
- signal = signal.view(1, channels, length)
- return signal
-
-
-def add_timing_signal_1d(x, min_timescale=1.0, max_timescale=1.0e4):
- b, channels, length = x.size()
- signal = get_timing_signal_1d(length, channels, min_timescale, max_timescale)
- return x + signal.to(dtype=x.dtype, device=x.device)
-
-
-def cat_timing_signal_1d(x, min_timescale=1.0, max_timescale=1.0e4, axis=1):
- b, channels, length = x.size()
- signal = get_timing_signal_1d(length, channels, min_timescale, max_timescale)
- return torch.cat([x, signal.to(dtype=x.dtype, device=x.device)], axis)
-
-
-def subsequent_mask(length):
- mask = torch.tril(torch.ones(length, length)).unsqueeze(0).unsqueeze(0)
- return mask
-
-
-@torch.jit.script
-def fused_add_tanh_sigmoid_multiply(input_a, input_b, n_channels):
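- # WaveNet-style gated activation: tanh over the first n_channels of the
- # summed inputs, multiplied by a sigmoid gate over the remaining channels.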
- n_channels_int = n_channels[0]
- in_act = input_a + input_b
- t_act = torch.tanh(in_act[:, :n_channels_int, :])
- s_act = torch.sigmoid(in_act[:, n_channels_int:, :])
- acts = t_act * s_act
- return acts
-
-
-def convert_pad_shape(pad_shape):
- l = pad_shape[::-1]
- pad_shape = [item for sublist in l for item in sublist]
- return pad_shape
-
-
-def shift_1d(x):
- x = F.pad(x, convert_pad_shape([[0, 0], [0, 0], [1, 0]]))[:, :, :-1]
- return x
-
-
-def sequence_mask(length, max_length=None):
- if max_length is None:
- max_length = length.max()
- x = torch.arange(max_length, dtype=length.dtype, device=length.device)
- return x.unsqueeze(0) < length.unsqueeze(1)
-
-
-def generate_path(duration, mask):
- """
- duration: [b, 1, t_x]
- mask: [b, 1, t_y, t_x]
- """
- device = duration.device
-
- b, _, t_y, t_x = mask.shape
- cum_duration = torch.cumsum(duration, -1)
-
- cum_duration_flat = cum_duration.view(b * t_x)
- path = sequence_mask(cum_duration_flat, t_y).to(mask.dtype)
- path = path.view(b, t_x, t_y)
- path = path - F.pad(path, convert_pad_shape([[0, 0], [1, 0], [0, 0]]))[:, :-1]
- path = path.unsqueeze(1).transpose(2, 3) * mask
- return path
-
-
-def clip_grad_value_(parameters, clip_value, norm_type=2):
- if isinstance(parameters, torch.Tensor):
- parameters = [parameters]
- parameters = list(filter(lambda p: p.grad is not None, parameters))
- norm_type = float(norm_type)
- if clip_value is not None:
- clip_value = float(clip_value)
-
- total_norm = 0
- for p in parameters:
- param_norm = p.grad.data.norm(norm_type)
- total_norm += param_norm.item() ** norm_type
- if clip_value is not None:
- p.grad.data.clamp_(min=-clip_value, max=clip_value)
- total_norm = total_norm ** (1.0 / norm_type)
- return total_norm
diff --git a/spaces/Fatima990/text_generator1/app.py b/spaces/Fatima990/text_generator1/app.py
deleted file mode 100644
index f1d4beb0a8f3cee27903f527b6bf8daa485a75a0..0000000000000000000000000000000000000000
--- a/spaces/Fatima990/text_generator1/app.py
+++ /dev/null
@@ -1,3 +0,0 @@
-import gradio as gr
-
-gr.Interface.load("huggingface/gpt2").launch()
\ No newline at end of file
diff --git a/spaces/FrankZxShen/vits-fast-finetuning-umamusume/text/sanskrit.py b/spaces/FrankZxShen/vits-fast-finetuning-umamusume/text/sanskrit.py
deleted file mode 100644
index 0223aaac384a2f850f5bc20651fc18eb964607d0..0000000000000000000000000000000000000000
--- a/spaces/FrankZxShen/vits-fast-finetuning-umamusume/text/sanskrit.py
+++ /dev/null
@@ -1,62 +0,0 @@
-import re
-from indic_transliteration import sanscript
-
-
-# List of (iast, ipa) pairs:
-_iast_to_ipa = [(re.compile('%s' % x[0]), x[1]) for x in [
- ('a', 'ə'),
- ('ā', 'aː'),
- ('ī', 'iː'),
- ('ū', 'uː'),
- ('ṛ', 'ɹ`'),
- ('ṝ', 'ɹ`ː'),
- ('ḷ', 'l`'),
- ('ḹ', 'l`ː'),
- ('e', 'eː'),
- ('o', 'oː'),
- ('k', 'k⁼'),
- ('k⁼h', 'kʰ'),
- ('g', 'g⁼'),
- ('g⁼h', 'gʰ'),
- ('ṅ', 'ŋ'),
- ('c', 'ʧ⁼'),
- ('ʧ⁼h', 'ʧʰ'),
- ('j', 'ʥ⁼'),
- ('ʥ⁼h', 'ʥʰ'),
- ('ñ', 'n^'),
- ('ṭ', 't`⁼'),
- ('t`⁼h', 't`ʰ'),
- ('ḍ', 'd`⁼'),
- ('d`⁼h', 'd`ʰ'),
- ('ṇ', 'n`'),
- ('t', 't⁼'),
- ('t⁼h', 'tʰ'),
- ('d', 'd⁼'),
- ('d⁼h', 'dʰ'),
- ('p', 'p⁼'),
- ('p⁼h', 'pʰ'),
- ('b', 'b⁼'),
- ('b⁼h', 'bʰ'),
- ('y', 'j'),
- ('ś', 'ʃ'),
- ('ṣ', 's`'),
- ('r', 'ɾ'),
- ('l̤', 'l`'),
- ('h', 'ɦ'),
- ("'", ''),
- ('~', '^'),
- ('ṃ', '^')
-]]
-
-
-def devanagari_to_ipa(text):
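- # Normalize Devanagari punctuation (Om ligature, danda/double danda),
- # transliterate to IAST, apply the IAST->IPA rules above, then expand
- # visarga (ḥ) by echoing the preceding vowel after 'h'.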
- text = text.replace('ॐ', 'ओम्')
- text = re.sub(r'\s*।\s*$', '.', text)
- text = re.sub(r'\s*।\s*', ', ', text)
- text = re.sub(r'\s*॥', '.', text)
- text = sanscript.transliterate(text, sanscript.DEVANAGARI, sanscript.IAST)
- for regex, replacement in _iast_to_ipa:
- text = re.sub(regex, replacement, text)
- text = re.sub('(.)[`ː]*ḥ', lambda x: x.group(0)
- [:-1]+'h'+x.group(1)+'*', text)
- return text
diff --git a/spaces/GeorgeOrville/bingo/src/components/chat-history.tsx b/spaces/GeorgeOrville/bingo/src/components/chat-history.tsx
deleted file mode 100644
index feb81de66562edda8f40d3c0cc717202c92b6509..0000000000000000000000000000000000000000
--- a/spaces/GeorgeOrville/bingo/src/components/chat-history.tsx
+++ /dev/null
@@ -1,48 +0,0 @@
-import { IconEdit, IconTrash, IconMore, IconDownload } from "./ui/icons"
-
-export function ChatHistory() {
- return (
- <div className="chat-history">
- {/* Placeholder reconstruction: the original tags were stripped from this
- file, so the element structure and class names below are assumptions
- built around the surviving text and imported icons. */}
- <div className="chat-history-header">历史记录</div>
- <div className="chat-history-item">
- <div className="chat-history-info">
- <span className="chat-history-title">无标题的聊天</span>
- <span className="chat-history-time">上午1:42</span>
- </div>
- <div className="chat-history-actions">
- <IconEdit />
- <IconDownload />
- <IconTrash />
- <IconMore />
- </div>
- </div>
- </div>
- )
-}
diff --git a/spaces/Gigabot/ostris-ikea-instructions-lora-sdxl/app.py b/spaces/Gigabot/ostris-ikea-instructions-lora-sdxl/app.py
deleted file mode 100644
index 1d6c504f95564cc6ee4e570f16198f96378d0a09..0000000000000000000000000000000000000000
--- a/spaces/Gigabot/ostris-ikea-instructions-lora-sdxl/app.py
+++ /dev/null
@@ -1,3 +0,0 @@
-import gradio as gr
-
-gr.Interface.load("models/ostris/ikea-instructions-lora-sdxl").launch()
\ No newline at end of file
diff --git a/spaces/GolDNenex/Super-Resolution-Anime-Diffusion/RealESRGANv030/docs/CONTRIBUTING.md b/spaces/GolDNenex/Super-Resolution-Anime-Diffusion/RealESRGANv030/docs/CONTRIBUTING.md
deleted file mode 100644
index 75990c2ce7545b72fb6ebad8295ca4895f437205..0000000000000000000000000000000000000000
--- a/spaces/GolDNenex/Super-Resolution-Anime-Diffusion/RealESRGANv030/docs/CONTRIBUTING.md
+++ /dev/null
@@ -1,44 +0,0 @@
-# Contributing to Real-ESRGAN
-
-:art: Real-ESRGAN needs your contributions. Any contributions are welcome, such as new features/models/typo fixes/suggestions/maintenance, *etc*. See [CONTRIBUTING.md](docs/CONTRIBUTING.md). All contributors are listed [here](README.md#hugs-acknowledgement).
-
-We like open source and want to develop practical algorithms for general image restoration. However, individual strength is limited, so all kinds of contributions are welcome, such as:
-
-- New features
-- New models (your fine-tuned models)
-- Bug fixes
-- Typo fixes
-- Suggestions
-- Maintenance
-- Documents
-- *etc*
-
-## Workflow
-
-1. Fork and pull the latest Real-ESRGAN repository
-1. Checkout a new branch (do not use master branch for PRs)
-1. Commit your changes
-1. Create a PR
-
-**Note**:
-
-1. Please check the code style and linting
-    1. The style configuration is specified in [setup.cfg](setup.cfg)
-    1. If you use VSCode, the settings are configured in [.vscode/settings.json](.vscode/settings.json)
-1. We strongly recommend using the `pre-commit` hook. It will check your code style and linting before each commit.
-    1. In the root path of the project folder, run `pre-commit install`
-    1. The pre-commit configuration is listed in [.pre-commit-config.yaml](.pre-commit-config.yaml)
-1. It is better to [open a discussion](https://github.com/xinntao/Real-ESRGAN/discussions) before making large changes.
-    1. Discussions are welcome :sunglasses:. I will try my best to join them.
-
-## TODO List
-
-:zero: The most straightforward way of improving model performance is to fine-tune on some specific datasets.
-
-Here are some TODOs:
-
-- [ ] optimize for human faces
-- [ ] optimize for texts
-- [ ] support controllable restoration strength
-
-:one: There are also [several issues](https://github.com/xinntao/Real-ESRGAN/issues) that need help. If you can help with any of them, please let me know :smile:
diff --git a/spaces/GolDNenex/Super-Resolution-Anime-Diffusion/RealESRGANv030/docs/Training_CN.md b/spaces/GolDNenex/Super-Resolution-Anime-Diffusion/RealESRGANv030/docs/Training_CN.md
deleted file mode 100644
index dabc3c5d97e134a2d551157c2dd03a629ec661bc..0000000000000000000000000000000000000000
--- a/spaces/GolDNenex/Super-Resolution-Anime-Diffusion/RealESRGANv030/docs/Training_CN.md
+++ /dev/null
@@ -1,271 +0,0 @@
-# :computer: How to Train/Fine-tune Real-ESRGAN
-
-- [Train Real-ESRGAN](#train-real-esrgan)
-  - [Overview](#overview)
-  - [Prepare the dataset](#prepare-the-dataset)
-  - [Train the Real-ESRNet model](#train-the-real-esrnet-model)
-  - [Train the Real-ESRGAN model](#train-the-real-esrgan-model)
-- [Fine-tune Real-ESRGAN on your own dataset](#fine-tune-real-esrgan-on-your-own-dataset)
-  - [Generate degraded images on the fly](#generate-degraded-images-on-the-fly)
-  - [Use paired data](#use-paired-data)
-
-[English](Training.md) **|** [简体中文](Training_CN.md)
-
-## Train Real-ESRGAN
-
-### Overview
-
-The training is divided into two stages. Apart from the loss functions, the two stages share the same data-synthesis and training pipeline. Specifically:
-
-1. We first train Real-ESRNet with an L1 loss, starting from the pre-trained ESRGAN model.
-
-2. We then use the trained Real-ESRNet model to initialize the generator, and train Real-ESRGAN with a combination of L1 loss, perceptual loss, and GAN loss.
-
-### Prepare the dataset
-
-We use the DF2K (DIV2K and Flickr2K) + OST datasets for training. Only HR images are required!
-Download links:
-1. DIV2K: http://data.vision.ee.ethz.ch/cvl/DIV2K/DIV2K_train_HR.zip
-2. Flickr2K: https://cv.snu.ac.kr/research/EDSR/Flickr2K.tar
-3. OST: https://openmmlab.oss-cn-hangzhou.aliyuncs.com/datasets/OST_dataset.zip
-
-The data preparation steps are as follows.
-
-#### Step 1: [Optional] Generate multi-scale images
-
-For the DF2K dataset, we use a multi-scale strategy, *i.e.*, we downsample the HR images to obtain Ground-Truth images at multiple scales.
-You can use the [scripts/generate_multiscale_DF2K.py](scripts/generate_multiscale_DF2K.py) script to quickly generate the multi-scale images.
-Note: you can skip this step if you just want a quick try.
-
-```bash
-python scripts/generate_multiscale_DF2K.py --input datasets/DF2K/DF2K_HR --output datasets/DF2K/DF2K_multiscale
-```
-
-#### Step 2: [Optional] Crop into sub-images
-
-We can crop the DF2K images into sub-images to speed up IO and processing.
-This step is optional if your IO is fast enough or your storage space is limited.
-
-You can use the [scripts/extract_subimages.py](scripts/extract_subimages.py) script. Here is a usage example:
-
-```bash
- python scripts/extract_subimages.py --input datasets/DF2K/DF2K_multiscale --output datasets/DF2K/DF2K_multiscale_sub --crop_size 400 --step 200
-```
-
-#### Step 3: Prepare a meta information txt
-
-You need to prepare a txt file containing the image paths. Below is an excerpt from `meta_info_DF2Kmultiscale+OST_sub.txt` (since different users may have very different sub-image partitions, this file will not fit your needs, and you have to prepare your own txt file):
-
-```txt
-DF2K_HR_sub/000001_s001.png
-DF2K_HR_sub/000001_s002.png
-DF2K_HR_sub/000001_s003.png
-...
-```
-
-You can use the [scripts/generate_meta_info.py](scripts/generate_meta_info.py) script to generate the txt file with image paths.
-You can also merge image paths from several folders into one meta_info txt. Here is a usage example (a sketch of what such a generator does follows the command):
-
-```bash
- python scripts/generate_meta_info.py --input datasets/DF2K/DF2K_HR, datasets/DF2K/DF2K_multiscale --root datasets/DF2K, datasets/DF2K --meta_info datasets/DF2K/meta_info/meta_info_DF2Kmultiscale.txt
-```
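-
-For reference, this is a minimal sketch of what such a meta-info generator does (the folder name and the `.png` extension here are assumptions; adapt them to your own layout):
-
-```python
-import os
-
-root = 'datasets/DF2K'      # assumed dataset root
-folder = 'DF2K_HR_sub'      # assumed sub-image folder under the root
-
-with open('datasets/DF2K/meta_info/meta_info_own.txt', 'w') as fout:
-    for name in sorted(os.listdir(os.path.join(root, folder))):
-        if name.endswith('.png'):
-            # each line stores an image path relative to the dataset root
-            fout.write(f'{folder}/{name}\n')
-```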
-
-### Train the Real-ESRNet model
-
-1. Download the pre-trained [ESRGAN](https://github.com/xinntao/Real-ESRGAN/releases/download/v0.1.1/ESRGAN_SRx4_DF2KOST_official-ff704c30.pth) model and put it into the `experiments/pretrained_models` directory.
-    ```bash
-    wget https://github.com/xinntao/Real-ESRGAN/releases/download/v0.1.1/ESRGAN_SRx4_DF2KOST_official-ff704c30.pth -P experiments/pretrained_models
-    ```
-2. Modify the option file `options/train_realesrnet_x4plus.yml` accordingly:
-    ```yml
-    train:
-        name: DF2K+OST
-        type: RealESRGANDataset
-        dataroot_gt: datasets/DF2K # modify to your dataset root folder
-        meta_info: realesrgan/meta_info/meta_info_DF2Kmultiscale+OST_sub.txt # modify to your own generated meta_info txt
-        io_backend:
-            type: disk
-    ```
-3. If you want to perform validation during training, uncomment these lines and modify them accordingly:
-    ```yml
-    # Uncomment these for validation
-    # val:
-    #     name: validation
-    #     type: PairedImageDataset
-    #     dataroot_gt: path_to_gt
-    #     dataroot_lq: path_to_lq
-    #     io_backend:
-    #         type: disk
-
-    ...
-
-    # Uncomment these for validation
-    # validation settings
-    # val:
-    #     val_freq: !!float 5e3
-    #     save_img: True
-
-    # metrics:
-    #     psnr: # metric name; can be arbitrary
-    #         type: calculate_psnr
-    #         crop_border: 4
-    #         test_y_channel: false
-    ```
-4. Before the formal training, you can run in `--debug` mode to check whether everything works. We use four GPUs for training:
-    ```bash
-    CUDA_VISIBLE_DEVICES=0,1,2,3 \
-    python -m torch.distributed.launch --nproc_per_node=4 --master_port=4321 realesrgan/train.py -opt options/train_realesrnet_x4plus.yml --launcher pytorch --debug
-    ```
-
-    Debug-mode example for training with **one GPU**:
-    ```bash
-    python realesrgan/train.py -opt options/train_realesrnet_x4plus.yml --debug
-    ```
-5. The formal training starts. We use four GPUs for training. You can also use the `--auto_resume` argument to automatically resume the training when necessary.
-    ```bash
-    CUDA_VISIBLE_DEVICES=0,1,2,3 \
-    python -m torch.distributed.launch --nproc_per_node=4 --master_port=4321 realesrgan/train.py -opt options/train_realesrnet_x4plus.yml --launcher pytorch --auto_resume
-    ```
-
-    Train with **one GPU**:
-    ```bash
-    python realesrgan/train.py -opt options/train_realesrnet_x4plus.yml --auto_resume
-    ```
-
-### Train the Real-ESRGAN model
-
-1. After training the Real-ESRNet model, you now have the file `experiments/train_RealESRNetx4plus_1000k_B12G4_fromESRGAN/model/net_g_1000000.pth`. If you need to point the pre-training path to another file, modify the `pretrain_network_g` value in the option file `train_realesrgan_x4plus.yml`.
-1. Modify the option file `train_realesrgan_x4plus.yml`. Most modifications are similar to those mentioned in the previous section.
-1. Before the formal training, you can run in `--debug` mode to check whether everything works. We use four GPUs for training:
-    ```bash
-    CUDA_VISIBLE_DEVICES=0,1,2,3 \
-    python -m torch.distributed.launch --nproc_per_node=4 --master_port=4321 realesrgan/train.py -opt options/train_realesrgan_x4plus.yml --launcher pytorch --debug
-    ```
-
-    Debug-mode example for training with **one GPU**:
-    ```bash
-    python realesrgan/train.py -opt options/train_realesrgan_x4plus.yml --debug
-    ```
-1. The formal training starts. We use four GPUs for training. You can also use the `--auto_resume` argument to automatically resume the training when necessary.
-    ```bash
-    CUDA_VISIBLE_DEVICES=0,1,2,3 \
-    python -m torch.distributed.launch --nproc_per_node=4 --master_port=4321 realesrgan/train.py -opt options/train_realesrgan_x4plus.yml --launcher pytorch --auto_resume
-    ```
-
-    Train with **one GPU**:
-    ```bash
-    python realesrgan/train.py -opt options/train_realesrgan_x4plus.yml --auto_resume
-    ```
-
-## Fine-tune Real-ESRGAN on your own dataset
-
-You can fine-tune Real-ESRGAN on your own dataset. Generally, the fine-tuning process can be divided into two types:
-
-1. [Generate degraded images on the fly](#generate-degraded-images-on-the-fly)
-2. [Use **paired** data](#use-paired-data)
-
-### Generate degraded images on the fly
-
-Only high-resolution images are required. During training, low-quality images are generated with the degradation process described in Real-ESRGAN.
-
-**1. Prepare the dataset**
-
-See [this section](#prepare-the-dataset) for full details.
-
-**2. Download the pre-trained models**
-
-Download the pre-trained models into the `experiments/pretrained_models` directory.
-
-- *RealESRGAN_x4plus.pth*:
-  ```bash
-  wget https://github.com/xinntao/Real-ESRGAN/releases/download/v0.1.0/RealESRGAN_x4plus.pth -P experiments/pretrained_models
-  ```
-
-- *RealESRGAN_x4plus_netD.pth*:
-  ```bash
-  wget https://github.com/xinntao/Real-ESRGAN/releases/download/v0.2.2.3/RealESRGAN_x4plus_netD.pth -P experiments/pretrained_models
-  ```
-
-**3. Fine-tune**
-
-Modify the option file [options/finetune_realesrgan_x4plus.yml](options/finetune_realesrgan_x4plus.yml), especially the `datasets` part:
-
-```yml
-train:
-    name: DF2K+OST
-    type: RealESRGANDataset
-    dataroot_gt: datasets/DF2K # modify to your dataset root folder
-    meta_info: realesrgan/meta_info/meta_info_DF2Kmultiscale+OST_sub.txt # modify to your own generated meta_info txt
-    io_backend:
-        type: disk
-```
-
-We use four GPUs for training. You can also use the `--auto_resume` argument to automatically resume the training when necessary.
-
-```bash
-CUDA_VISIBLE_DEVICES=0,1,2,3 \
-python -m torch.distributed.launch --nproc_per_node=4 --master_port=4321 realesrgan/train.py -opt options/finetune_realesrgan_x4plus.yml --launcher pytorch --auto_resume
-```
-
-Train with **one GPU**:
-```bash
-python realesrgan/train.py -opt options/finetune_realesrgan_x4plus.yml --auto_resume
-```
-
-### Use paired data
-
-You can also fine-tune RealESRGAN with your own paired data. This process is more similar to fine-tuning ESRGAN.
-
-**1. Prepare the dataset**
-
-Assume that you already have two folders:
-
-- **gt folder** (Ground-Truth, high-resolution images): *datasets/DF2K/DIV2K_train_HR_sub*
-- **lq folder** (low quality, low-resolution images): *datasets/DF2K/DIV2K_train_LR_bicubic_X4_sub*
-
-Then you can use the [scripts/generate_meta_info_pairdata.py](scripts/generate_meta_info_pairdata.py) script to generate the meta_info txt file.
-
-```bash
-python scripts/generate_meta_info_pairdata.py --input datasets/DF2K/DIV2K_train_HR_sub datasets/DF2K/DIV2K_train_LR_bicubic_X4_sub --meta_info datasets/DF2K/meta_info/meta_info_DIV2K_sub_pair.txt
-```
-
-**2. Download the pre-trained models**
-
-Download the pre-trained models into the `experiments/pretrained_models` directory.
-
-- *RealESRGAN_x4plus.pth*:
-  ```bash
-  wget https://github.com/xinntao/Real-ESRGAN/releases/download/v0.1.0/RealESRGAN_x4plus.pth -P experiments/pretrained_models
-  ```
-
-- *RealESRGAN_x4plus_netD.pth*:
-  ```bash
-  wget https://github.com/xinntao/Real-ESRGAN/releases/download/v0.2.2.3/RealESRGAN_x4plus_netD.pth -P experiments/pretrained_models
-  ```
-
-**3. Fine-tune**
-
-Modify the option file [options/finetune_realesrgan_x4plus_pairdata.yml](options/finetune_realesrgan_x4plus_pairdata.yml), especially the `datasets` part:
-
-```yml
-train:
-    name: DIV2K
-    type: RealESRGANPairedDataset
-    dataroot_gt: datasets/DF2K # modify to the root of your gt folder
-    dataroot_lq: datasets/DF2K # modify to the root of your lq folder
-    meta_info: datasets/DF2K/meta_info/meta_info_DIV2K_sub_pair.txt # modify to your own generated meta_info txt
-    io_backend:
-        type: disk
-```
-
-We use four GPUs for training. You can also use the `--auto_resume` argument to automatically resume the training when necessary.
-
-```bash
-CUDA_VISIBLE_DEVICES=0,1,2,3 \
-python -m torch.distributed.launch --nproc_per_node=4 --master_port=4321 realesrgan/train.py -opt options/finetune_realesrgan_x4plus_pairdata.yml --launcher pytorch --auto_resume
-```
-
-Train with **one GPU**:
-```bash
-python realesrgan/train.py -opt options/finetune_realesrgan_x4plus_pairdata.yml --auto_resume
-```
diff --git a/spaces/Goutam982/RVC_V2_voice_clone/lib/infer_pack/modules/F0Predictor/__init__.py b/spaces/Goutam982/RVC_V2_voice_clone/lib/infer_pack/modules/F0Predictor/__init__.py
deleted file mode 100644
index e69de29bb2d1d6434b8b29ae775ad8c2e48c5391..0000000000000000000000000000000000000000
diff --git a/spaces/GrandaddyShmax/AudioCraft_Plus/audiocraft/models/audiogen.py b/spaces/GrandaddyShmax/AudioCraft_Plus/audiocraft/models/audiogen.py
deleted file mode 100644
index 6adefb97401c10422c9711d222c0857f5593dceb..0000000000000000000000000000000000000000
--- a/spaces/GrandaddyShmax/AudioCraft_Plus/audiocraft/models/audiogen.py
+++ /dev/null
@@ -1,276 +0,0 @@
-# Copyright (c) Meta Platforms, Inc. and affiliates.
-# All rights reserved.
-#
-# This source code is licensed under the license found in the
-# LICENSE file in the root directory of this source tree.
-
-"""
-Main model for using AudioGen. This will combine all the required components
-and provide easy access to the generation API.
-"""
-
-import typing as tp
-
-import torch
-
-from .encodec import CompressionModel
-from .lm import LMModel
-from .builders import get_debug_compression_model, get_debug_lm_model
-from .loaders import load_compression_model, load_lm_model
-from ..data.audio_utils import convert_audio
-from ..modules.conditioners import ConditioningAttributes
-from ..utils.autocast import TorchAutocast
-
-
-class AudioGen:
- """AudioGen main model with convenient generation API.
-
- Args:
- name (str): name of the model.
- compression_model (CompressionModel): Compression model
- used to map audio to invertible discrete representations.
- lm (LMModel): Language model over discrete representations.
- max_duration (float, optional): maximum duration the model can produce,
- otherwise, inferred from the training params.
- """
- def __init__(self, name: str, compression_model: CompressionModel, lm: LMModel,
- max_duration: tp.Optional[float] = None):
- self.name = name
- self.compression_model = compression_model
- self.lm = lm
- if max_duration is None:
- if hasattr(lm, 'cfg'):
- max_duration = lm.cfg.dataset.segment_duration # type: ignore
- else:
- raise ValueError("You must provide max_duration when building directly AudioGen")
- assert max_duration is not None
- self.max_duration: float = max_duration
- self.device = next(iter(lm.parameters())).device
- self.generation_params: dict = {}
- self.set_generation_params(duration=5) # 5 seconds by default
- self._progress_callback: tp.Optional[tp.Callable[[int, int], None]] = None
- if self.device.type == 'cpu':
- self.autocast = TorchAutocast(enabled=False)
- else:
- self.autocast = TorchAutocast(
- enabled=True, device_type=self.device.type, dtype=torch.float16)
-
- @property
- def frame_rate(self) -> float:
-        """Roughly the number of AR steps per second."""
- return self.compression_model.frame_rate
-
- @property
- def sample_rate(self) -> int:
- """Sample rate of the generated audio."""
- return self.compression_model.sample_rate
-
- @property
- def audio_channels(self) -> int:
- """Audio channels of the generated audio."""
- return self.compression_model.channels
-
- @staticmethod
- def get_pretrained(name: str = 'facebook/audiogen-medium', device=None):
-        """Return a pretrained model. We provide a single model for now:
-        - facebook/audiogen-medium (1.5B), text to sound,
-          see: https://huggingface.co/facebook/audiogen-medium
-        """
- if device is None:
- if torch.cuda.device_count():
- device = 'cuda'
- else:
- device = 'cpu'
-
- if name == 'debug':
- # used only for unit tests
- compression_model = get_debug_compression_model(device, sample_rate=16000)
- lm = get_debug_lm_model(device)
- return AudioGen(name, compression_model, lm, max_duration=10)
-
- compression_model = load_compression_model(name, device=device)
- lm = load_lm_model(name, device=device)
- assert 'self_wav' not in lm.condition_provider.conditioners, \
-            "AudioGen does not support waveform conditioning for now"
- return AudioGen(name, compression_model, lm)
-
- def set_generation_params(self, use_sampling: bool = True, top_k: int = 250,
- top_p: float = 0.0, temperature: float = 1.0,
- duration: float = 10.0, cfg_coef: float = 3.0,
- two_step_cfg: bool = False, extend_stride: float = 2):
- """Set the generation parameters for AudioGen.
-
- Args:
- use_sampling (bool, optional): Use sampling if True, else do argmax decoding. Defaults to True.
- top_k (int, optional): top_k used for sampling. Defaults to 250.
- top_p (float, optional): top_p used for sampling, when set to 0 top_k is used. Defaults to 0.0.
- temperature (float, optional): Softmax temperature parameter. Defaults to 1.0.
- duration (float, optional): Duration of the generated waveform. Defaults to 10.0.
- cfg_coef (float, optional): Coefficient used for classifier free guidance. Defaults to 3.0.
- two_step_cfg (bool, optional): If True, performs 2 forward for Classifier Free Guidance,
- instead of batching together the two. This has some impact on how things
- are padded but seems to have little impact in practice.
- extend_stride: when doing extended generation (i.e. more than 10 seconds), by how much
- should we extend the audio each time. Larger values will mean less context is
-                preserved, and shorter values will require extra computation.
- """
- assert extend_stride < self.max_duration, "Cannot stride by more than max generation duration."
- self.extend_stride = extend_stride
- self.duration = duration
- self.generation_params = {
- 'use_sampling': use_sampling,
- 'temp': temperature,
- 'top_k': top_k,
- 'top_p': top_p,
- 'cfg_coef': cfg_coef,
- 'two_step_cfg': two_step_cfg,
- }
-
- def set_custom_progress_callback(self, progress_callback: tp.Optional[tp.Callable[[int, int], None]] = None):
- """Override the default progress callback."""
- self._progress_callback = progress_callback
-
- def generate(self, descriptions: tp.List[str], progress: bool = False) -> torch.Tensor:
- """Generate samples conditioned on text.
-
- Args:
- descriptions (list of str): A list of strings used as text conditioning.
- progress (bool, optional): Flag to display progress of the generation process. Defaults to False.
- """
- attributes, prompt_tokens = self._prepare_tokens_and_attributes(descriptions, None)
- assert prompt_tokens is None
- return self._generate_tokens(attributes, prompt_tokens, progress)
-
- def generate_continuation(self, prompt: torch.Tensor, prompt_sample_rate: int,
- descriptions: tp.Optional[tp.List[tp.Optional[str]]] = None,
- progress: bool = False) -> torch.Tensor:
- """Generate samples conditioned on audio prompts.
-
- Args:
- prompt (torch.Tensor): A batch of waveforms used for continuation.
- Prompt should be [B, C, T], or [C, T] if only one sample is generated.
- prompt_sample_rate (int): Sampling rate of the given audio waveforms.
- descriptions (list of str, optional): A list of strings used as text conditioning. Defaults to None.
- progress (bool, optional): Flag to display progress of the generation process. Defaults to False.
- """
- if prompt.dim() == 2:
- prompt = prompt[None]
- if prompt.dim() != 3:
- raise ValueError("prompt should have 3 dimensions: [B, C, T] (C = 1).")
- prompt = convert_audio(prompt, prompt_sample_rate, self.sample_rate, self.audio_channels)
- if descriptions is None:
- descriptions = [None] * len(prompt)
- attributes, prompt_tokens = self._prepare_tokens_and_attributes(descriptions, prompt)
- assert prompt_tokens is not None
- return self._generate_tokens(attributes, prompt_tokens, progress)
-
- @torch.no_grad()
- def _prepare_tokens_and_attributes(
- self,
- descriptions: tp.Sequence[tp.Optional[str]],
- prompt: tp.Optional[torch.Tensor],
- ) -> tp.Tuple[tp.List[ConditioningAttributes], tp.Optional[torch.Tensor]]:
- """Prepare model inputs.
-
- Args:
- descriptions (list of str): A list of strings used as text conditioning.
- prompt (torch.Tensor): A batch of waveforms used for continuation.
- """
- attributes = [
- ConditioningAttributes(text={'description': description})
- for description in descriptions]
-
- if prompt is not None:
- if descriptions is not None:
-                assert len(descriptions) == len(prompt), "Prompt and nb. descriptions don't match"
- prompt = prompt.to(self.device)
- prompt_tokens, scale = self.compression_model.encode(prompt)
- assert scale is None
- else:
- prompt_tokens = None
- return attributes, prompt_tokens
-
- def _generate_tokens(self, attributes: tp.List[ConditioningAttributes],
- prompt_tokens: tp.Optional[torch.Tensor], progress: bool = False) -> torch.Tensor:
- """Generate discrete audio tokens given audio prompt and/or conditions.
-
- Args:
- attributes (list of ConditioningAttributes): Conditions used for generation (here text).
- prompt_tokens (torch.Tensor, optional): Audio prompt used for continuation.
- progress (bool, optional): Flag to display progress of the generation process. Defaults to False.
- Returns:
- torch.Tensor: Generated audio, of shape [B, C, T], T is defined by the generation params.
- """
- i = 0
- prompt_list = attributes[0].text['description']
- total_gen_len = int(self.duration * self.frame_rate)
- max_prompt_len = int(min(self.duration, self.max_duration) * self.frame_rate)
- current_gen_offset: int = 0
-
- def _progress_callback(generated_tokens: int, tokens_to_generate: int):
- generated_tokens += current_gen_offset
- if self._progress_callback is not None:
- # Note that total_gen_len might be quite wrong depending on the
- # codebook pattern used, but with delay it is almost accurate.
- self._progress_callback(generated_tokens, total_gen_len)
- else:
- print(f'{generated_tokens: 6d} / {total_gen_len: 6d}', end='\r')
-
- if prompt_tokens is not None:
- assert max_prompt_len >= prompt_tokens.shape[-1], \
- "Prompt is longer than audio to generate"
-
- callback = None
- if progress:
- callback = _progress_callback
-
- if self.duration <= self.max_duration:
- # generate by sampling from LM, simple case.
- with self.autocast:
- attributes[0].text['description'] = prompt_list[0]
- gen_tokens = self.lm.generate(
- prompt_tokens, attributes,
- callback=callback, max_gen_len=total_gen_len, **self.generation_params)
-
- else:
- all_tokens = []
- if prompt_tokens is None:
- prompt_length = 0
- else:
- all_tokens.append(prompt_tokens)
- prompt_length = prompt_tokens.shape[-1]
-
- stride_tokens = int(self.frame_rate * self.extend_stride)
-
- while current_gen_offset + prompt_length < total_gen_len:
- time_offset = current_gen_offset / self.frame_rate
- chunk_duration = min(self.duration - time_offset, self.max_duration)
- max_gen_len = int(chunk_duration * self.frame_rate)
- with self.autocast:
- if i >= len(prompt_list):
- i = len(prompt_list) - 1
- attributes[0].text['description'] = prompt_list[i]
- gen_tokens = self.lm.generate(
- prompt_tokens, attributes,
- callback=callback, max_gen_len=max_gen_len, **self.generation_params)
- i = i + 1
- if prompt_tokens is None:
- all_tokens.append(gen_tokens)
- else:
- all_tokens.append(gen_tokens[:, :, prompt_tokens.shape[-1]:])
- prompt_tokens = gen_tokens[:, :, stride_tokens:]
- prompt_length = prompt_tokens.shape[-1]
- current_gen_offset += stride_tokens
-
- gen_tokens = torch.cat(all_tokens, dim=-1)
-
- # generate audio
- assert gen_tokens.dim() == 3
- with torch.no_grad():
- gen_audio = self.compression_model.decode(gen_tokens, None)
- return gen_audio
-
- def to(self, device: str):
- self.compression_model.to(device)
- self.lm.to(device)
- return self
\ No newline at end of file
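A usage sketch of the API above, assuming the `audiocraft` package is installed and exposes this class under `audiocraft.models` as in the upstream project (the prompt and parameters are illustrative):

```python
import torchaudio
from audiocraft.models import AudioGen  # assumed import path

model = AudioGen.get_pretrained('facebook/audiogen-medium')
model.set_generation_params(duration=5, top_k=250, cfg_coef=3.0)

wav = model.generate(['dog barking in the distance'], progress=True)  # [B, C, T]
torchaudio.save('sample.wav', wav[0].cpu(), model.sample_rate)
```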
diff --git a/spaces/GrandaddyShmax/AudioCraft_Plus/audiocraft/models/builders.py b/spaces/GrandaddyShmax/AudioCraft_Plus/audiocraft/models/builders.py
deleted file mode 100644
index 038bf99c3d0fbbb86005683d5a2a1b4edcac4298..0000000000000000000000000000000000000000
--- a/spaces/GrandaddyShmax/AudioCraft_Plus/audiocraft/models/builders.py
+++ /dev/null
@@ -1,252 +0,0 @@
-# Copyright (c) Meta Platforms, Inc. and affiliates.
-# All rights reserved.
-#
-# This source code is licensed under the license found in the
-# LICENSE file in the root directory of this source tree.
-
-"""
-All the functions to build the relevant models and modules
-from the Hydra config.
-"""
-
-import typing as tp
-
-import audiocraft
-import omegaconf
-import torch
-
-from .encodec import CompressionModel, EncodecModel
-from .lm import LMModel
-from ..modules.codebooks_patterns import (
- CodebooksPatternProvider,
- DelayedPatternProvider,
- MusicLMPattern,
- ParallelPatternProvider,
- UnrolledPatternProvider,
- VALLEPattern,
-)
-from ..modules.conditioners import (
- BaseConditioner,
- ChromaStemConditioner,
- CLAPEmbeddingConditioner,
- ConditionFuser,
- ConditioningProvider,
- LUTConditioner,
- T5Conditioner,
-)
-from .unet import DiffusionUnet
-from .. import quantization as qt
-from ..utils.utils import dict_from_config
-from ..modules.diffusion_schedule import MultiBandProcessor, SampleProcessor
-
-
-def get_quantizer(quantizer: str, cfg: omegaconf.DictConfig, dimension: int) -> qt.BaseQuantizer:
- klass = {
- 'no_quant': qt.DummyQuantizer,
- 'rvq': qt.ResidualVectorQuantizer
- }[quantizer]
- kwargs = dict_from_config(getattr(cfg, quantizer))
- if quantizer != 'no_quant':
- kwargs['dimension'] = dimension
- return klass(**kwargs)
-
-
-def get_encodec_autoencoder(encoder_name: str, cfg: omegaconf.DictConfig):
- if encoder_name == 'seanet':
- kwargs = dict_from_config(getattr(cfg, 'seanet'))
- encoder_override_kwargs = kwargs.pop('encoder')
- decoder_override_kwargs = kwargs.pop('decoder')
- encoder_kwargs = {**kwargs, **encoder_override_kwargs}
- decoder_kwargs = {**kwargs, **decoder_override_kwargs}
- encoder = audiocraft.modules.SEANetEncoder(**encoder_kwargs)
- decoder = audiocraft.modules.SEANetDecoder(**decoder_kwargs)
- return encoder, decoder
- else:
-        raise KeyError(f"Unexpected encoder {encoder_name}")
-
-
-def get_compression_model(cfg: omegaconf.DictConfig) -> CompressionModel:
- """Instantiate a compression model."""
- if cfg.compression_model == 'encodec':
- kwargs = dict_from_config(getattr(cfg, 'encodec'))
- encoder_name = kwargs.pop('autoencoder')
- quantizer_name = kwargs.pop('quantizer')
- encoder, decoder = get_encodec_autoencoder(encoder_name, cfg)
- quantizer = get_quantizer(quantizer_name, cfg, encoder.dimension)
- frame_rate = kwargs['sample_rate'] // encoder.hop_length
- renormalize = kwargs.pop('renormalize', False)
- # deprecated params
- kwargs.pop('renorm', None)
- return EncodecModel(encoder, decoder, quantizer,
- frame_rate=frame_rate, renormalize=renormalize, **kwargs).to(cfg.device)
- else:
- raise KeyError(f"Unexpected compression model {cfg.compression_model}")
-
-
-def get_lm_model(cfg: omegaconf.DictConfig) -> LMModel:
- """Instantiate a transformer LM."""
- if cfg.lm_model == 'transformer_lm':
- kwargs = dict_from_config(getattr(cfg, 'transformer_lm'))
- n_q = kwargs['n_q']
- q_modeling = kwargs.pop('q_modeling', None)
- codebooks_pattern_cfg = getattr(cfg, 'codebooks_pattern')
- attribute_dropout = dict_from_config(getattr(cfg, 'attribute_dropout'))
- cls_free_guidance = dict_from_config(getattr(cfg, 'classifier_free_guidance'))
- cfg_prob, cfg_coef = cls_free_guidance['training_dropout'], cls_free_guidance['inference_coef']
- fuser = get_condition_fuser(cfg)
- condition_provider = get_conditioner_provider(kwargs["dim"], cfg).to(cfg.device)
- if len(fuser.fuse2cond['cross']) > 0: # enforce cross-att programmatically
- kwargs['cross_attention'] = True
- if codebooks_pattern_cfg.modeling is None:
- assert q_modeling is not None, \
- "LM model should either have a codebook pattern defined or transformer_lm.q_modeling"
- codebooks_pattern_cfg = omegaconf.OmegaConf.create(
- {'modeling': q_modeling, 'delay': {'delays': list(range(n_q))}}
- )
- pattern_provider = get_codebooks_pattern_provider(n_q, codebooks_pattern_cfg)
- return LMModel(
- pattern_provider=pattern_provider,
- condition_provider=condition_provider,
- fuser=fuser,
- cfg_dropout=cfg_prob,
- cfg_coef=cfg_coef,
- attribute_dropout=attribute_dropout,
- dtype=getattr(torch, cfg.dtype),
- device=cfg.device,
- **kwargs
- ).to(cfg.device)
- else:
- raise KeyError(f"Unexpected LM model {cfg.lm_model}")
-
-
-def get_conditioner_provider(output_dim: int, cfg: omegaconf.DictConfig) -> ConditioningProvider:
- """Instantiate a conditioning model."""
- device = cfg.device
- duration = cfg.dataset.segment_duration
- cfg = getattr(cfg, 'conditioners')
- dict_cfg = {} if cfg is None else dict_from_config(cfg)
- conditioners: tp.Dict[str, BaseConditioner] = {}
- condition_provider_args = dict_cfg.pop('args', {})
- condition_provider_args.pop('merge_text_conditions_p', None)
- condition_provider_args.pop('drop_desc_p', None)
-
- for cond, cond_cfg in dict_cfg.items():
- model_type = cond_cfg['model']
- model_args = cond_cfg[model_type]
- if model_type == 't5':
- conditioners[str(cond)] = T5Conditioner(output_dim=output_dim, device=device, **model_args)
- elif model_type == 'lut':
- conditioners[str(cond)] = LUTConditioner(output_dim=output_dim, **model_args)
- elif model_type == 'chroma_stem':
- conditioners[str(cond)] = ChromaStemConditioner(
- output_dim=output_dim,
- duration=duration,
- device=device,
- **model_args
- )
- elif model_type == 'clap':
- conditioners[str(cond)] = CLAPEmbeddingConditioner(
- output_dim=output_dim,
- device=device,
- **model_args
- )
- else:
- raise ValueError(f"Unrecognized conditioning model: {model_type}")
- conditioner = ConditioningProvider(conditioners, device=device, **condition_provider_args)
- return conditioner
-
-
-def get_condition_fuser(cfg: omegaconf.DictConfig) -> ConditionFuser:
- """Instantiate a condition fuser object."""
- fuser_cfg = getattr(cfg, 'fuser')
- fuser_methods = ['sum', 'cross', 'prepend', 'input_interpolate']
- fuse2cond = {k: fuser_cfg[k] for k in fuser_methods}
- kwargs = {k: v for k, v in fuser_cfg.items() if k not in fuser_methods}
- fuser = ConditionFuser(fuse2cond=fuse2cond, **kwargs)
- return fuser
-
-
-def get_codebooks_pattern_provider(n_q: int, cfg: omegaconf.DictConfig) -> CodebooksPatternProvider:
- """Instantiate a codebooks pattern provider object."""
- pattern_providers = {
- 'parallel': ParallelPatternProvider,
- 'delay': DelayedPatternProvider,
- 'unroll': UnrolledPatternProvider,
- 'valle': VALLEPattern,
- 'musiclm': MusicLMPattern,
- }
- name = cfg.modeling
- kwargs = dict_from_config(cfg.get(name)) if hasattr(cfg, name) else {}
- klass = pattern_providers[name]
- return klass(n_q, **kwargs)
-
-
-def get_debug_compression_model(device='cpu', sample_rate: int = 32000):
- """Instantiate a debug compression model to be used for unit tests."""
- assert sample_rate in [16000, 32000], "unsupported sample rate for debug compression model"
- model_ratios = {
- 16000: [10, 8, 8], # 25 Hz at 16kHz
- 32000: [10, 8, 16] # 25 Hz at 32kHz
- }
- ratios: tp.List[int] = model_ratios[sample_rate]
- frame_rate = 25
- seanet_kwargs: dict = {
- 'n_filters': 4,
- 'n_residual_layers': 1,
- 'dimension': 32,
- 'ratios': ratios,
- }
- print(seanet_kwargs)
- encoder = audiocraft.modules.SEANetEncoder(**seanet_kwargs)
- decoder = audiocraft.modules.SEANetDecoder(**seanet_kwargs)
- quantizer = qt.ResidualVectorQuantizer(dimension=32, bins=400, n_q=4)
- init_x = torch.randn(8, 32, 128)
- quantizer(init_x, 1) # initialize kmeans etc.
- compression_model = EncodecModel(
- encoder, decoder, quantizer,
- frame_rate=frame_rate, sample_rate=sample_rate, channels=1).to(device)
- return compression_model.eval()
-
-
-def get_diffusion_model(cfg: omegaconf.DictConfig):
- # TODO Find a way to infer the channels from dset
- channels = cfg.channels
- num_steps = cfg.schedule.num_steps
- return DiffusionUnet(
- chin=channels, num_steps=num_steps, **cfg.diffusion_unet)
-
-
-def get_processor(cfg, sample_rate: int = 24000):
- sample_processor = SampleProcessor()
- if cfg.use:
- kw = dict(cfg)
- kw.pop('use')
- kw.pop('name')
- if cfg.name == "multi_band_processor":
- sample_processor = MultiBandProcessor(sample_rate=sample_rate, **kw)
- return sample_processor
-
-
-def get_debug_lm_model(device='cpu'):
- """Instantiate a debug LM to be used for unit tests."""
- pattern = DelayedPatternProvider(n_q=4)
- dim = 16
- providers = {
- 'description': LUTConditioner(n_bins=128, dim=dim, output_dim=dim, tokenizer="whitespace"),
- }
- condition_provider = ConditioningProvider(providers)
- fuser = ConditionFuser(
- {'cross': ['description'], 'prepend': [],
- 'sum': [], 'input_interpolate': []})
- lm = LMModel(
- pattern, condition_provider, fuser,
- n_q=4, card=400, dim=dim, num_heads=4, custom=True, num_layers=2,
- cross_attention=True, causal=True)
- return lm.to(device).eval()
-
-
-def get_wrapped_compression_model(
- compression_model: CompressionModel,
- cfg: omegaconf.DictConfig) -> CompressionModel:
- # more to come.
- return compression_model
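To exercise these builders without downloading any weights, the two debug helpers can be combined (a sketch; the import path is assumed to match the upstream package layout):

```python
from audiocraft.models.builders import (  # assumed import path
    get_debug_compression_model,
    get_debug_lm_model,
)

compression_model = get_debug_compression_model('cpu', sample_rate=16000)
lm = get_debug_lm_model('cpu')
# both debug sample rates are configured for a 25 Hz frame rate above
print(compression_model.frame_rate)
```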
diff --git a/spaces/GrandaddyShmax/MusicGen_Plus/audiocraft/modules/seanet.py b/spaces/GrandaddyShmax/MusicGen_Plus/audiocraft/modules/seanet.py
deleted file mode 100644
index 3e5998e9153afb6e68ea410d565e00ea835db248..0000000000000000000000000000000000000000
--- a/spaces/GrandaddyShmax/MusicGen_Plus/audiocraft/modules/seanet.py
+++ /dev/null
@@ -1,258 +0,0 @@
-# Copyright (c) Meta Platforms, Inc. and affiliates.
-# All rights reserved.
-#
-# This source code is licensed under the license found in the
-# LICENSE file in the root directory of this source tree.
-
-import typing as tp
-
-import numpy as np
-import torch.nn as nn
-
-from .conv import StreamableConv1d, StreamableConvTranspose1d
-from .lstm import StreamableLSTM
-
-
-class SEANetResnetBlock(nn.Module):
- """Residual block from SEANet model.
-
- Args:
- dim (int): Dimension of the input/output.
- kernel_sizes (list): List of kernel sizes for the convolutions.
- dilations (list): List of dilations for the convolutions.
- activation (str): Activation function.
- activation_params (dict): Parameters to provide to the activation function.
- norm (str): Normalization method.
- norm_params (dict): Parameters to provide to the underlying normalization used along with the convolution.
- causal (bool): Whether to use fully causal convolution.
- pad_mode (str): Padding mode for the convolutions.
- compress (int): Reduced dimensionality in residual branches (from Demucs v3).
- true_skip (bool): Whether to use true skip connection or a simple
- (streamable) convolution as the skip connection.
- """
- def __init__(self, dim: int, kernel_sizes: tp.List[int] = [3, 1], dilations: tp.List[int] = [1, 1],
- activation: str = 'ELU', activation_params: dict = {'alpha': 1.0},
- norm: str = 'none', norm_params: tp.Dict[str, tp.Any] = {}, causal: bool = False,
- pad_mode: str = 'reflect', compress: int = 2, true_skip: bool = True):
- super().__init__()
- assert len(kernel_sizes) == len(dilations), 'Number of kernel sizes should match number of dilations'
- act = getattr(nn, activation)
- hidden = dim // compress
- block = []
- for i, (kernel_size, dilation) in enumerate(zip(kernel_sizes, dilations)):
- in_chs = dim if i == 0 else hidden
- out_chs = dim if i == len(kernel_sizes) - 1 else hidden
- block += [
- act(**activation_params),
- StreamableConv1d(in_chs, out_chs, kernel_size=kernel_size, dilation=dilation,
- norm=norm, norm_kwargs=norm_params,
- causal=causal, pad_mode=pad_mode),
- ]
- self.block = nn.Sequential(*block)
- self.shortcut: nn.Module
- if true_skip:
- self.shortcut = nn.Identity()
- else:
- self.shortcut = StreamableConv1d(dim, dim, kernel_size=1, norm=norm, norm_kwargs=norm_params,
- causal=causal, pad_mode=pad_mode)
-
- def forward(self, x):
- return self.shortcut(x) + self.block(x)
-
-
-class SEANetEncoder(nn.Module):
- """SEANet encoder.
-
- Args:
- channels (int): Audio channels.
- dimension (int): Intermediate representation dimension.
- n_filters (int): Base width for the model.
-        n_residual_layers (int): Number of residual layers.
- ratios (Sequence[int]): kernel size and stride ratios. The encoder uses downsampling ratios instead of
- upsampling ratios, hence it will use the ratios in the reverse order to the ones specified here
- that must match the decoder order. We use the decoder order as some models may only employ the decoder.
- activation (str): Activation function.
- activation_params (dict): Parameters to provide to the activation function.
- norm (str): Normalization method.
- norm_params (dict): Parameters to provide to the underlying normalization used along with the convolution.
- kernel_size (int): Kernel size for the initial convolution.
-        last_kernel_size (int): Kernel size for the last convolution.
- residual_kernel_size (int): Kernel size for the residual layers.
- dilation_base (int): How much to increase the dilation with each layer.
- causal (bool): Whether to use fully causal convolution.
- pad_mode (str): Padding mode for the convolutions.
- true_skip (bool): Whether to use true skip connection or a simple
- (streamable) convolution as the skip connection in the residual network blocks.
- compress (int): Reduced dimensionality in residual branches (from Demucs v3).
- lstm (int): Number of LSTM layers at the end of the encoder.
- disable_norm_outer_blocks (int): Number of blocks for which we don't apply norm.
- For the encoder, it corresponds to the N first blocks.
- """
- def __init__(self, channels: int = 1, dimension: int = 128, n_filters: int = 32, n_residual_layers: int = 3,
- ratios: tp.List[int] = [8, 5, 4, 2], activation: str = 'ELU', activation_params: dict = {'alpha': 1.0},
- norm: str = 'none', norm_params: tp.Dict[str, tp.Any] = {}, kernel_size: int = 7,
- last_kernel_size: int = 7, residual_kernel_size: int = 3, dilation_base: int = 2, causal: bool = False,
- pad_mode: str = 'reflect', true_skip: bool = True, compress: int = 2, lstm: int = 0,
- disable_norm_outer_blocks: int = 0):
- super().__init__()
- self.channels = channels
- self.dimension = dimension
- self.n_filters = n_filters
- self.ratios = list(reversed(ratios))
- del ratios
- self.n_residual_layers = n_residual_layers
- self.hop_length = np.prod(self.ratios)
- self.n_blocks = len(self.ratios) + 2 # first and last conv + residual blocks
- self.disable_norm_outer_blocks = disable_norm_outer_blocks
- assert self.disable_norm_outer_blocks >= 0 and self.disable_norm_outer_blocks <= self.n_blocks, \
- "Number of blocks for which to disable norm is invalid." \
- "It should be lower or equal to the actual number of blocks in the network and greater or equal to 0."
-
- act = getattr(nn, activation)
- mult = 1
- model: tp.List[nn.Module] = [
- StreamableConv1d(channels, mult * n_filters, kernel_size,
- norm='none' if self.disable_norm_outer_blocks >= 1 else norm,
- norm_kwargs=norm_params, causal=causal, pad_mode=pad_mode)
- ]
- # Downsample to raw audio scale
- for i, ratio in enumerate(self.ratios):
- block_norm = 'none' if self.disable_norm_outer_blocks >= i + 2 else norm
- # Add residual layers
- for j in range(n_residual_layers):
- model += [
- SEANetResnetBlock(mult * n_filters, kernel_sizes=[residual_kernel_size, 1],
- dilations=[dilation_base ** j, 1],
- norm=block_norm, norm_params=norm_params,
- activation=activation, activation_params=activation_params,
- causal=causal, pad_mode=pad_mode, compress=compress, true_skip=true_skip)]
-
- # Add downsampling layers
- model += [
- act(**activation_params),
- StreamableConv1d(mult * n_filters, mult * n_filters * 2,
- kernel_size=ratio * 2, stride=ratio,
- norm=block_norm, norm_kwargs=norm_params,
- causal=causal, pad_mode=pad_mode),
- ]
- mult *= 2
-
- if lstm:
- model += [StreamableLSTM(mult * n_filters, num_layers=lstm)]
-
- model += [
- act(**activation_params),
- StreamableConv1d(mult * n_filters, dimension, last_kernel_size,
- norm='none' if self.disable_norm_outer_blocks == self.n_blocks else norm,
- norm_kwargs=norm_params, causal=causal, pad_mode=pad_mode)
- ]
-
- self.model = nn.Sequential(*model)
-
- def forward(self, x):
- return self.model(x)
-
-
-class SEANetDecoder(nn.Module):
- """SEANet decoder.
-
- Args:
- channels (int): Audio channels.
- dimension (int): Intermediate representation dimension.
- n_filters (int): Base width for the model.
-        n_residual_layers (int): Number of residual layers.
- ratios (Sequence[int]): kernel size and stride ratios.
- activation (str): Activation function.
- activation_params (dict): Parameters to provide to the activation function.
- final_activation (str): Final activation function after all convolutions.
- final_activation_params (dict): Parameters to provide to the activation function.
- norm (str): Normalization method.
- norm_params (dict): Parameters to provide to the underlying normalization used along with the convolution.
- kernel_size (int): Kernel size for the initial convolution.
-        last_kernel_size (int): Kernel size for the last convolution.
- residual_kernel_size (int): Kernel size for the residual layers.
- dilation_base (int): How much to increase the dilation with each layer.
- causal (bool): Whether to use fully causal convolution.
- pad_mode (str): Padding mode for the convolutions.
-        true_skip (bool): Whether to use true skip connection or a simple
-            (streamable) convolution as the skip connection in the residual network blocks.
- compress (int): Reduced dimensionality in residual branches (from Demucs v3).
-        lstm (int): Number of LSTM layers at the start of the decoder.
- disable_norm_outer_blocks (int): Number of blocks for which we don't apply norm.
- For the decoder, it corresponds to the N last blocks.
- trim_right_ratio (float): Ratio for trimming at the right of the transposed convolution under the causal setup.
- If equal to 1.0, it means that all the trimming is done at the right.
- """
- def __init__(self, channels: int = 1, dimension: int = 128, n_filters: int = 32, n_residual_layers: int = 3,
- ratios: tp.List[int] = [8, 5, 4, 2], activation: str = 'ELU', activation_params: dict = {'alpha': 1.0},
- final_activation: tp.Optional[str] = None, final_activation_params: tp.Optional[dict] = None,
- norm: str = 'none', norm_params: tp.Dict[str, tp.Any] = {}, kernel_size: int = 7,
- last_kernel_size: int = 7, residual_kernel_size: int = 3, dilation_base: int = 2, causal: bool = False,
- pad_mode: str = 'reflect', true_skip: bool = True, compress: int = 2, lstm: int = 0,
- disable_norm_outer_blocks: int = 0, trim_right_ratio: float = 1.0):
- super().__init__()
- self.dimension = dimension
- self.channels = channels
- self.n_filters = n_filters
- self.ratios = ratios
- del ratios
- self.n_residual_layers = n_residual_layers
- self.hop_length = np.prod(self.ratios)
- self.n_blocks = len(self.ratios) + 2 # first and last conv + residual blocks
- self.disable_norm_outer_blocks = disable_norm_outer_blocks
- assert self.disable_norm_outer_blocks >= 0 and self.disable_norm_outer_blocks <= self.n_blocks, \
- "Number of blocks for which to disable norm is invalid." \
- "It should be lower or equal to the actual number of blocks in the network and greater or equal to 0."
-
- act = getattr(nn, activation)
- mult = int(2 ** len(self.ratios))
- model: tp.List[nn.Module] = [
- StreamableConv1d(dimension, mult * n_filters, kernel_size,
- norm='none' if self.disable_norm_outer_blocks == self.n_blocks else norm,
- norm_kwargs=norm_params, causal=causal, pad_mode=pad_mode)
- ]
-
- if lstm:
- model += [StreamableLSTM(mult * n_filters, num_layers=lstm)]
-
- # Upsample to raw audio scale
- for i, ratio in enumerate(self.ratios):
- block_norm = 'none' if self.disable_norm_outer_blocks >= self.n_blocks - (i + 1) else norm
- # Add upsampling layers
- model += [
- act(**activation_params),
- StreamableConvTranspose1d(mult * n_filters, mult * n_filters // 2,
- kernel_size=ratio * 2, stride=ratio,
- norm=block_norm, norm_kwargs=norm_params,
- causal=causal, trim_right_ratio=trim_right_ratio),
- ]
- # Add residual layers
- for j in range(n_residual_layers):
- model += [
- SEANetResnetBlock(mult * n_filters // 2, kernel_sizes=[residual_kernel_size, 1],
- dilations=[dilation_base ** j, 1],
- activation=activation, activation_params=activation_params,
- norm=block_norm, norm_params=norm_params, causal=causal,
- pad_mode=pad_mode, compress=compress, true_skip=true_skip)]
-
- mult //= 2
-
- # Add final layers
- model += [
- act(**activation_params),
- StreamableConv1d(n_filters, channels, last_kernel_size,
- norm='none' if self.disable_norm_outer_blocks >= 1 else norm,
- norm_kwargs=norm_params, causal=causal, pad_mode=pad_mode)
- ]
- # Add optional final activation to decoder (eg. tanh)
- if final_activation is not None:
- final_act = getattr(nn, final_activation)
- final_activation_params = final_activation_params or {}
- model += [
- final_act(**final_activation_params)
- ]
- self.model = nn.Sequential(*model)
-
- def forward(self, z):
- y = self.model(z)
- return y
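A shape sanity check for the encoder/decoder pair above (a sketch; assumes both classes are importable from this module). With the default ratios `[8, 5, 4, 2]`, `hop_length = 8*5*4*2 = 320`, so one second of 32 kHz audio maps to 100 latent frames:

```python
import torch

encoder = SEANetEncoder(channels=1, dimension=128, ratios=[8, 5, 4, 2])
decoder = SEANetDecoder(channels=1, dimension=128, ratios=[8, 5, 4, 2])

x = torch.randn(2, 1, 32000)   # [B, C, T]: one second at 32 kHz
z = encoder(x)                 # [2, 128, 100] since 32000 / 320 = 100
y = decoder(z)                 # back to roughly [2, 1, 32000]
print(z.shape, y.shape)
```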
diff --git a/spaces/GroveStreet/GTA_SOVITS/vencoder/ContentVec256L9_Onnx.py b/spaces/GroveStreet/GTA_SOVITS/vencoder/ContentVec256L9_Onnx.py
deleted file mode 100644
index fae2b928252801795b038f51451b234e007f6f03..0000000000000000000000000000000000000000
--- a/spaces/GroveStreet/GTA_SOVITS/vencoder/ContentVec256L9_Onnx.py
+++ /dev/null
@@ -1,28 +0,0 @@
-from vencoder.encoder import SpeechEncoder
-import onnxruntime
-import torch
-
-class ContentVec256L9_Onnx(SpeechEncoder):
-    def __init__(self, vec_path="pretrain/vec-256-layer-9.onnx", device=None):
-        print("load model(s) from {}".format(vec_path))
-        self.hidden_dim = 256
-        if device is None:
-            self.dev = torch.device("cpu")
-        else:
-            self.dev = torch.device(device)
-        if device == 'cuda' or device == torch.device("cuda"):
-            providers = ['CUDAExecutionProvider', 'CPUExecutionProvider']
-        else:
-            # fall back to CPU execution for 'cpu', None, or any other device
-            providers = ['CPUExecutionProvider']
-        self.model = onnxruntime.InferenceSession(vec_path, providers=providers)
-
- def encoder(self, wav):
- feats = wav
- if feats.dim() == 2: # double channels
- feats = feats.mean(-1)
- assert feats.dim() == 1, feats.dim()
- feats = feats.view(1, -1)
- feats = feats.unsqueeze(0).cpu().detach().numpy()
- onnx_input = {self.model.get_inputs()[0].name: feats}
- logits = self.model.run(None, onnx_input)
- return torch.tensor(logits[0]).transpose(1, 2).to(self.dev)
\ No newline at end of file
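A usage sketch for the wrapper above (requires the ONNX checkpoint at the given path; the waveform is a random stand-in for real 16 kHz mono audio):

```python
import torch

encoder = ContentVec256L9_Onnx(vec_path="pretrain/vec-256-layer-9.onnx", device="cpu")
wav = torch.randn(16000)        # placeholder: one second of mono audio
feats = encoder.encoder(wav)    # [1, 256, n_frames]
print(feats.shape)
```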
diff --git a/spaces/HarryLee/eCommerceImageCaptioning/fairseq/examples/latent_depth/latent_depth_src/loss/__init__.py b/spaces/HarryLee/eCommerceImageCaptioning/fairseq/examples/latent_depth/latent_depth_src/loss/__init__.py
deleted file mode 100644
index e69de29bb2d1d6434b8b29ae775ad8c2e48c5391..0000000000000000000000000000000000000000
diff --git a/spaces/HarryLee/eCommerceImageCaptioning/fairseq/examples/textless_nlp/gslm/metrics/asr_metrics/self_auto_bleu.py b/spaces/HarryLee/eCommerceImageCaptioning/fairseq/examples/textless_nlp/gslm/metrics/asr_metrics/self_auto_bleu.py
deleted file mode 100644
index 062bb82f669f63a537b6ee8df4d42d292eb2575e..0000000000000000000000000000000000000000
--- a/spaces/HarryLee/eCommerceImageCaptioning/fairseq/examples/textless_nlp/gslm/metrics/asr_metrics/self_auto_bleu.py
+++ /dev/null
@@ -1,201 +0,0 @@
-# Copyright (c) Facebook, Inc. and its affiliates.
-#
-# This source code is licensed under the MIT license found in the
-# LICENSE file in the root directory of this source tree.
-
-import numpy as np
-import nltk
-from misc.bleu_utils import sentence_bleu
-import warnings
-
-
-def get_target_sequences(manifest, ground_truth, to_take=1000):
- import json
- import pathlib
-
- with open(ground_truth, 'r') as fin:
- original_continuations = json.loads(fin.read())
-
- sequence2length = [(k, v[0]) for k, v in original_continuations.items()]
- assert all(float(v) >= 6.0 for (_, v) in sequence2length) # 6 seconds
-
- sequence2length.sort(key=lambda x: x[1])
- to_take_sequences = set(v[0] for v in sequence2length[:to_take])
- to_take_ids = []
-
- with open(manifest, 'r') as f:
- f.readline()
-
- for i, line in enumerate(f.readlines()):
- seq_id = line.split()[0]
- seq_id = pathlib.Path(seq_id).name.split('__')[0]
-
- if seq_id in to_take_sequences:
- to_take_ids.append(i)
-
- print(f'Took {len(to_take_ids)} ids')
- return set(to_take_ids)
-
-
-def get_args():
- import argparse
-
- parser = argparse.ArgumentParser()
- parser.add_argument('--asr-transcript', type=str,
- help='Path to the transcript file.')
-
- parser.add_argument('--manifest', required=True)
- parser.add_argument('--prompts-description', required=True)
-
- parser.add_argument('--cut-id', action='store_true',
- help='Whether cut the first token (typically a seq id)')
- parser.add_argument('--cut-tail', action='store_true',
- help='Whether cut the last token (typically a speaker id)')
- parser.add_argument('--debug', action='store_true')
-
- args = parser.parse_args()
-
- return args
-
-
-def get_self_bleu(utterances, averaging_mode, weights):
- self_bleu = []
-
- for i in range(len(utterances)):
- hypo = utterances[i]
- rest = utterances[:i] + utterances[i+1:]
-
- self_bleu.append(sentence_bleu(rest, hypo, weights,
- no_length_penalty=True, averaging_mode=averaging_mode))
-
- return self_bleu
-
-
-def get_self_bleu2_arithmetic(utterances):
- weights = (0.5, 0.5) # equal weight for unigrams and bigrams
- return get_self_bleu(utterances, averaging_mode='arithmetic', weights=weights)
-
-
-def get_self_bleu2_geometric(utterances):
- weights = (0.5, 0.5)
- return get_self_bleu(utterances, averaging_mode='geometric', weights=weights)
-
-
-def get_auto_bleu2_arithmetic(utterances):
- weights = (0.5, 0.5)
- return [auto_bleu(u, mean_mode='arithmetic', weights=weights) for u in utterances]
-
-
-def get_auto_bleu2_geometric(utterances):
- weights = (0.5, 0.5)
- return [auto_bleu(u, mean_mode='geometric', weights=weights) for u in utterances]
-
-
-def get_auto_bleu3_geometric(utterances):
- weights = (1./3, 1./3, 1./3)
- return [auto_bleu(u, mean_mode='geometric', weights=weights) for u in utterances]
-
-
-def get_auto_bleu3_arithmetic(utterances):
- weights = (1./3, 1./3, 1./3)
- return [auto_bleu(u, mean_mode='arithmetic', weights=weights) for u in utterances]
-
-
-def get_self_bleu3_arithmetic(utterances):
- weights = (1./3, 1./3, 1./3)
- return get_self_bleu(utterances, averaging_mode='arithmetic', weights=weights)
-
-
-def get_self_bleu3_geometric(utterances):
- weights = (1./3, 1./3, 1./3)
- return get_self_bleu(utterances, averaging_mode='geometric', weights=weights)
-
-
-def auto_bleu(sentence, weights, mean_mode='arithmetic'):
- if len(sentence) <= 1:
- return 0
-
- N = len(weights)
-
- bleu_n = np.zeros([N])
- for n in range(N):
- targ_ngrams = list(nltk.ngrams(sentence, n+1))
- for p in range(len(targ_ngrams)):
- left = sentence[:p]
- right = sentence[(p+n+1):]
- rest_ngrams = list(nltk.ngrams(left, n+1)) + \
- list(nltk.ngrams(right, n+1))
- # compute the nb of matching ngrams
- bleu_n[n] += targ_ngrams[p] in rest_ngrams
- bleu_n[n] /= len(targ_ngrams) # average them to get a proportion
-
- weights = np.array(weights)
- if mean_mode == 'arithmetic':
- return (bleu_n * weights).sum()
- elif mean_mode == 'geometric':
- return (bleu_n ** weights).prod()
- else:
-        raise ValueError(f'Unknown aggregation mode {mean_mode}')
-
-
-def main():
- from multiprocessing import Pool
-
- args = get_args()
- target_ids = get_target_sequences(args.manifest, args.prompts_description)
-
- with open(args.asr_transcript, 'r') as fin:
- lines = fin.readlines()
-
- terms = [x.strip().split() for x in lines]
- filtered = []
- for term in terms:
- line_id = int(term[-1].split('-')[1][:-1])
- if line_id in target_ids:
- filtered.append(term)
- terms = filtered
-
- if args.cut_id:
- terms = [x[1:] for x in terms]
- if args.cut_tail:
- terms = [x[:-1] for x in terms]
-
- if args.debug:
- terms = terms[:10]
-
- tasks = [
- ('Self-BLEU2-arithmetic', get_self_bleu2_arithmetic),
- ('Self-BLEU2-geometric', get_self_bleu2_geometric),
- ('Auto-BLEU2-arithmetic', get_auto_bleu2_arithmetic),
- ('Auto-BLEU2-geometric', get_auto_bleu2_geometric),
-
- ('Self-BLEU3-arithmetic', get_self_bleu3_arithmetic),
- ('Self-BLEU3-geometric', get_self_bleu3_geometric),
- ('Auto-BLEU3-arithmetic', get_auto_bleu3_arithmetic),
- ('Auto-BLEU3-geometric', get_auto_bleu3_geometric),
- ]
-
- n_processes = min(16, len(tasks))
- with Pool(n_processes) as pool:
- metrics = pool.map(run_f, [(t[1], terms) for t in tasks])
-
- for (metric_name, _), metric in zip(tasks, metrics):
- metric, sem = np.mean(metric), np.std(metric) / np.sqrt(len(metric))
-
- metric, sem = [
- round(100 * x, 2) for x in [metric, sem]
- ]
-
- print(f'{metric_name} {metric} +- {sem}')
-
-
-def run_f(task_params):
- f, terms = task_params
- return f(terms)
-
-
-if __name__ == '__main__':
- # NLTK produces warnings
- warnings.filterwarnings("ignore")
-
- main()
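A toy worked example for `auto_bleu` above, which measures how often a sentence's n-grams repeat within the sentence itself (no ASR files needed):

```python
sentence = 'the cat sat on the mat'.split()

# unigrams: 'the' repeats, so 2 of 6 match elsewhere; no bigram repeats (0 of 5)
print(auto_bleu(sentence, weights=(0.5, 0.5), mean_mode='arithmetic'))  # 0.5*(2/6) + 0.5*0 ~ 0.167
print(auto_bleu(sentence, weights=(0.5, 0.5), mean_mode='geometric'))   # (2/6)**0.5 * 0**0.5 = 0.0
```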
diff --git a/spaces/HarryLee/eCommerceImageCaptioning/fairseq/examples/textless_nlp/gslm/unit2speech/tacotron2/numbers.py b/spaces/HarryLee/eCommerceImageCaptioning/fairseq/examples/textless_nlp/gslm/unit2speech/tacotron2/numbers.py
deleted file mode 100644
index 0d5f7fa818a45ecf132627d240afac653e148070..0000000000000000000000000000000000000000
--- a/spaces/HarryLee/eCommerceImageCaptioning/fairseq/examples/textless_nlp/gslm/unit2speech/tacotron2/numbers.py
+++ /dev/null
@@ -1,71 +0,0 @@
-""" from https://github.com/keithito/tacotron """
-
-import inflect
-import re
-
-
-_inflect = inflect.engine()
-_comma_number_re = re.compile(r'([0-9][0-9\,]+[0-9])')
-_decimal_number_re = re.compile(r'([0-9]+\.[0-9]+)')
-_pounds_re = re.compile(r'£([0-9\,]*[0-9]+)')
-_dollars_re = re.compile(r'\$([0-9\.\,]*[0-9]+)')
-_ordinal_re = re.compile(r'[0-9]+(st|nd|rd|th)')
-_number_re = re.compile(r'[0-9]+')
-
-
-def _remove_commas(m):
- return m.group(1).replace(',', '')
-
-
-def _expand_decimal_point(m):
- return m.group(1).replace('.', ' point ')
-
-
-def _expand_dollars(m):
- match = m.group(1)
- parts = match.split('.')
- if len(parts) > 2:
- return match + ' dollars' # Unexpected format
- dollars = int(parts[0]) if parts[0] else 0
- cents = int(parts[1]) if len(parts) > 1 and parts[1] else 0
- if dollars and cents:
- dollar_unit = 'dollar' if dollars == 1 else 'dollars'
- cent_unit = 'cent' if cents == 1 else 'cents'
- return '%s %s, %s %s' % (dollars, dollar_unit, cents, cent_unit)
- elif dollars:
- dollar_unit = 'dollar' if dollars == 1 else 'dollars'
- return '%s %s' % (dollars, dollar_unit)
- elif cents:
- cent_unit = 'cent' if cents == 1 else 'cents'
- return '%s %s' % (cents, cent_unit)
- else:
- return 'zero dollars'
-
-
-def _expand_ordinal(m):
- return _inflect.number_to_words(m.group(0))
-
-
-def _expand_number(m):
- num = int(m.group(0))
- if num > 1000 and num < 3000:
- if num == 2000:
- return 'two thousand'
- elif num > 2000 and num < 2010:
- return 'two thousand ' + _inflect.number_to_words(num % 100)
- elif num % 100 == 0:
- return _inflect.number_to_words(num // 100) + ' hundred'
- else:
- return _inflect.number_to_words(num, andword='', zero='oh', group=2).replace(', ', ' ')
- else:
- return _inflect.number_to_words(num, andword='')
-
-
-def normalize_numbers(text):
- text = re.sub(_comma_number_re, _remove_commas, text)
- text = re.sub(_pounds_re, r'\1 pounds', text)
- text = re.sub(_dollars_re, _expand_dollars, text)
- text = re.sub(_decimal_number_re, _expand_decimal_point, text)
- text = re.sub(_ordinal_re, _expand_ordinal, text)
- text = re.sub(_number_re, _expand_number, text)
- return text
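A few worked examples for the pipeline above (requires the `inflect` package; outputs follow the rules as written, e.g. numbers between 1000 and 3000 are read year-style in two-digit groups):

```python
print(normalize_numbers('$3.50'))   # -> 'three dollars, fifty cents'
print(normalize_numbers('21st'))    # -> 'twenty-first'
print(normalize_numbers('1906'))    # -> 'nineteen oh six'
```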
diff --git a/spaces/Harsh502s/Anime-Recommender/Pages/Recommender App.py b/spaces/Harsh502s/Anime-Recommender/Pages/Recommender App.py
deleted file mode 100644
index 0582899de73e31c7bdbac4b67c73f15bd2dfdb33..0000000000000000000000000000000000000000
--- a/spaces/Harsh502s/Anime-Recommender/Pages/Recommender App.py
+++ /dev/null
@@ -1,532 +0,0 @@
-import streamlit as st
-import pandas as pd
-import pickle
-from ast import literal_eval
-import webbrowser
-
-
-# Importing the dataset
-@st.cache_data
-def load_data():
- try:
- anime_data = pd.read_csv(r"rec_data.csv")
- except:
- st.error("Dataset Not Found")
- return anime_data
-
-
-anime_data = load_data()
-
-
-def get_genres():
- genres = sorted(
- list(set([j for i in anime_data["genres"] for j in literal_eval(i)]))
- )
- genres.insert(0, "All Genres")
- genres.remove("NA")
- return genres
-
-
-# Loading the precomputed similarity matrix
-@st.cache_resource
-def load_model():
-    try:
-        similarity = pickle.load(open(r"similarity.pkl", "rb"))
-    except Exception:
-        st.error("Model Not Found")
-        st.stop()  # halt the app instead of returning an undefined name
-    return similarity
-
-
-similarity = load_model()
-
-
-# Fetching the poster and url of the anime
-def fetch_anime_url(anime_id):
- url = anime_data[anime_data["anime_id"] == anime_id].anime_url.values[0]
- return url
-
-
-def fetch_poster(anime_id):
- poster = anime_data[anime_data["anime_id"] == anime_id].poster.values[0]
- return poster
-
-
-# Recommender System
-def recommend(anime, genre=None):
-    if genre is None:
-        index = (
-            anime_data[anime_data["title"] == anime]
-            .sort_values("score", ascending=False)
-            .index[0]
-        )
-    else:
-        index = (
-            anime_data[
-                (anime_data["title"] == anime)
-                | (anime_data["genres"].str.contains(genre))
-            ]
-            .sort_values("score", ascending=False)
-            .index[0]
-        )
- distances = sorted(
- list(enumerate(similarity[index])), reverse=True, key=lambda x: x[1]
- )
-
- recommended_anime_names = []
- recommended_anime_posters = []
- recommended_anime_urls = []
-
- for i in distances[1:9]:
- # fetch the anime poster
- anime_id = anime_data.iloc[i[0]].anime_id
- recommended_anime_posters.append(fetch_poster(anime_id))
- recommended_anime_names.append(anime_data.iloc[i[0]].title)
- recommended_anime_urls.append(fetch_anime_url(anime_id))
-
- return recommended_anime_names, recommended_anime_posters, recommended_anime_urls
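-
-# Minimal usage sketch (hypothetical title; assumes rec_data.csv and
-# similarity.pkl are present):
-#   names, posters, urls = recommend("Naruto")
-#   names, posters, urls = recommend("Naruto", genre="Action")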
-
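
A minimal usage sketch for `recommend` (hypothetical title; assumes `rec_data.csv` and `similarity.pkl` are present): it returns three parallel lists of eight entries, skipping the query itself at distance rank 0.

```python
# Hypothetical call; any title present in the dataset works the same way.
names, posters, urls = recommend("Cowboy Bebop")
for name, url in zip(names, urls):
    print(name, "->", url)
```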
-
-# Function to display the top 8 animes with the highest rating
-def top_animes():
-    style_for_page = """
-
-    """
-    st.markdown(style_for_page, unsafe_allow_html=True)
-
-    top8 = anime_data.sort_values("score", ascending=False).head(8)
-
-    # Render the eight titles as two rows of four columns.
-    for start in (0, 4):
-        with st.container():
-            cols = st.columns(4)
-            for offset, col in enumerate(cols):
-                row = top8.iloc[start + offset]
-                with col:
-                    st.button(
-                        label=f"{row.title}",
-                        key=row.title,
-                        # bind the URL now; a bare closure would capture the loop variable
-                        on_click=lambda url=row.anime_url: webbrowser.open_new_tab(url),
-                        use_container_width=True,
-                    )
-                    st.image(row.poster, use_column_width=True)
-        if start == 0:
-            st.divider()
-
-
-# Function to display the top 8 animes for user given genre
-def top_animes_genres(genre_select):
-    style_for_page = """
-
-    """
-    st.markdown(style_for_page, unsafe_allow_html=True)
-
-    top_8_genre = anime_data[
-        anime_data["genres"].str.contains(genre_select)
-    ].sort_values("score", ascending=False)[:8]
-
-    # Same two-rows-of-four layout as top_animes().
-    for start in (0, 4):
-        cols = st.columns(4)
-        for offset, col in enumerate(cols):
-            row = top_8_genre.iloc[start + offset]
-            with col:
-                st.button(
-                    label=f"{row.title}",
-                    key=row.title,
-                    on_click=lambda url=row.anime_url: webbrowser.open_new_tab(url),
-                    use_container_width=True,
-                )
-                st.image(row.poster, use_column_width=True)
-        if start == 0:
-            st.divider()
-
-
-# Function to display the top 8 animes with user given anime name for all genres
-def top_animes_custom(anime_select):
-    style_for_page = """
-
-    """
-    st.markdown(style_for_page, unsafe_allow_html=True)
-
-    (
-        recommended_anime_names,
-        recommended_anime_posters,
-        recommended_anime_urls,
-    ) = recommend(anime_select)
-
-    # Two rows of four recommendations each.
-    for start in (0, 4):
-        with st.container():
-            cols = st.columns(4)
-            for offset, col in enumerate(cols):
-                i = start + offset
-                with col:
-                    st.button(
-                        label=f"{recommended_anime_names[i]}",
-                        key=recommended_anime_names[i],
-                        on_click=lambda url=recommended_anime_urls[i]: webbrowser.open_new_tab(url),
-                        use_container_width=True,
-                    )
-                    st.image(recommended_anime_posters[i], use_column_width=True)
-        if start == 0:
-            st.divider()
-
-
-# Function to display the top 8 animes with user given anime name and genre
-def top_animes_custom_genres(anime_select, genre_select):
-    style_for_page = """
-
-    """
-    st.markdown(style_for_page, unsafe_allow_html=True)
-
-    (
-        recommended_anime_names,
-        recommended_anime_posters,
-        recommended_anime_urls,
-    ) = recommend(anime_select, genre_select)
-
-    # Identical layout to top_animes_custom(), filtered by genre.
-    for start in (0, 4):
-        with st.container():
-            cols = st.columns(4)
-            for offset, col in enumerate(cols):
-                i = start + offset
-                with col:
-                    st.button(
-                        label=f"{recommended_anime_names[i]}",
-                        key=recommended_anime_names[i],
-                        on_click=lambda url=recommended_anime_urls[i]: webbrowser.open_new_tab(url),
-                        use_container_width=True,
-                    )
-                    st.image(recommended_anime_posters[i], use_column_width=True)
-        if start == 0:
-            st.divider()
-
-
-# Recommender Page
-def recommender_page():
- style_for_page = """
-
- """
- st.markdown(style_for_page, unsafe_allow_html=True)
-
- st.title("Anime Recommendation System :ninja:")
-
- anime_list = anime_data["title"].tolist()
- anime_list.sort()
- anime_list.insert(0, "Top 8 Animes")
- anime_select = st.selectbox("Select an Anime", anime_list, key="anime_select")
- genre_select = st.selectbox("Select a Genre", get_genres(), key="genre_select")
-
-    if st.button("Recommendation"):
-        st.divider()
-        if anime_select == "Top 8 Animes" and genre_select == "All Genres":
-            top_animes()
-        elif anime_select == "Top 8 Animes":
-            top_animes_genres(genre_select)
-        elif genre_select == "All Genres":
-            top_animes_custom(anime_select)
-        else:
-            top_animes_custom_genres(anime_select, genre_select)
-        st.divider()
-
-
-if __name__ == "__main__":
- recommender_page()
diff --git a/spaces/Harveenchadha/Vakyansh-Malayalam-TTS/ttsv/src/glow_tts/modules.py b/spaces/Harveenchadha/Vakyansh-Malayalam-TTS/ttsv/src/glow_tts/modules.py
deleted file mode 100644
index a192251aaccb036780d77d6c8b538b652a5e24e2..0000000000000000000000000000000000000000
--- a/spaces/Harveenchadha/Vakyansh-Malayalam-TTS/ttsv/src/glow_tts/modules.py
+++ /dev/null
@@ -1,276 +0,0 @@
-import copy
-import math
-import numpy as np
-import scipy
-import torch
-from torch import nn
-from torch.nn import functional as F
-
-import commons
-
-
-class LayerNorm(nn.Module):
- def __init__(self, channels, eps=1e-4):
- super().__init__()
- self.channels = channels
- self.eps = eps
-
- self.gamma = nn.Parameter(torch.ones(channels))
- self.beta = nn.Parameter(torch.zeros(channels))
-
- def forward(self, x):
- n_dims = len(x.shape)
- mean = torch.mean(x, 1, keepdim=True)
- variance = torch.mean((x - mean) ** 2, 1, keepdim=True)
-
- x = (x - mean) * torch.rsqrt(variance + self.eps)
-
- shape = [1, -1] + [1] * (n_dims - 2)
- x = x * self.gamma.view(*shape) + self.beta.view(*shape)
- return x
-
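Note that, unlike `torch.nn.LayerNorm`, this module normalizes over the channel axis (dim 1) of a `[batch, channels, time]` tensor. A small shape check (a sketch, not part of the original file):

```python
import torch

ln = LayerNorm(channels=80)
x = torch.randn(4, 80, 100)   # [batch, channels, time]
y = ln(x)                     # normalized across channels at each time step
assert y.shape == (4, 80, 100)
```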
-
-class ConvReluNorm(nn.Module):
- def __init__(
- self,
- in_channels,
- hidden_channels,
- out_channels,
- kernel_size,
- n_layers,
- p_dropout,
- ):
- super().__init__()
- self.in_channels = in_channels
- self.hidden_channels = hidden_channels
- self.out_channels = out_channels
- self.kernel_size = kernel_size
- self.n_layers = n_layers
- self.p_dropout = p_dropout
-        assert n_layers > 1, "Number of layers should be larger than 1."
-
- self.conv_layers = nn.ModuleList()
- self.norm_layers = nn.ModuleList()
- self.conv_layers.append(
- nn.Conv1d(
- in_channels, hidden_channels, kernel_size, padding=kernel_size // 2
- )
- )
- self.norm_layers.append(LayerNorm(hidden_channels))
- self.relu_drop = nn.Sequential(nn.ReLU(), nn.Dropout(p_dropout))
- for _ in range(n_layers - 1):
- self.conv_layers.append(
- nn.Conv1d(
- hidden_channels,
- hidden_channels,
- kernel_size,
- padding=kernel_size // 2,
- )
- )
- self.norm_layers.append(LayerNorm(hidden_channels))
- self.proj = nn.Conv1d(hidden_channels, out_channels, 1)
- self.proj.weight.data.zero_()
- self.proj.bias.data.zero_()
-
- def forward(self, x, x_mask):
- x_org = x
- for i in range(self.n_layers):
- x = self.conv_layers[i](x * x_mask)
- x = self.norm_layers[i](x)
- x = self.relu_drop(x)
- x = x_org + self.proj(x)
- return x * x_mask
-
-
-class WN(torch.nn.Module):
- def __init__(
- self,
- in_channels,
- hidden_channels,
- kernel_size,
- dilation_rate,
- n_layers,
- gin_channels=0,
- p_dropout=0,
- ):
- super(WN, self).__init__()
- assert kernel_size % 2 == 1
- assert hidden_channels % 2 == 0
- self.in_channels = in_channels
- self.hidden_channels = hidden_channels
-        self.kernel_size = kernel_size
- self.dilation_rate = dilation_rate
- self.n_layers = n_layers
- self.gin_channels = gin_channels
- self.p_dropout = p_dropout
-
- self.in_layers = torch.nn.ModuleList()
- self.res_skip_layers = torch.nn.ModuleList()
- self.drop = nn.Dropout(p_dropout)
-
- if gin_channels != 0:
- cond_layer = torch.nn.Conv1d(
- gin_channels, 2 * hidden_channels * n_layers, 1
- )
- self.cond_layer = torch.nn.utils.weight_norm(cond_layer, name="weight")
-
- for i in range(n_layers):
- dilation = dilation_rate ** i
- padding = int((kernel_size * dilation - dilation) / 2)
- in_layer = torch.nn.Conv1d(
- hidden_channels,
- 2 * hidden_channels,
- kernel_size,
- dilation=dilation,
- padding=padding,
- )
- in_layer = torch.nn.utils.weight_norm(in_layer, name="weight")
- self.in_layers.append(in_layer)
-
- # last one is not necessary
- if i < n_layers - 1:
- res_skip_channels = 2 * hidden_channels
- else:
- res_skip_channels = hidden_channels
-
- res_skip_layer = torch.nn.Conv1d(hidden_channels, res_skip_channels, 1)
- res_skip_layer = torch.nn.utils.weight_norm(res_skip_layer, name="weight")
- self.res_skip_layers.append(res_skip_layer)
-
- def forward(self, x, x_mask=None, g=None, **kwargs):
- output = torch.zeros_like(x)
- n_channels_tensor = torch.IntTensor([self.hidden_channels])
-
- if g is not None:
- g = self.cond_layer(g)
-
- for i in range(self.n_layers):
- x_in = self.in_layers[i](x)
- x_in = self.drop(x_in)
- if g is not None:
- cond_offset = i * 2 * self.hidden_channels
- g_l = g[:, cond_offset : cond_offset + 2 * self.hidden_channels, :]
- else:
- g_l = torch.zeros_like(x_in)
-
- acts = commons.fused_add_tanh_sigmoid_multiply(x_in, g_l, n_channels_tensor)
-
- res_skip_acts = self.res_skip_layers[i](acts)
- if i < self.n_layers - 1:
- x = (x + res_skip_acts[:, : self.hidden_channels, :]) * x_mask
- output = output + res_skip_acts[:, self.hidden_channels :, :]
- else:
- output = output + res_skip_acts
- return output * x_mask
-
- def remove_weight_norm(self):
- if self.gin_channels != 0:
- torch.nn.utils.remove_weight_norm(self.cond_layer)
- for l in self.in_layers:
- torch.nn.utils.remove_weight_norm(l)
- for l in self.res_skip_layers:
- torch.nn.utils.remove_weight_norm(l)
-
-
-class ActNorm(nn.Module):
- def __init__(self, channels, ddi=False, **kwargs):
- super().__init__()
- self.channels = channels
- self.initialized = not ddi
-
- self.logs = nn.Parameter(torch.zeros(1, channels, 1))
- self.bias = nn.Parameter(torch.zeros(1, channels, 1))
-
- def forward(self, x, x_mask=None, reverse=False, **kwargs):
- if x_mask is None:
- x_mask = torch.ones(x.size(0), 1, x.size(2)).to(
- device=x.device, dtype=x.dtype
- )
- x_len = torch.sum(x_mask, [1, 2])
- if not self.initialized:
- self.initialize(x, x_mask)
- self.initialized = True
-
- if reverse:
- z = (x - self.bias) * torch.exp(-self.logs) * x_mask
- logdet = None
- else:
- z = (self.bias + torch.exp(self.logs) * x) * x_mask
- logdet = torch.sum(self.logs) * x_len # [b]
-
- return z, logdet
-
- def store_inverse(self):
- pass
-
- def set_ddi(self, ddi):
- self.initialized = not ddi
-
- def initialize(self, x, x_mask):
- with torch.no_grad():
- denom = torch.sum(x_mask, [0, 2])
- m = torch.sum(x * x_mask, [0, 2]) / denom
- m_sq = torch.sum(x * x * x_mask, [0, 2]) / denom
- v = m_sq - (m ** 2)
- logs = 0.5 * torch.log(torch.clamp_min(v, 1e-6))
-
- bias_init = (
- (-m * torch.exp(-logs)).view(*self.bias.shape).to(dtype=self.bias.dtype)
- )
- logs_init = (-logs).view(*self.logs.shape).to(dtype=self.logs.dtype)
-
- self.bias.data.copy_(bias_init)
- self.logs.data.copy_(logs_init)
-
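With `ddi=True`, the first forward pass performs data-dependent initialization: `bias` and `logs` are set from batch statistics so the per-channel output starts out roughly zero-mean and unit-variance. A minimal sketch, assuming this module's API as defined above:

```python
import torch

layer = ActNorm(channels=4, ddi=True)
x = torch.randn(2, 4, 8)
z, logdet = layer(x)                          # first call triggers initialize()
print(z.mean(dim=(0, 2)), z.std(dim=(0, 2)))  # ~0 and ~1 per channel
```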
-
-class InvConvNear(nn.Module):
- def __init__(self, channels, n_split=4, no_jacobian=False, **kwargs):
- super().__init__()
- assert n_split % 2 == 0
- self.channels = channels
- self.n_split = n_split
- self.no_jacobian = no_jacobian
-
- w_init = torch.qr(torch.FloatTensor(self.n_split, self.n_split).normal_())[0]
- if torch.det(w_init) < 0:
- w_init[:, 0] = -1 * w_init[:, 0]
- self.weight = nn.Parameter(w_init)
-
- def forward(self, x, x_mask=None, reverse=False, **kwargs):
- b, c, t = x.size()
- assert c % self.n_split == 0
- if x_mask is None:
- x_mask = 1
- x_len = torch.ones((b,), dtype=x.dtype, device=x.device) * t
- else:
- x_len = torch.sum(x_mask, [1, 2])
-
- x = x.view(b, 2, c // self.n_split, self.n_split // 2, t)
- x = (
- x.permute(0, 1, 3, 2, 4)
- .contiguous()
- .view(b, self.n_split, c // self.n_split, t)
- )
-
- if reverse:
- if hasattr(self, "weight_inv"):
- weight = self.weight_inv
- else:
- weight = torch.inverse(self.weight.float()).to(dtype=self.weight.dtype)
- logdet = None
- else:
- weight = self.weight
- if self.no_jacobian:
- logdet = 0
- else:
- logdet = torch.logdet(self.weight) * (c / self.n_split) * x_len # [b]
-
- weight = weight.view(self.n_split, self.n_split, 1, 1)
- z = F.conv2d(x, weight)
-
- z = z.view(b, 2, self.n_split // 2, c // self.n_split, t)
- z = z.permute(0, 1, 3, 2, 4).contiguous().view(b, c, t) * x_mask
- return z, logdet
-
- def store_inverse(self):
- self.weight_inv = torch.inverse(self.weight.float()).to(dtype=self.weight.dtype)
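
Because the 1x1 convolution is invertible, a forward pass followed by a reverse pass should reproduce the input up to floating-point error. A quick round-trip sketch:

```python
import torch

conv = InvConvNear(channels=8, n_split=4)
x = torch.randn(2, 8, 16)
z, logdet = conv(x)
x_rec, _ = conv(z, reverse=True)
assert torch.allclose(x, x_rec, atol=1e-4)
```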
diff --git a/spaces/Hila/RobustViT/app.py b/spaces/Hila/RobustViT/app.py
deleted file mode 100644
index ddb0ea6e42736d06def4d2e21353dbb6c857321b..0000000000000000000000000000000000000000
--- a/spaces/Hila/RobustViT/app.py
+++ /dev/null
@@ -1,175 +0,0 @@
-import torch
-import timm
-import gradio as gr
-from huggingface_hub import hf_hub_download
-import os
-from ViT.ViT_new import vit_base_patch16_224 as vit
-import torchvision.transforms as transforms
-import requests
-from PIL import Image
-import numpy as np
-import cv2
-import pathlib
-
-
-# create heatmap from mask on image
-def show_cam_on_image(img, mask):
- heatmap = cv2.applyColorMap(np.uint8(255 * mask), cv2.COLORMAP_JET)
- heatmap = np.float32(heatmap) / 255
- cam = heatmap + np.float32(img)
- cam = cam / np.max(cam)
- return cam
-
-start_layer = 0
-
-# rule 5 from paper
-def avg_heads(cam, grad):
- cam = cam.reshape(-1, cam.shape[-2], cam.shape[-1])
- grad = grad.reshape(-1, grad.shape[-2], grad.shape[-1])
- cam = grad * cam
- cam = cam.clamp(min=0).mean(dim=0)
- return cam
-
-# rule 6 from paper
-def apply_self_attention_rules(R_ss, cam_ss):
- R_ss_addition = torch.matmul(cam_ss, R_ss)
- return R_ss_addition
-
-def generate_relevance(model, input, index=None):
- output = model(input, register_hook=True)
- if index == None:
- index = np.argmax(output.cpu().data.numpy(), axis=-1)
-
- one_hot = np.zeros((1, output.size()[-1]), dtype=np.float32)
- one_hot[0, index] = 1
- one_hot_vector = one_hot
- one_hot = torch.from_numpy(one_hot).requires_grad_(True)
- one_hot = torch.sum(one_hot * output)
- model.zero_grad()
- one_hot.backward(retain_graph=True)
-
- num_tokens = model.blocks[0].attn.get_attention_map().shape[-1]
- R = torch.eye(num_tokens, num_tokens)
- for i,blk in enumerate(model.blocks):
- if i < start_layer:
- continue
- grad = blk.attn.get_attn_gradients()
- cam = blk.attn.get_attention_map()
- cam = avg_heads(cam, grad)
- R += apply_self_attention_rules(R, cam)
- return R[0, 1:]
-
-def generate_visualization(model, original_image, class_index=None):
- with torch.enable_grad():
- transformer_attribution = generate_relevance(model, original_image.unsqueeze(0), index=class_index).detach()
- transformer_attribution = transformer_attribution.reshape(1, 1, 14, 14)
- transformer_attribution = torch.nn.functional.interpolate(transformer_attribution, scale_factor=16, mode='bilinear')
- transformer_attribution = transformer_attribution.reshape(224, 224).data.cpu().numpy()
- transformer_attribution = (transformer_attribution - transformer_attribution.min()) / (transformer_attribution.max() - transformer_attribution.min())
-
- image_transformer_attribution = original_image.permute(1, 2, 0).data.cpu().numpy()
- image_transformer_attribution = (image_transformer_attribution - image_transformer_attribution.min()) / (image_transformer_attribution.max() - image_transformer_attribution.min())
- vis = show_cam_on_image(image_transformer_attribution, transformer_attribution)
- vis = np.uint8(255 * vis)
- vis = cv2.cvtColor(np.array(vis), cv2.COLOR_RGB2BGR)
- return vis
-
-model_finetuned = None
-model = None
-
-normalize = transforms.Normalize(mean=[0.5, 0.5, 0.5], std=[0.5, 0.5, 0.5])
-transform_224 = transforms.Compose([
- transforms.ToTensor(),
- normalize,
-])
-
-# Download human-readable labels for ImageNet.
-response = requests.get("https://git.io/JJkYN")
-labels = response.text.split("\n")
-
-def image_classifier(inp):
- image = transform_224(inp)
-    # image is a [3, 224, 224] tensor after transform_224
- with torch.no_grad():
- prediction = torch.nn.functional.softmax(model_finetuned(image.unsqueeze(0))[0], dim=0)
- confidences = {labels[i]: float(prediction[i]) for i in range(1000)}
- heatmap = generate_visualization(model_finetuned, image)
-
- prediction_orig = torch.nn.functional.softmax(model(image.unsqueeze(0))[0], dim=0)
- confidences_orig = {labels[i]: float(prediction_orig[i]) for i in range(1000)}
- heatmap_orig = generate_visualization(model, image)
- return confidences, heatmap, confidences_orig, heatmap_orig
-
-def _load_model(model_name: str):
- global model_finetuned, model
- path = hf_hub_download('Hila/RobustViT',
- f'{model_name}')
-
- model = vit(pretrained=True)
- model.eval()
- model_finetuned = vit()
- checkpoint = torch.load(path, map_location='cpu')
- model_finetuned.load_state_dict(checkpoint['state_dict'])
- model_finetuned.eval()
-
-_load_model('ar_base.tar')
-
-def _set_example_image(example: list) -> dict:
- return gr.Image.update(value=example[0])
-
-def _clear_image():
- return None
-
-demo = gr.Blocks(css='style.css')
-
-with demo:
-
-
- with gr.Row():
- with gr.Column():
- gr.Markdown('## [Optimizing Relevance Maps of Vision Transformers Improves Robustness](https://github.com/hila-chefer/RobustViT) - Official Demo')
- # gr.Markdown('This is an official demo for [Optimizing Relevance Maps of Vision Transformers Improves Robustness](https://github.com/hila-chefer/RobustViT).')
- gr.Markdown('Select or upload an image and then click **Submit** to see the output.')
- with gr.Row():
- input_image = gr.Image(shape=(224,224))
- with gr.Row():
- btn = gr.Button("Submit", variant="primary")
- clear_btn = gr.Button('Clear')
- with gr.Column():
- gr.Markdown('### Examples')
- gr.Markdown('#### Corrected Prediction')
- with gr.Row():
- paths = sorted(pathlib.Path('samples/corrected').rglob('*.png'))
- corrected_pred_examples = gr.Dataset(components=[input_image], headers=['header'],
- samples=[[path.as_posix()] for path in paths])
-
- gr.Markdown('#### Improved Explainability')
- with gr.Row():
- paths = sorted(pathlib.Path('samples/better_expl').rglob('*.png'))
- better_expl = gr.Dataset(components=[input_image], headers=['header'],
- samples=[[path.as_posix()] for path in paths])
-
-
- #gr.Markdown('### Results:')
-
- with gr.Row():
- with gr.Column():
- gr.Markdown('### Ours (finetuned model)')
- out1 = gr.outputs.Label(label="Our Classification", num_top_classes=3)
- out2 = gr.Image(label="Our Relevance",shape=(224,224), elem_id="expl1")
-
- with gr.Column():
- gr.Markdown('### Original model')
- out3 = gr.outputs.Label(label="Original Classification", num_top_classes=3)
- out4 = gr.Image(label="Original Relevance",shape=(224,224),elem_id="expl2")
-
-
- corrected_pred_examples.click(fn=_set_example_image, inputs=corrected_pred_examples, outputs=input_image)
- better_expl.click(fn=_set_example_image, inputs=better_expl, outputs=input_image)
- btn.click(fn=image_classifier, inputs=input_image, outputs=[out1, out2, out3, out4])
- clear_btn.click(fn=_clear_image, inputs=[], outputs=[input_image])
-
-
-demo.launch()
-
\ No newline at end of file
diff --git a/spaces/ICML2022/OFA/fairseq/fairseq/models/distributed_fairseq_model.py b/spaces/ICML2022/OFA/fairseq/fairseq/models/distributed_fairseq_model.py
deleted file mode 100644
index 5eda2276404ca686be124901674ddfe36bd6dfd1..0000000000000000000000000000000000000000
--- a/spaces/ICML2022/OFA/fairseq/fairseq/models/distributed_fairseq_model.py
+++ /dev/null
@@ -1,146 +0,0 @@
-# Copyright (c) Facebook, Inc. and its affiliates.
-#
-# This source code is licensed under the MIT license found in the
-# LICENSE file in the root directory of this source tree.
-
-import logging
-import os
-import signal
-import threading
-
-import torch
-import torch.nn as nn
-from torch.nn.parallel import DistributedDataParallel
-
-from fairseq.distributed import (
- DistributedTimeoutWrapper,
- LegacyDistributedDataParallel,
- ModuleProxyWrapper,
- TPUDistributedDataParallel,
-)
-
-
-logger = logging.getLogger(__name__)
-
-
-_GOSSIP_DISABLED = False
-try:
- import gossip
-except ImportError:
- _GOSSIP_DISABLED = True
-
-
-def DistributedFairseqModel(args, model, process_group, device):
- """
- Wrap a *model* to support distributed data parallel training.
-
- This is similar to the built-in DistributedDataParallel, but allows
- additional configuration of the DistributedDataParallel class to
- use, and also provides easier access to the wrapped model by
- forwarding requests for missing attributes to the wrapped model.
-
- Args:
- args (argparse.Namespace): fairseq args
- model (BaseFairseqModel): model to wrap
- process_group: the c10d process group to be used for distributed data
- parallel all-reduction.
- device: device to move model to
- """
- assert isinstance(model, nn.Module)
- if args.tpu:
- wrapped_model = TPUDistributedDataParallel(
- module=model.to(device),
- process_group=process_group,
- )
- # forward missing getattr and state_dict/load_state_dict to orig model
- wrapped_model = ModuleProxyWrapper(wrapped_model)
- elif args.ddp_backend in {"c10d", "pytorch_ddp"}:
- wrapped_model = DistributedDataParallel(
- module=model.to(device),
- device_ids=[args.device_id],
- output_device=args.device_id,
- broadcast_buffers=args.broadcast_buffers,
- bucket_cap_mb=args.bucket_cap_mb,
- process_group=process_group,
- find_unused_parameters=args.find_unused_parameters,
- gradient_as_bucket_view=args.gradient_as_bucket_view,
- )
- if args.ddp_comm_hook == "fp16":
- logger.info("enable fp16 communication hook in DDP")
- try:
- from torch.distributed.algorithms.ddp_comm_hooks import (
- register_ddp_comm_hook,
- DDPCommHookType,
- )
-            except ImportError:
- logger.error(
- "Could not import from torch.distributed.algorithms.ddp_comm_hooks; you may need to update your pytorch version"
- )
- raise
-
- register_ddp_comm_hook(DDPCommHookType.FP16_COMPRESS, wrapped_model)
- # forward missing getattr and state_dict/load_state_dict to orig model
- wrapped_model = ModuleProxyWrapper(wrapped_model)
- elif args.ddp_backend in {"no_c10d", "legacy_ddp"}:
- wrapped_model = LegacyDistributedDataParallel(
- module=model.to(device),
- buffer_size=2 ** 28,
- process_group=process_group,
- )
- # forward missing getattr and state_dict/load_state_dict to orig model
- wrapped_model = ModuleProxyWrapper(wrapped_model)
- elif args.ddp_backend == "slow_mo":
- if _GOSSIP_DISABLED:
- raise ImportError(
- "Cannot find gossip library. Please install from: "
- "github.com/facebookresearch/stochastic_gradient_push"
- )
-
- # The values of slowmo_momentum below were obtained by tuning on the
- # En-De 16 dataset by training the transformer_wmt_en_de_large model
- if args.slowmo_momentum is None:
- if args.distributed_world_size <= 16:
- args.slowmo_momentum = 0.0
- elif args.distributed_world_size <= 32:
- args.slowmo_momentum = 0.2
- elif args.distributed_world_size <= 64:
- args.slowmo_momentum = 0.5
- else:
- args.slowmo_momentum = 0.6
-
- wrapped_model = gossip.GossipDataParallel(
- module=model.to(device),
- device_ids=[args.device_id],
- output_device=args.device_id,
- broadcast_buffers=args.broadcast_buffers,
- nprocs_per_node=args.nprocs_per_node,
- slowmo_momentum=args.slowmo_momentum,
- localsgd=(args.slowmo_algorithm == "LocalSGD"),
- localsgd_frequency=args.localsgd_frequency,
- )
- # forward missing getattr and state_dict/load_state_dict to orig model
- wrapped_model = ModuleProxyWrapper(wrapped_model)
- elif args.ddp_backend == "fully_sharded":
- try:
- from fairscale.nn.data_parallel import FullyShardedDataParallel as FSDP
- except ImportError:
- raise ImportError(
- "Cannot find FullyShardedDataParallel. "
- "Please install fairscale with: pip install fairscale"
- )
- assert isinstance(model, FSDP), "expected model to already be wrapped in FSDP"
- wrapped_model = model
- if args.memory_efficient_fp16:
- wrapped_model = wrapped_model.half()
- if not args.cpu_offload:
- wrapped_model = wrapped_model.to(device=device)
- else:
- raise ValueError("Unknown --ddp-backend: " + args.ddp_backend)
-
- # kill hung distributed jobs after a timeout
- if getattr(args, "heartbeat_timeout", -1) > 0:
- wrapped_model = DistributedTimeoutWrapper(
- wrapped_model, timeout=getattr(args, "heartbeat_timeout", -1)
- )
-
- return wrapped_model
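
A hedged usage sketch (all names below are placeholders, and it assumes `torch.distributed` is already initialized with a fairseq model in hand); the `args` fields mirror the branches above:

```python
import argparse
import torch
import torch.distributed as dist

args = argparse.Namespace(
    tpu=False, ddp_backend="pytorch_ddp", device_id=0,
    broadcast_buffers=False, bucket_cap_mb=25,
    find_unused_parameters=False, gradient_as_bucket_view=False,
    ddp_comm_hook="none", heartbeat_timeout=-1,
)
wrapped = DistributedFairseqModel(
    args, model,  # `model` is an existing BaseFairseqModel (assumed)
    process_group=dist.group.WORLD,
    device=torch.device("cuda", args.device_id),
)
```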
diff --git a/spaces/Ibtehaj10/cheating-detection-FYP/social_distancing.py b/spaces/Ibtehaj10/cheating-detection-FYP/social_distancing.py
deleted file mode 100644
index 5e3d78e8d57d90154165119168a2a91f8ab450e1..0000000000000000000000000000000000000000
--- a/spaces/Ibtehaj10/cheating-detection-FYP/social_distancing.py
+++ /dev/null
@@ -1,152 +0,0 @@
-import cv2
-import datetime
-import imutils
-import numpy as np
-from centroidtracker import CentroidTracker
-from itertools import combinations
-import math
-
-protopath = "MobileNetSSD_deploy.prototxt"
-modelpath = "MobileNetSSD_deploy.caffemodel"
-detector = cv2.dnn.readNetFromCaffe(prototxt=protopath, caffeModel=modelpath)
-# detector.setPreferableBackend(cv2.dnn.DNN_BACKEND_INFERENCE_ENGINE)
-# detector.setPreferableTarget(cv2.dnn.DNN_TARGET_CPU)
-
-
-CLASSES = ["background", "aeroplane", "bicycle", "bird", "boat",
- "bottle", "bus", "car", "cat", "chair", "cow", "diningtable",
- "dog", "horse", "motorbike", "person", "pottedplant", "sheep",
- "sofa", "train", "tvmonitor"]
-
-tracker = CentroidTracker(maxDisappeared=40, maxDistance=50)
-
-
-def non_max_suppression_fast(boxes, overlapThresh):
- try:
- if len(boxes) == 0:
- return []
-
- if boxes.dtype.kind == "i":
- boxes = boxes.astype("float")
-
- pick = []
-
- x1 = boxes[:, 0]
- y1 = boxes[:, 1]
- x2 = boxes[:, 2]
- y2 = boxes[:, 3]
-
- area = (x2 - x1 + 1) * (y2 - y1 + 1)
- idxs = np.argsort(y2)
-
- while len(idxs) > 0:
- last = len(idxs) - 1
- i = idxs[last]
- pick.append(i)
-
- xx1 = np.maximum(x1[i], x1[idxs[:last]])
- yy1 = np.maximum(y1[i], y1[idxs[:last]])
- xx2 = np.minimum(x2[i], x2[idxs[:last]])
- yy2 = np.minimum(y2[i], y2[idxs[:last]])
-
- w = np.maximum(0, xx2 - xx1 + 1)
- h = np.maximum(0, yy2 - yy1 + 1)
-
- overlap = (w * h) / area[idxs[:last]]
-
- idxs = np.delete(idxs, np.concatenate(([last],
- np.where(overlap > overlapThresh)[0])))
-
- return boxes[pick].astype("int")
- except Exception as e:
- print("Exception occurred in non_max_suppression : {}".format(e))
-
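A worked check of the suppression above (a sketch): with a 0.3 overlap threshold, the two heavily overlapping boxes collapse to one, while the distant box survives untouched.

```python
import numpy as np

boxes = np.array([[10, 10, 50, 50], [12, 12, 48, 48], [100, 100, 140, 140]])
kept = non_max_suppression_fast(boxes, overlapThresh=0.3)
print(kept)  # two boxes remain: the isolated one plus one of the overlapping pair
```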
-
-def main():
- cap = cv2.VideoCapture('testvideo2.mp4')
-
- fps_start_time = datetime.datetime.now()
- fps = 0
- total_frames = 0
-
- while True:
-        ret, frame = cap.read()
-        if not ret:  # end of stream
-            break
-        frame = imutils.resize(frame, width=600)
- total_frames = total_frames + 1
-
- (H, W) = frame.shape[:2]
-
- blob = cv2.dnn.blobFromImage(frame, 0.007843, (W, H), 127.5)
-
- detector.setInput(blob)
- person_detections = detector.forward()
- rects = []
- for i in np.arange(0, person_detections.shape[2]):
- confidence = person_detections[0, 0, i, 2]
- if confidence > 0.5:
- idx = int(person_detections[0, 0, i, 1])
-
- if CLASSES[idx] != "person":
- continue
-
- person_box = person_detections[0, 0, i, 3:7] * np.array([W, H, W, H])
- (startX, startY, endX, endY) = person_box.astype("int")
- rects.append(person_box)
-
- boundingboxes = np.array(rects)
- boundingboxes = boundingboxes.astype(int)
- rects = non_max_suppression_fast(boundingboxes, 0.3)
- centroid_dict = dict()
- objects = tracker.update(rects)
- for (objectId, bbox) in objects.items():
- x1, y1, x2, y2 = bbox
- x1 = int(x1)
- y1 = int(y1)
- x2 = int(x2)
- y2 = int(y2)
- cX = int((x1 + x2) / 2.0)
- cY = int((y1 + y2) / 2.0)
-
-
- centroid_dict[objectId] = (cX, cY, x1, y1, x2, y2)
-
- # text = "ID: {}".format(objectId)
- # cv2.putText(frame, text, (x1, y1-5), cv2.FONT_HERSHEY_COMPLEX_SMALL, 1, (0, 0, 255), 1)
-
- red_zone_list = []
- for (id1, p1), (id2, p2) in combinations(centroid_dict.items(), 2):
- dx, dy = p1[0] - p2[0], p1[1] - p2[1]
- distance = math.sqrt(dx * dx + dy * dy)
- if distance < 75.0:
- if id1 not in red_zone_list:
- red_zone_list.append(id1)
- if id2 not in red_zone_list:
- red_zone_list.append(id2)
-
- for id, box in centroid_dict.items():
- if id in red_zone_list:
- cv2.rectangle(frame, (box[2], box[3]), (box[4], box[5]), (0, 0, 255), 2)
- else:
- cv2.rectangle(frame, (box[2], box[3]), (box[4], box[5]), (0, 255, 0), 2)
-
-
- fps_end_time = datetime.datetime.now()
- time_diff = fps_end_time - fps_start_time
- if time_diff.seconds == 0:
- fps = 0.0
- else:
- fps = (total_frames / time_diff.seconds)
-
- fps_text = "FPS: {:.2f}".format(fps)
-
- cv2.putText(frame, fps_text, (5, 30), cv2.FONT_HERSHEY_COMPLEX_SMALL, 1, (0, 0, 255), 1)
-
- cv2.imshow("Application", frame)
- key = cv2.waitKey(1)
- if key == ord('q'):
- break
-
-    cap.release()
-    cv2.destroyAllWindows()
-
-
-if __name__ == "__main__":
-    main()
diff --git a/spaces/Iceclear/StableSR/StableSR/basicsr/ops/dcn/deform_conv.py b/spaces/Iceclear/StableSR/StableSR/basicsr/ops/dcn/deform_conv.py
deleted file mode 100644
index 6268ca825d59ef4a30d4d2156c4438cbbe9b3c1e..0000000000000000000000000000000000000000
--- a/spaces/Iceclear/StableSR/StableSR/basicsr/ops/dcn/deform_conv.py
+++ /dev/null
@@ -1,379 +0,0 @@
-import math
-import os
-import torch
-from torch import nn as nn
-from torch.autograd import Function
-from torch.autograd.function import once_differentiable
-from torch.nn import functional as F
-from torch.nn.modules.utils import _pair, _single
-
-BASICSR_JIT = os.getenv('BASICSR_JIT')
-if BASICSR_JIT == 'True':
- from torch.utils.cpp_extension import load
- module_path = os.path.dirname(__file__)
- deform_conv_ext = load(
- 'deform_conv',
- sources=[
- os.path.join(module_path, 'src', 'deform_conv_ext.cpp'),
- os.path.join(module_path, 'src', 'deform_conv_cuda.cpp'),
- os.path.join(module_path, 'src', 'deform_conv_cuda_kernel.cu'),
- ],
- )
-else:
- try:
- from . import deform_conv_ext
- except ImportError:
- pass
- # avoid annoying print output
- # print(f'Cannot import deform_conv_ext. Error: {error}. You may need to: \n '
- # '1. compile with BASICSR_EXT=True. or\n '
- # '2. set BASICSR_JIT=True during running')
-
-
-class DeformConvFunction(Function):
-
- @staticmethod
- def forward(ctx,
- input,
- offset,
- weight,
- stride=1,
- padding=0,
- dilation=1,
- groups=1,
- deformable_groups=1,
- im2col_step=64):
- if input is not None and input.dim() != 4:
- raise ValueError(f'Expected 4D tensor as input, got {input.dim()}D tensor instead.')
- ctx.stride = _pair(stride)
- ctx.padding = _pair(padding)
- ctx.dilation = _pair(dilation)
- ctx.groups = groups
- ctx.deformable_groups = deformable_groups
- ctx.im2col_step = im2col_step
-
- ctx.save_for_backward(input, offset, weight)
-
- output = input.new_empty(DeformConvFunction._output_size(input, weight, ctx.padding, ctx.dilation, ctx.stride))
-
- ctx.bufs_ = [input.new_empty(0), input.new_empty(0)] # columns, ones
-
- if not input.is_cuda:
- raise NotImplementedError
- else:
- cur_im2col_step = min(ctx.im2col_step, input.shape[0])
- assert (input.shape[0] % cur_im2col_step) == 0, 'im2col step must divide batchsize'
- deform_conv_ext.deform_conv_forward(input, weight,
- offset, output, ctx.bufs_[0], ctx.bufs_[1], weight.size(3),
- weight.size(2), ctx.stride[1], ctx.stride[0], ctx.padding[1],
- ctx.padding[0], ctx.dilation[1], ctx.dilation[0], ctx.groups,
- ctx.deformable_groups, cur_im2col_step)
- return output
-
- @staticmethod
- @once_differentiable
- def backward(ctx, grad_output):
- input, offset, weight = ctx.saved_tensors
-
- grad_input = grad_offset = grad_weight = None
-
- if not grad_output.is_cuda:
- raise NotImplementedError
- else:
- cur_im2col_step = min(ctx.im2col_step, input.shape[0])
- assert (input.shape[0] % cur_im2col_step) == 0, 'im2col step must divide batchsize'
-
- if ctx.needs_input_grad[0] or ctx.needs_input_grad[1]:
- grad_input = torch.zeros_like(input)
- grad_offset = torch.zeros_like(offset)
- deform_conv_ext.deform_conv_backward_input(input, offset, grad_output, grad_input,
- grad_offset, weight, ctx.bufs_[0], weight.size(3),
- weight.size(2), ctx.stride[1], ctx.stride[0], ctx.padding[1],
- ctx.padding[0], ctx.dilation[1], ctx.dilation[0], ctx.groups,
- ctx.deformable_groups, cur_im2col_step)
-
- if ctx.needs_input_grad[2]:
- grad_weight = torch.zeros_like(weight)
- deform_conv_ext.deform_conv_backward_parameters(input, offset, grad_output, grad_weight,
- ctx.bufs_[0], ctx.bufs_[1], weight.size(3),
- weight.size(2), ctx.stride[1], ctx.stride[0],
- ctx.padding[1], ctx.padding[0], ctx.dilation[1],
- ctx.dilation[0], ctx.groups, ctx.deformable_groups, 1,
- cur_im2col_step)
-
- return (grad_input, grad_offset, grad_weight, None, None, None, None, None)
-
- @staticmethod
- def _output_size(input, weight, padding, dilation, stride):
- channels = weight.size(0)
- output_size = (input.size(0), channels)
- for d in range(input.dim() - 2):
- in_size = input.size(d + 2)
- pad = padding[d]
- kernel = dilation[d] * (weight.size(d + 2) - 1) + 1
- stride_ = stride[d]
- output_size += ((in_size + (2 * pad) - kernel) // stride_ + 1, )
- if not all(map(lambda s: s > 0, output_size)):
- raise ValueError(f'convolution input is too small (output would be {"x".join(map(str, output_size))})')
- return output_size
-
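`_output_size` applies standard convolution arithmetic per spatial dimension. A quick worked check (not part of the original file):

```python
# out = (in + 2*pad - (dilation*(kernel-1) + 1)) // stride + 1
in_size, pad, kernel, dilation, stride = 56, 1, 3, 1, 2
out = (in_size + 2 * pad - (dilation * (kernel - 1) + 1)) // stride + 1
assert out == 28
```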
-
-class ModulatedDeformConvFunction(Function):
-
- @staticmethod
- def forward(ctx,
- input,
- offset,
- mask,
- weight,
- bias=None,
- stride=1,
- padding=0,
- dilation=1,
- groups=1,
- deformable_groups=1):
- ctx.stride = stride
- ctx.padding = padding
- ctx.dilation = dilation
- ctx.groups = groups
- ctx.deformable_groups = deformable_groups
- ctx.with_bias = bias is not None
- if not ctx.with_bias:
- bias = input.new_empty(1) # fake tensor
- if not input.is_cuda:
- raise NotImplementedError
- if weight.requires_grad or mask.requires_grad or offset.requires_grad or input.requires_grad:
- ctx.save_for_backward(input, offset, mask, weight, bias)
- output = input.new_empty(ModulatedDeformConvFunction._infer_shape(ctx, input, weight))
- ctx._bufs = [input.new_empty(0), input.new_empty(0)]
- deform_conv_ext.modulated_deform_conv_forward(input, weight, bias, ctx._bufs[0], offset, mask, output,
- ctx._bufs[1], weight.shape[2], weight.shape[3], ctx.stride,
- ctx.stride, ctx.padding, ctx.padding, ctx.dilation, ctx.dilation,
- ctx.groups, ctx.deformable_groups, ctx.with_bias)
- return output
-
- @staticmethod
- @once_differentiable
- def backward(ctx, grad_output):
- if not grad_output.is_cuda:
- raise NotImplementedError
- input, offset, mask, weight, bias = ctx.saved_tensors
- grad_input = torch.zeros_like(input)
- grad_offset = torch.zeros_like(offset)
- grad_mask = torch.zeros_like(mask)
- grad_weight = torch.zeros_like(weight)
- grad_bias = torch.zeros_like(bias)
- deform_conv_ext.modulated_deform_conv_backward(input, weight, bias, ctx._bufs[0], offset, mask, ctx._bufs[1],
- grad_input, grad_weight, grad_bias, grad_offset, grad_mask,
- grad_output, weight.shape[2], weight.shape[3], ctx.stride,
- ctx.stride, ctx.padding, ctx.padding, ctx.dilation, ctx.dilation,
- ctx.groups, ctx.deformable_groups, ctx.with_bias)
- if not ctx.with_bias:
- grad_bias = None
-
- return (grad_input, grad_offset, grad_mask, grad_weight, grad_bias, None, None, None, None, None)
-
- @staticmethod
- def _infer_shape(ctx, input, weight):
- n = input.size(0)
- channels_out = weight.size(0)
- height, width = input.shape[2:4]
- kernel_h, kernel_w = weight.shape[2:4]
- height_out = (height + 2 * ctx.padding - (ctx.dilation * (kernel_h - 1) + 1)) // ctx.stride + 1
- width_out = (width + 2 * ctx.padding - (ctx.dilation * (kernel_w - 1) + 1)) // ctx.stride + 1
- return n, channels_out, height_out, width_out
-
-
-deform_conv = DeformConvFunction.apply
-modulated_deform_conv = ModulatedDeformConvFunction.apply
-
-
-class DeformConv(nn.Module):
-
- def __init__(self,
- in_channels,
- out_channels,
- kernel_size,
- stride=1,
- padding=0,
- dilation=1,
- groups=1,
- deformable_groups=1,
- bias=False):
- super(DeformConv, self).__init__()
-
- assert not bias
- assert in_channels % groups == 0, f'in_channels {in_channels} is not divisible by groups {groups}'
- assert out_channels % groups == 0, f'out_channels {out_channels} is not divisible by groups {groups}'
-
- self.in_channels = in_channels
- self.out_channels = out_channels
- self.kernel_size = _pair(kernel_size)
- self.stride = _pair(stride)
- self.padding = _pair(padding)
- self.dilation = _pair(dilation)
- self.groups = groups
- self.deformable_groups = deformable_groups
- # enable compatibility with nn.Conv2d
- self.transposed = False
- self.output_padding = _single(0)
-
- self.weight = nn.Parameter(torch.Tensor(out_channels, in_channels // self.groups, *self.kernel_size))
-
- self.reset_parameters()
-
- def reset_parameters(self):
- n = self.in_channels
- for k in self.kernel_size:
- n *= k
- stdv = 1. / math.sqrt(n)
- self.weight.data.uniform_(-stdv, stdv)
-
- def forward(self, x, offset):
- # To fix an assert error in deform_conv_cuda.cpp:128
- # input image is smaller than kernel
- input_pad = (x.size(2) < self.kernel_size[0] or x.size(3) < self.kernel_size[1])
- if input_pad:
- pad_h = max(self.kernel_size[0] - x.size(2), 0)
- pad_w = max(self.kernel_size[1] - x.size(3), 0)
- x = F.pad(x, (0, pad_w, 0, pad_h), 'constant', 0).contiguous()
- offset = F.pad(offset, (0, pad_w, 0, pad_h), 'constant', 0).contiguous()
- out = deform_conv(x, offset, self.weight, self.stride, self.padding, self.dilation, self.groups,
- self.deformable_groups)
- if input_pad:
- out = out[:, :, :out.size(2) - pad_h, :out.size(3) - pad_w].contiguous()
- return out
-
-
-class DeformConvPack(DeformConv):
- """A Deformable Conv Encapsulation that acts as normal Conv layers.
-
- Args:
- in_channels (int): Same as nn.Conv2d.
- out_channels (int): Same as nn.Conv2d.
- kernel_size (int or tuple[int]): Same as nn.Conv2d.
- stride (int or tuple[int]): Same as nn.Conv2d.
- padding (int or tuple[int]): Same as nn.Conv2d.
- dilation (int or tuple[int]): Same as nn.Conv2d.
- groups (int): Same as nn.Conv2d.
- bias (bool or str): If specified as `auto`, it will be decided by the
- norm_cfg. Bias will be set as True if norm_cfg is None, otherwise
- False.
- """
-
- _version = 2
-
- def __init__(self, *args, **kwargs):
- super(DeformConvPack, self).__init__(*args, **kwargs)
-
- self.conv_offset = nn.Conv2d(
- self.in_channels,
- self.deformable_groups * 2 * self.kernel_size[0] * self.kernel_size[1],
- kernel_size=self.kernel_size,
- stride=_pair(self.stride),
- padding=_pair(self.padding),
- dilation=_pair(self.dilation),
- bias=True)
- self.init_offset()
-
- def init_offset(self):
- self.conv_offset.weight.data.zero_()
- self.conv_offset.bias.data.zero_()
-
- def forward(self, x):
- offset = self.conv_offset(x)
- return deform_conv(x, offset, self.weight, self.stride, self.padding, self.dilation, self.groups,
- self.deformable_groups)
-
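Shape intuition for the offset branch (a sketch; running the full deformable forward would still need the compiled CUDA extension): `conv_offset` predicts an (x, y) offset for every kernel tap in every deformable group, hence `deformable_groups * 2 * kH * kW` output channels.

```python
import torch

dcn = DeformConvPack(in_channels=16, out_channels=32, kernel_size=3, padding=1)
x = torch.randn(1, 16, 24, 24)
offset = dcn.conv_offset(x)              # CPU-safe; only the plain offset conv runs
assert offset.shape[1] == 1 * 2 * 3 * 3  # 18 channels for a 3x3 kernel
```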
-
-class ModulatedDeformConv(nn.Module):
-
- def __init__(self,
- in_channels,
- out_channels,
- kernel_size,
- stride=1,
- padding=0,
- dilation=1,
- groups=1,
- deformable_groups=1,
- bias=True):
- super(ModulatedDeformConv, self).__init__()
- self.in_channels = in_channels
- self.out_channels = out_channels
- self.kernel_size = _pair(kernel_size)
- self.stride = stride
- self.padding = padding
- self.dilation = dilation
- self.groups = groups
- self.deformable_groups = deformable_groups
- self.with_bias = bias
- # enable compatibility with nn.Conv2d
- self.transposed = False
- self.output_padding = _single(0)
-
- self.weight = nn.Parameter(torch.Tensor(out_channels, in_channels // groups, *self.kernel_size))
- if bias:
- self.bias = nn.Parameter(torch.Tensor(out_channels))
- else:
- self.register_parameter('bias', None)
- self.init_weights()
-
- def init_weights(self):
- n = self.in_channels
- for k in self.kernel_size:
- n *= k
- stdv = 1. / math.sqrt(n)
- self.weight.data.uniform_(-stdv, stdv)
- if self.bias is not None:
- self.bias.data.zero_()
-
- def forward(self, x, offset, mask):
- return modulated_deform_conv(x, offset, mask, self.weight, self.bias, self.stride, self.padding, self.dilation,
- self.groups, self.deformable_groups)
-
-
-class ModulatedDeformConvPack(ModulatedDeformConv):
- """A ModulatedDeformable Conv Encapsulation that acts as normal Conv layers.
-
- Args:
- in_channels (int): Same as nn.Conv2d.
- out_channels (int): Same as nn.Conv2d.
- kernel_size (int or tuple[int]): Same as nn.Conv2d.
- stride (int or tuple[int]): Same as nn.Conv2d.
- padding (int or tuple[int]): Same as nn.Conv2d.
- dilation (int or tuple[int]): Same as nn.Conv2d.
- groups (int): Same as nn.Conv2d.
- bias (bool or str): If specified as `auto`, it will be decided by the
- norm_cfg. Bias will be set as True if norm_cfg is None, otherwise
- False.
- """
-
- _version = 2
-
- def __init__(self, *args, **kwargs):
- super(ModulatedDeformConvPack, self).__init__(*args, **kwargs)
-
- self.conv_offset = nn.Conv2d(
- self.in_channels,
- self.deformable_groups * 3 * self.kernel_size[0] * self.kernel_size[1],
- kernel_size=self.kernel_size,
- stride=_pair(self.stride),
- padding=_pair(self.padding),
- dilation=_pair(self.dilation),
- bias=True)
- self.init_weights()
-
- def init_weights(self):
- super(ModulatedDeformConvPack, self).init_weights()
- if hasattr(self, 'conv_offset'):
- self.conv_offset.weight.data.zero_()
- self.conv_offset.bias.data.zero_()
-
- def forward(self, x):
- out = self.conv_offset(x)
- o1, o2, mask = torch.chunk(out, 3, dim=1)
- offset = torch.cat((o1, o2), dim=1)
- mask = torch.sigmoid(mask)
- return modulated_deform_conv(x, offset, mask, self.weight, self.bias, self.stride, self.padding, self.dilation,
- self.groups, self.deformable_groups)
diff --git a/spaces/Ikaros521/VITS-fast-fine-tuning_nymph/commons.py b/spaces/Ikaros521/VITS-fast-fine-tuning_nymph/commons.py
deleted file mode 100644
index db17cf0914ba6e445fe613e3ec3411b3a74b28aa..0000000000000000000000000000000000000000
--- a/spaces/Ikaros521/VITS-fast-fine-tuning_nymph/commons.py
+++ /dev/null
@@ -1,164 +0,0 @@
-import math
-import numpy as np
-import torch
-from torch import nn
-from torch.nn import functional as F
-
-
-def init_weights(m, mean=0.0, std=0.01):
- classname = m.__class__.__name__
- if classname.find("Conv") != -1:
- m.weight.data.normal_(mean, std)
-
-
-def get_padding(kernel_size, dilation=1):
- return int((kernel_size*dilation - dilation)/2)
-
-
-def convert_pad_shape(pad_shape):
- l = pad_shape[::-1]
- pad_shape = [item for sublist in l for item in sublist]
- return pad_shape
-
-
-def intersperse(lst, item):
- result = [item] * (len(lst) * 2 + 1)
- result[1::2] = lst
- return result
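For instance (a quick check): `intersperse` places `item` before, between, and after the original elements.

```python
assert intersperse([1, 2, 3], 0) == [0, 1, 0, 2, 0, 3, 0]
```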
-
-
-def kl_divergence(m_p, logs_p, m_q, logs_q):
- """KL(P||Q)"""
- kl = (logs_q - logs_p) - 0.5
- kl += 0.5 * (torch.exp(2. * logs_p) + ((m_p - m_q)**2)) * torch.exp(-2. * logs_q)
- return kl
-
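In math form (with `logs` denoting log sigma), this is the closed-form KL divergence between diagonal Gaussians:

```latex
\mathrm{KL}(P \,\|\, Q) = (\log\sigma_q - \log\sigma_p) - \tfrac{1}{2}
  + \frac{\sigma_p^2 + (\mu_p - \mu_q)^2}{2\sigma_q^2}
```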
-
-def rand_gumbel(shape):
- """Sample from the Gumbel distribution, protect from overflows."""
- uniform_samples = torch.rand(shape) * 0.99998 + 0.00001
- return -torch.log(-torch.log(uniform_samples))
-
-
-def rand_gumbel_like(x):
- g = rand_gumbel(x.size()).to(dtype=x.dtype, device=x.device)
- return g
-
-
-def slice_segments(x, ids_str, segment_size=4):
- ret = torch.zeros_like(x[:, :, :segment_size])
- for i in range(x.size(0)):
- idx_str = ids_str[i]
- idx_end = idx_str + segment_size
-        try:
-            ret[i] = x[i, :, idx_str:idx_end]
-        except RuntimeError:
-            # segment runs past the end of x; leave this row as zeros
-            print(f"slice_segments: bad segment [{idx_str}:{idx_end}] at batch index {i}")
- return ret
-
-
-def rand_slice_segments(x, x_lengths=None, segment_size=4):
- b, d, t = x.size()
- if x_lengths is None:
- x_lengths = t
- ids_str_max = x_lengths - segment_size + 1
- ids_str = (torch.rand([b]).to(device=x.device) * ids_str_max).to(dtype=torch.long)
- ret = slice_segments(x, ids_str, segment_size)
- return ret, ids_str
-
-
-def get_timing_signal_1d(
- length, channels, min_timescale=1.0, max_timescale=1.0e4):
- position = torch.arange(length, dtype=torch.float)
- num_timescales = channels // 2
- log_timescale_increment = (
- math.log(float(max_timescale) / float(min_timescale)) /
- (num_timescales - 1))
- inv_timescales = min_timescale * torch.exp(
- torch.arange(num_timescales, dtype=torch.float) * -log_timescale_increment)
- scaled_time = position.unsqueeze(0) * inv_timescales.unsqueeze(1)
- signal = torch.cat([torch.sin(scaled_time), torch.cos(scaled_time)], 0)
- signal = F.pad(signal, [0, 0, 0, channels % 2])
- signal = signal.view(1, channels, length)
- return signal
-
-
-def add_timing_signal_1d(x, min_timescale=1.0, max_timescale=1.0e4):
- b, channels, length = x.size()
- signal = get_timing_signal_1d(length, channels, min_timescale, max_timescale)
- return x + signal.to(dtype=x.dtype, device=x.device)
-
-
-def cat_timing_signal_1d(x, min_timescale=1.0, max_timescale=1.0e4, axis=1):
- b, channels, length = x.size()
- signal = get_timing_signal_1d(length, channels, min_timescale, max_timescale)
- return torch.cat([x, signal.to(dtype=x.dtype, device=x.device)], axis)
-
-
-def subsequent_mask(length):
- mask = torch.tril(torch.ones(length, length)).unsqueeze(0).unsqueeze(0)
- return mask
-
-
-@torch.jit.script
-def fused_add_tanh_sigmoid_multiply(input_a, input_b, n_channels):
- n_channels_int = n_channels[0]
- in_act = input_a + input_b
- t_act = torch.tanh(in_act[:, :n_channels_int, :])
- s_act = torch.sigmoid(in_act[:, n_channels_int:, :])
- acts = t_act * s_act
- return acts
-
-
-
-def shift_1d(x):
- x = F.pad(x, convert_pad_shape([[0, 0], [0, 0], [1, 0]]))[:, :, :-1]
- return x
-
-
-def sequence_mask(length, max_length=None):
- if max_length is None:
- max_length = length.max()
- x = torch.arange(max_length, dtype=length.dtype, device=length.device)
- return x.unsqueeze(0) < length.unsqueeze(1)
-
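A small example (a sketch): `sequence_mask` turns a length vector into a boolean padding mask.

```python
import torch

mask = sequence_mask(torch.tensor([2, 3]))
# tensor([[ True,  True, False],
#         [ True,  True,  True]])
```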
-
-def generate_path(duration, mask):
- """
- duration: [b, 1, t_x]
- mask: [b, 1, t_y, t_x]
- """
- device = duration.device
-
- b, _, t_y, t_x = mask.shape
- cum_duration = torch.cumsum(duration, -1)
-
- cum_duration_flat = cum_duration.view(b * t_x)
- path = sequence_mask(cum_duration_flat, t_y).to(mask.dtype)
- path = path.view(b, t_x, t_y)
- path = path - F.pad(path, convert_pad_shape([[0, 0], [1, 0], [0, 0]]))[:, :-1]
- path = path.unsqueeze(1).transpose(2,3) * mask
- return path
-
-
-def clip_grad_value_(parameters, clip_value, norm_type=2):
- if isinstance(parameters, torch.Tensor):
- parameters = [parameters]
- parameters = list(filter(lambda p: p.grad is not None, parameters))
- norm_type = float(norm_type)
- if clip_value is not None:
- clip_value = float(clip_value)
-
- total_norm = 0
- for p in parameters:
- param_norm = p.grad.data.norm(norm_type)
- total_norm += param_norm.item() ** norm_type
- if clip_value is not None:
- p.grad.data.clamp_(min=-clip_value, max=clip_value)
- total_norm = total_norm ** (1. / norm_type)
- return total_norm
diff --git a/spaces/InpaintAI/Inpaint-Anything/third_party/segment-anything/linter.sh b/spaces/InpaintAI/Inpaint-Anything/third_party/segment-anything/linter.sh
deleted file mode 100644
index df2e17436d30e89ff1728109301599f425f1ad6b..0000000000000000000000000000000000000000
--- a/spaces/InpaintAI/Inpaint-Anything/third_party/segment-anything/linter.sh
+++ /dev/null
@@ -1,32 +0,0 @@
-#!/bin/bash -e
-# Copyright (c) Facebook, Inc. and its affiliates.
-
-{
- black --version | grep -E "23\." > /dev/null
-} || {
- echo "Linter requires 'black==23.*' !"
- exit 1
-}
-
-ISORT_VERSION=$(isort --version-number)
-if [[ "$ISORT_VERSION" != 5.12* ]]; then
- echo "Linter requires isort==5.12.0 !"
- exit 1
-fi
-
-echo "Running isort ..."
-isort . --atomic
-
-echo "Running black ..."
-black -l 100 .
-
-echo "Running flake8 ..."
-if [ -x "$(command -v flake8)" ]; then
- flake8 .
-else
- python3 -m flake8 .
-fi
-
-echo "Running mypy..."
-
-mypy --exclude 'setup.py|notebooks' .
diff --git a/spaces/Iqbalzz/hololive-rvc-models/infer_pack/models.py b/spaces/Iqbalzz/hololive-rvc-models/infer_pack/models.py
deleted file mode 100644
index 5e4b2e72383efaee1fae4f5c42e3db2c627e4190..0000000000000000000000000000000000000000
--- a/spaces/Iqbalzz/hololive-rvc-models/infer_pack/models.py
+++ /dev/null
@@ -1,1124 +0,0 @@
-import math, pdb, os
-from time import time as ttime
-import torch
-from torch import nn
-from torch.nn import functional as F
-from infer_pack import modules
-from infer_pack import attentions
-from infer_pack import commons
-from infer_pack.commons import init_weights, get_padding
-from torch.nn import Conv1d, ConvTranspose1d, AvgPool1d, Conv2d
-from torch.nn.utils import weight_norm, remove_weight_norm, spectral_norm
-import numpy as np
-
-
-class TextEncoder256(nn.Module):
- def __init__(
- self,
- out_channels,
- hidden_channels,
- filter_channels,
- n_heads,
- n_layers,
- kernel_size,
- p_dropout,
- f0=True,
- ):
- super().__init__()
- self.out_channels = out_channels
- self.hidden_channels = hidden_channels
- self.filter_channels = filter_channels
- self.n_heads = n_heads
- self.n_layers = n_layers
- self.kernel_size = kernel_size
- self.p_dropout = p_dropout
- self.emb_phone = nn.Linear(256, hidden_channels)
- self.lrelu = nn.LeakyReLU(0.1, inplace=True)
-        if f0:
- self.emb_pitch = nn.Embedding(256, hidden_channels) # pitch 256
- self.encoder = attentions.Encoder(
- hidden_channels, filter_channels, n_heads, n_layers, kernel_size, p_dropout
- )
- self.proj = nn.Conv1d(hidden_channels, out_channels * 2, 1)
-
- def forward(self, phone, pitch, lengths):
-        if pitch is None:
- x = self.emb_phone(phone)
- else:
- x = self.emb_phone(phone) + self.emb_pitch(pitch)
- x = x * math.sqrt(self.hidden_channels) # [b, t, h]
- x = self.lrelu(x)
- x = torch.transpose(x, 1, -1) # [b, h, t]
- x_mask = torch.unsqueeze(commons.sequence_mask(lengths, x.size(2)), 1).to(
- x.dtype
- )
- x = self.encoder(x * x_mask, x_mask)
- stats = self.proj(x) * x_mask
-
- m, logs = torch.split(stats, self.out_channels, dim=1)
- return m, logs, x_mask
-
-
-class TextEncoder768(nn.Module):
- def __init__(
- self,
- out_channels,
- hidden_channels,
- filter_channels,
- n_heads,
- n_layers,
- kernel_size,
- p_dropout,
- f0=True,
- ):
- super().__init__()
- self.out_channels = out_channels
- self.hidden_channels = hidden_channels
- self.filter_channels = filter_channels
- self.n_heads = n_heads
- self.n_layers = n_layers
- self.kernel_size = kernel_size
- self.p_dropout = p_dropout
- self.emb_phone = nn.Linear(768, hidden_channels)
- self.lrelu = nn.LeakyReLU(0.1, inplace=True)
- if f0:
- self.emb_pitch = nn.Embedding(256, hidden_channels) # pitch 256
- self.encoder = attentions.Encoder(
- hidden_channels, filter_channels, n_heads, n_layers, kernel_size, p_dropout
- )
- self.proj = nn.Conv1d(hidden_channels, out_channels * 2, 1)
-
- def forward(self, phone, pitch, lengths):
- if pitch is None:
- x = self.emb_phone(phone)
- else:
- x = self.emb_phone(phone) + self.emb_pitch(pitch)
- x = x * math.sqrt(self.hidden_channels) # [b, t, h]
- x = self.lrelu(x)
- x = torch.transpose(x, 1, -1) # [b, h, t]
- x_mask = torch.unsqueeze(commons.sequence_mask(lengths, x.size(2)), 1).to(
- x.dtype
- )
- x = self.encoder(x * x_mask, x_mask)
- stats = self.proj(x) * x_mask
-
- m, logs = torch.split(stats, self.out_channels, dim=1)
- return m, logs, x_mask
-
-
-class ResidualCouplingBlock(nn.Module):
- def __init__(
- self,
- channels,
- hidden_channels,
- kernel_size,
- dilation_rate,
- n_layers,
- n_flows=4,
- gin_channels=0,
- ):
- super().__init__()
- self.channels = channels
- self.hidden_channels = hidden_channels
- self.kernel_size = kernel_size
- self.dilation_rate = dilation_rate
- self.n_layers = n_layers
- self.n_flows = n_flows
- self.gin_channels = gin_channels
-
- self.flows = nn.ModuleList()
- for i in range(n_flows):
- self.flows.append(
- modules.ResidualCouplingLayer(
- channels,
- hidden_channels,
- kernel_size,
- dilation_rate,
- n_layers,
- gin_channels=gin_channels,
- mean_only=True,
- )
- )
- self.flows.append(modules.Flip())
-
- def forward(self, x, x_mask, g=None, reverse=False):
- if not reverse:
- for flow in self.flows:
- x, _ = flow(x, x_mask, g=g, reverse=reverse)
- else:
- for flow in reversed(self.flows):
- x = flow(x, x_mask, g=g, reverse=reverse)
- return x
-
- def remove_weight_norm(self):
- for i in range(self.n_flows):
- self.flows[i * 2].remove_weight_norm()
-
-
-class PosteriorEncoder(nn.Module):
- def __init__(
- self,
- in_channels,
- out_channels,
- hidden_channels,
- kernel_size,
- dilation_rate,
- n_layers,
- gin_channels=0,
- ):
- super().__init__()
- self.in_channels = in_channels
- self.out_channels = out_channels
- self.hidden_channels = hidden_channels
- self.kernel_size = kernel_size
- self.dilation_rate = dilation_rate
- self.n_layers = n_layers
- self.gin_channels = gin_channels
-
- self.pre = nn.Conv1d(in_channels, hidden_channels, 1)
- self.enc = modules.WN(
- hidden_channels,
- kernel_size,
- dilation_rate,
- n_layers,
- gin_channels=gin_channels,
- )
- self.proj = nn.Conv1d(hidden_channels, out_channels * 2, 1)
-
- def forward(self, x, x_lengths, g=None):
- x_mask = torch.unsqueeze(commons.sequence_mask(x_lengths, x.size(2)), 1).to(
- x.dtype
- )
- x = self.pre(x) * x_mask
- x = self.enc(x, x_mask, g=g)
- stats = self.proj(x) * x_mask
- m, logs = torch.split(stats, self.out_channels, dim=1)
- z = (m + torch.randn_like(m) * torch.exp(logs)) * x_mask
- return z, m, logs, x_mask
-
- def remove_weight_norm(self):
- self.enc.remove_weight_norm()
-
-
-class Generator(torch.nn.Module):
- def __init__(
- self,
- initial_channel,
- resblock,
- resblock_kernel_sizes,
- resblock_dilation_sizes,
- upsample_rates,
- upsample_initial_channel,
- upsample_kernel_sizes,
- gin_channels=0,
- ):
- super(Generator, self).__init__()
- self.num_kernels = len(resblock_kernel_sizes)
- self.num_upsamples = len(upsample_rates)
- self.conv_pre = Conv1d(
- initial_channel, upsample_initial_channel, 7, 1, padding=3
- )
- resblock = modules.ResBlock1 if resblock == "1" else modules.ResBlock2
-
- self.ups = nn.ModuleList()
- for i, (u, k) in enumerate(zip(upsample_rates, upsample_kernel_sizes)):
- self.ups.append(
- weight_norm(
- ConvTranspose1d(
- upsample_initial_channel // (2**i),
- upsample_initial_channel // (2 ** (i + 1)),
- k,
- u,
- padding=(k - u) // 2,
- )
- )
- )
-
- self.resblocks = nn.ModuleList()
- for i in range(len(self.ups)):
- ch = upsample_initial_channel // (2 ** (i + 1))
- for j, (k, d) in enumerate(
- zip(resblock_kernel_sizes, resblock_dilation_sizes)
- ):
- self.resblocks.append(resblock(ch, k, d))
-
- self.conv_post = Conv1d(ch, 1, 7, 1, padding=3, bias=False)
- self.ups.apply(init_weights)
-
- if gin_channels != 0:
- self.cond = nn.Conv1d(gin_channels, upsample_initial_channel, 1)
-
- def forward(self, x, g=None):
- x = self.conv_pre(x)
- if g is not None:
- x = x + self.cond(g)
-
- for i in range(self.num_upsamples):
- x = F.leaky_relu(x, modules.LRELU_SLOPE)
- x = self.ups[i](x)
- xs = None
- for j in range(self.num_kernels):
- if xs is None:
- xs = self.resblocks[i * self.num_kernels + j](x)
- else:
- xs += self.resblocks[i * self.num_kernels + j](x)
- x = xs / self.num_kernels
- x = F.leaky_relu(x)
- x = self.conv_post(x)
- x = torch.tanh(x)
-
- return x
-
- def remove_weight_norm(self):
- for l in self.ups:
- remove_weight_norm(l)
- for l in self.resblocks:
- l.remove_weight_norm()
-
-
-class SineGen(torch.nn.Module):
- """Definition of sine generator
- SineGen(samp_rate, harmonic_num = 0,
- sine_amp = 0.1, noise_std = 0.003,
- voiced_threshold = 0,
- flag_for_pulse=False)
- samp_rate: sampling rate in Hz
- harmonic_num: number of harmonic overtones (default 0)
- sine_amp: amplitude of sine-wavefrom (default 0.1)
- noise_std: std of Gaussian noise (default 0.003)
- voiced_thoreshold: F0 threshold for U/V classification (default 0)
- flag_for_pulse: this SinGen is used inside PulseGen (default False)
- Note: when flag_for_pulse is True, the first time step of a voiced
- segment is always sin(np.pi) or cos(0)
- """
-
- def __init__(
- self,
- samp_rate,
- harmonic_num=0,
- sine_amp=0.1,
- noise_std=0.003,
- voiced_threshold=0,
- flag_for_pulse=False,
- ):
- super(SineGen, self).__init__()
- self.sine_amp = sine_amp
- self.noise_std = noise_std
- self.harmonic_num = harmonic_num
- self.dim = self.harmonic_num + 1
- self.sampling_rate = samp_rate
- self.voiced_threshold = voiced_threshold
-
- def _f02uv(self, f0):
- # generate uv signal
- uv = torch.ones_like(f0)
- uv = uv * (f0 > self.voiced_threshold)
- return uv
-
- def forward(self, f0, upp):
- """sine_tensor, uv = forward(f0)
- input F0: tensor(batchsize=1, length, dim=1)
- f0 for unvoiced steps should be 0
- output sine_tensor: tensor(batchsize=1, length, dim)
- output uv: tensor(batchsize=1, length, 1)
- """
- with torch.no_grad():
- f0 = f0[:, None].transpose(1, 2)
- f0_buf = torch.zeros(f0.shape[0], f0.shape[1], self.dim, device=f0.device)
- # fundamental component
- f0_buf[:, :, 0] = f0[:, :, 0]
- for idx in np.arange(self.harmonic_num):
- f0_buf[:, :, idx + 1] = f0_buf[:, :, 0] * (
- idx + 2
- ) # idx + 2: the (idx+1)-th overtone, (idx+2)-th harmonic
- rad_values = (f0_buf / self.sampling_rate) % 1  # % 1 means the per-harmonic products cannot be optimized away in post-processing
- rand_ini = torch.rand(
- f0_buf.shape[0], f0_buf.shape[2], device=f0_buf.device
- )
- rand_ini[:, 0] = 0
- rad_values[:, 0, :] = rad_values[:, 0, :] + rand_ini
- tmp_over_one = torch.cumsum(rad_values, 1)  # % 1  (taking % 1 here would prevent the cumsum below from being optimized further)
- tmp_over_one *= upp
- tmp_over_one = F.interpolate(
- tmp_over_one.transpose(2, 1),
- scale_factor=upp,
- mode="linear",
- align_corners=True,
- ).transpose(2, 1)
- rad_values = F.interpolate(
- rad_values.transpose(2, 1), scale_factor=upp, mode="nearest"
- ).transpose(2, 1)
- tmp_over_one %= 1
- tmp_over_one_idx = (tmp_over_one[:, 1:, :] - tmp_over_one[:, :-1, :]) < 0
- cumsum_shift = torch.zeros_like(rad_values)
- cumsum_shift[:, 1:, :] = tmp_over_one_idx * -1.0
- sine_waves = torch.sin(
- torch.cumsum(rad_values + cumsum_shift, dim=1) * 2 * np.pi
- )
- sine_waves = sine_waves * self.sine_amp
- uv = self._f02uv(f0)
- uv = F.interpolate(
- uv.transpose(2, 1), scale_factor=upp, mode="nearest"
- ).transpose(2, 1)
- noise_amp = uv * self.noise_std + (1 - uv) * self.sine_amp / 3
- noise = noise_amp * torch.randn_like(sine_waves)
- sine_waves = sine_waves * uv + noise
- return sine_waves, uv, noise
-
-
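`SineGen.forward` takes a frame-rate F0 contour of shape `[batch, frames]` plus an integer upsampling factor `upp` (samples per frame), and returns a sample-rate sine excitation together with the U/V mask. A hypothetical smoke test, assuming the class compiles as written above:

```python
import torch

gen = SineGen(samp_rate=40000, harmonic_num=0)
f0 = torch.full((1, 100), 220.0)    # 100 frames of a 220 Hz fundamental
f0[:, 50:] = 0.0                    # second half unvoiced (f0 == 0)
sine, uv, noise = gen(f0, upp=400)  # 400 samples per frame -> 40000 samples
print(sine.shape, uv.shape)         # torch.Size([1, 40000, 1]) for both
```

Unvoiced regions receive pure noise at `sine_amp / 3`, while voiced regions get the sine stack plus low-level noise at `noise_std`.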
-class SourceModuleHnNSF(torch.nn.Module):
- """SourceModule for hn-nsf
- SourceModule(sampling_rate, harmonic_num=0, sine_amp=0.1,
- add_noise_std=0.003, voiced_threshod=0)
- sampling_rate: sampling_rate in Hz
- harmonic_num: number of harmonic above F0 (default: 0)
- sine_amp: amplitude of sine source signal (default: 0.1)
- add_noise_std: std of additive Gaussian noise (default: 0.003)
- note that amplitude of noise in unvoiced is decided
- by sine_amp
- voiced_threshold: threhold to set U/V given F0 (default: 0)
- Sine_source, noise_source = SourceModuleHnNSF(F0_sampled)
- F0_sampled (batchsize, length, 1)
- Sine_source (batchsize, length, 1)
- noise_source (batchsize, length 1)
- uv (batchsize, length, 1)
- """
-
- def __init__(
- self,
- sampling_rate,
- harmonic_num=0,
- sine_amp=0.1,
- add_noise_std=0.003,
- voiced_threshold=0,
- is_half=True,
- ):
- super(SourceModuleHnNSF, self).__init__()
-
- self.sine_amp = sine_amp
- self.noise_std = add_noise_std
- self.is_half = is_half
- # to produce sine waveforms
- self.l_sin_gen = SineGen(
- sampling_rate, harmonic_num, sine_amp, add_noise_std, voiced_threshold
- )
-
- # to merge source harmonics into a single excitation
- self.l_linear = torch.nn.Linear(harmonic_num + 1, 1)
- self.l_tanh = torch.nn.Tanh()
-
- def forward(self, x, upp=None):
- sine_wavs, uv, _ = self.l_sin_gen(x, upp)
- if self.is_half:
- sine_wavs = sine_wavs.half()
- sine_merge = self.l_tanh(self.l_linear(sine_wavs))
- return sine_merge, None, None # noise, uv
-
-
-class GeneratorNSF(torch.nn.Module):
- def __init__(
- self,
- initial_channel,
- resblock,
- resblock_kernel_sizes,
- resblock_dilation_sizes,
- upsample_rates,
- upsample_initial_channel,
- upsample_kernel_sizes,
- gin_channels,
- sr,
- is_half=False,
- ):
- super(GeneratorNSF, self).__init__()
- self.num_kernels = len(resblock_kernel_sizes)
- self.num_upsamples = len(upsample_rates)
-
- self.f0_upsamp = torch.nn.Upsample(scale_factor=np.prod(upsample_rates))
- self.m_source = SourceModuleHnNSF(
- sampling_rate=sr, harmonic_num=0, is_half=is_half
- )
- self.noise_convs = nn.ModuleList()
- self.conv_pre = Conv1d(
- initial_channel, upsample_initial_channel, 7, 1, padding=3
- )
- resblock = modules.ResBlock1 if resblock == "1" else modules.ResBlock2
-
- self.ups = nn.ModuleList()
- for i, (u, k) in enumerate(zip(upsample_rates, upsample_kernel_sizes)):
- c_cur = upsample_initial_channel // (2 ** (i + 1))
- self.ups.append(
- weight_norm(
- ConvTranspose1d(
- upsample_initial_channel // (2**i),
- upsample_initial_channel // (2 ** (i + 1)),
- k,
- u,
- padding=(k - u) // 2,
- )
- )
- )
- if i + 1 < len(upsample_rates):
- stride_f0 = np.prod(upsample_rates[i + 1 :])
- self.noise_convs.append(
- Conv1d(
- 1,
- c_cur,
- kernel_size=stride_f0 * 2,
- stride=stride_f0,
- padding=stride_f0 // 2,
- )
- )
- else:
- self.noise_convs.append(Conv1d(1, c_cur, kernel_size=1))
-
- self.resblocks = nn.ModuleList()
- for i in range(len(self.ups)):
- ch = upsample_initial_channel // (2 ** (i + 1))
- for j, (k, d) in enumerate(
- zip(resblock_kernel_sizes, resblock_dilation_sizes)
- ):
- self.resblocks.append(resblock(ch, k, d))
-
- self.conv_post = Conv1d(ch, 1, 7, 1, padding=3, bias=False)
- self.ups.apply(init_weights)
-
- if gin_channels != 0:
- self.cond = nn.Conv1d(gin_channels, upsample_initial_channel, 1)
-
- self.upp = np.prod(upsample_rates)
-
- def forward(self, x, f0, g=None):
- har_source, noi_source, uv = self.m_source(f0, self.upp)
- har_source = har_source.transpose(1, 2)
- x = self.conv_pre(x)
- if g is not None:
- x = x + self.cond(g)
-
- for i in range(self.num_upsamples):
- x = F.leaky_relu(x, modules.LRELU_SLOPE)
- x = self.ups[i](x)
- x_source = self.noise_convs[i](har_source)
- x = x + x_source
- xs = None
- for j in range(self.num_kernels):
- if xs is None:
- xs = self.resblocks[i * self.num_kernels + j](x)
- else:
- xs += self.resblocks[i * self.num_kernels + j](x)
- x = xs / self.num_kernels
- x = F.leaky_relu(x)
- x = self.conv_post(x)
- x = torch.tanh(x)
- return x
-
- def remove_weight_norm(self):
- for l in self.ups:
- remove_weight_norm(l)
- for l in self.resblocks:
- l.remove_weight_norm()
-
-
-sr2sr = {
- "32k": 32000,
- "40k": 40000,
- "48k": 48000,
-}
-
-
-class SynthesizerTrnMs256NSFsid(nn.Module):
- def __init__(
- self,
- spec_channels,
- segment_size,
- inter_channels,
- hidden_channels,
- filter_channels,
- n_heads,
- n_layers,
- kernel_size,
- p_dropout,
- resblock,
- resblock_kernel_sizes,
- resblock_dilation_sizes,
- upsample_rates,
- upsample_initial_channel,
- upsample_kernel_sizes,
- spk_embed_dim,
- gin_channels,
- sr,
- **kwargs
- ):
- super().__init__()
- if isinstance(sr, str):
- sr = sr2sr[sr]
- self.spec_channels = spec_channels
- self.inter_channels = inter_channels
- self.hidden_channels = hidden_channels
- self.filter_channels = filter_channels
- self.n_heads = n_heads
- self.n_layers = n_layers
- self.kernel_size = kernel_size
- self.p_dropout = p_dropout
- self.resblock = resblock
- self.resblock_kernel_sizes = resblock_kernel_sizes
- self.resblock_dilation_sizes = resblock_dilation_sizes
- self.upsample_rates = upsample_rates
- self.upsample_initial_channel = upsample_initial_channel
- self.upsample_kernel_sizes = upsample_kernel_sizes
- self.segment_size = segment_size
- self.gin_channels = gin_channels
- # self.hop_length = hop_length#
- self.spk_embed_dim = spk_embed_dim
- self.enc_p = TextEncoder256(
- inter_channels,
- hidden_channels,
- filter_channels,
- n_heads,
- n_layers,
- kernel_size,
- p_dropout,
- )
- self.dec = GeneratorNSF(
- inter_channels,
- resblock,
- resblock_kernel_sizes,
- resblock_dilation_sizes,
- upsample_rates,
- upsample_initial_channel,
- upsample_kernel_sizes,
- gin_channels=gin_channels,
- sr=sr,
- is_half=kwargs["is_half"],
- )
- self.enc_q = PosteriorEncoder(
- spec_channels,
- inter_channels,
- hidden_channels,
- 5,
- 1,
- 16,
- gin_channels=gin_channels,
- )
- self.flow = ResidualCouplingBlock(
- inter_channels, hidden_channels, 5, 1, 3, gin_channels=gin_channels
- )
- self.emb_g = nn.Embedding(self.spk_embed_dim, gin_channels)
- print("gin_channels:", gin_channels, "self.spk_embed_dim:", self.spk_embed_dim)
-
- def remove_weight_norm(self):
- self.dec.remove_weight_norm()
- self.flow.remove_weight_norm()
- self.enc_q.remove_weight_norm()
-
- def forward(
- self, phone, phone_lengths, pitch, pitchf, y, y_lengths, ds
- ):  # ds is the speaker id, shape [bs, 1]
- # print(1,pitch.shape)#[bs,t]
- g = self.emb_g(ds).unsqueeze(-1)  # [b, 256, 1]; the trailing 1 is the time axis, broadcast later
- m_p, logs_p, x_mask = self.enc_p(phone, pitch, phone_lengths)
- z, m_q, logs_q, y_mask = self.enc_q(y, y_lengths, g=g)
- z_p = self.flow(z, y_mask, g=g)
- z_slice, ids_slice = commons.rand_slice_segments(
- z, y_lengths, self.segment_size
- )
- # print(-1,pitchf.shape,ids_slice,self.segment_size,self.hop_length,self.segment_size//self.hop_length)
- pitchf = commons.slice_segments2(pitchf, ids_slice, self.segment_size)
- # print(-2,pitchf.shape,z_slice.shape)
- o = self.dec(z_slice, pitchf, g=g)
- return o, ids_slice, x_mask, y_mask, (z, z_p, m_p, logs_p, m_q, logs_q)
-
- def infer(self, phone, phone_lengths, pitch, nsff0, sid, max_len=None):
- g = self.emb_g(sid).unsqueeze(-1)
- m_p, logs_p, x_mask = self.enc_p(phone, pitch, phone_lengths)
- z_p = (m_p + torch.exp(logs_p) * torch.randn_like(m_p) * 0.66666) * x_mask
- z = self.flow(z_p, x_mask, g=g, reverse=True)
- o = self.dec((z * x_mask)[:, :, :max_len], nsff0, g=g)
- return o, x_mask, (z, z_p, m_p, logs_p)
-
-
-class SynthesizerTrnMs768NSFsid(nn.Module):
- def __init__(
- self,
- spec_channels,
- segment_size,
- inter_channels,
- hidden_channels,
- filter_channels,
- n_heads,
- n_layers,
- kernel_size,
- p_dropout,
- resblock,
- resblock_kernel_sizes,
- resblock_dilation_sizes,
- upsample_rates,
- upsample_initial_channel,
- upsample_kernel_sizes,
- spk_embed_dim,
- gin_channels,
- sr,
- **kwargs
- ):
- super().__init__()
- if isinstance(sr, str):
- sr = sr2sr[sr]
- self.spec_channels = spec_channels
- self.inter_channels = inter_channels
- self.hidden_channels = hidden_channels
- self.filter_channels = filter_channels
- self.n_heads = n_heads
- self.n_layers = n_layers
- self.kernel_size = kernel_size
- self.p_dropout = p_dropout
- self.resblock = resblock
- self.resblock_kernel_sizes = resblock_kernel_sizes
- self.resblock_dilation_sizes = resblock_dilation_sizes
- self.upsample_rates = upsample_rates
- self.upsample_initial_channel = upsample_initial_channel
- self.upsample_kernel_sizes = upsample_kernel_sizes
- self.segment_size = segment_size
- self.gin_channels = gin_channels
- # self.hop_length = hop_length#
- self.spk_embed_dim = spk_embed_dim
- self.enc_p = TextEncoder768(
- inter_channels,
- hidden_channels,
- filter_channels,
- n_heads,
- n_layers,
- kernel_size,
- p_dropout,
- )
- self.dec = GeneratorNSF(
- inter_channels,
- resblock,
- resblock_kernel_sizes,
- resblock_dilation_sizes,
- upsample_rates,
- upsample_initial_channel,
- upsample_kernel_sizes,
- gin_channels=gin_channels,
- sr=sr,
- is_half=kwargs["is_half"],
- )
- self.enc_q = PosteriorEncoder(
- spec_channels,
- inter_channels,
- hidden_channels,
- 5,
- 1,
- 16,
- gin_channels=gin_channels,
- )
- self.flow = ResidualCouplingBlock(
- inter_channels, hidden_channels, 5, 1, 3, gin_channels=gin_channels
- )
- self.emb_g = nn.Embedding(self.spk_embed_dim, gin_channels)
- print("gin_channels:", gin_channels, "self.spk_embed_dim:", self.spk_embed_dim)
-
- def remove_weight_norm(self):
- self.dec.remove_weight_norm()
- self.flow.remove_weight_norm()
- self.enc_q.remove_weight_norm()
-
- def forward(
- self, phone, phone_lengths, pitch, pitchf, y, y_lengths, ds
- ):  # ds is the speaker id, shape [bs, 1]
- # print(1,pitch.shape)#[bs,t]
- g = self.emb_g(ds).unsqueeze(-1)  # [b, 256, 1]; the trailing 1 is the time axis, broadcast later
- m_p, logs_p, x_mask = self.enc_p(phone, pitch, phone_lengths)
- z, m_q, logs_q, y_mask = self.enc_q(y, y_lengths, g=g)
- z_p = self.flow(z, y_mask, g=g)
- z_slice, ids_slice = commons.rand_slice_segments(
- z, y_lengths, self.segment_size
- )
- # print(-1,pitchf.shape,ids_slice,self.segment_size,self.hop_length,self.segment_size//self.hop_length)
- pitchf = commons.slice_segments2(pitchf, ids_slice, self.segment_size)
- # print(-2,pitchf.shape,z_slice.shape)
- o = self.dec(z_slice, pitchf, g=g)
- return o, ids_slice, x_mask, y_mask, (z, z_p, m_p, logs_p, m_q, logs_q)
-
- def infer(self, phone, phone_lengths, pitch, nsff0, sid, max_len=None):
- g = self.emb_g(sid).unsqueeze(-1)
- m_p, logs_p, x_mask = self.enc_p(phone, pitch, phone_lengths)
- z_p = (m_p + torch.exp(logs_p) * torch.randn_like(m_p) * 0.66666) * x_mask
- z = self.flow(z_p, x_mask, g=g, reverse=True)
- o = self.dec((z * x_mask)[:, :, :max_len], nsff0, g=g)
- return o, x_mask, (z, z_p, m_p, logs_p)
-
-
-class SynthesizerTrnMs256NSFsid_nono(nn.Module):
- def __init__(
- self,
- spec_channels,
- segment_size,
- inter_channels,
- hidden_channels,
- filter_channels,
- n_heads,
- n_layers,
- kernel_size,
- p_dropout,
- resblock,
- resblock_kernel_sizes,
- resblock_dilation_sizes,
- upsample_rates,
- upsample_initial_channel,
- upsample_kernel_sizes,
- spk_embed_dim,
- gin_channels,
- sr=None,
- **kwargs
- ):
- super().__init__()
- self.spec_channels = spec_channels
- self.inter_channels = inter_channels
- self.hidden_channels = hidden_channels
- self.filter_channels = filter_channels
- self.n_heads = n_heads
- self.n_layers = n_layers
- self.kernel_size = kernel_size
- self.p_dropout = p_dropout
- self.resblock = resblock
- self.resblock_kernel_sizes = resblock_kernel_sizes
- self.resblock_dilation_sizes = resblock_dilation_sizes
- self.upsample_rates = upsample_rates
- self.upsample_initial_channel = upsample_initial_channel
- self.upsample_kernel_sizes = upsample_kernel_sizes
- self.segment_size = segment_size
- self.gin_channels = gin_channels
- # self.hop_length = hop_length#
- self.spk_embed_dim = spk_embed_dim
- self.enc_p = TextEncoder256(
- inter_channels,
- hidden_channels,
- filter_channels,
- n_heads,
- n_layers,
- kernel_size,
- p_dropout,
- f0=False,
- )
- self.dec = Generator(
- inter_channels,
- resblock,
- resblock_kernel_sizes,
- resblock_dilation_sizes,
- upsample_rates,
- upsample_initial_channel,
- upsample_kernel_sizes,
- gin_channels=gin_channels,
- )
- self.enc_q = PosteriorEncoder(
- spec_channels,
- inter_channels,
- hidden_channels,
- 5,
- 1,
- 16,
- gin_channels=gin_channels,
- )
- self.flow = ResidualCouplingBlock(
- inter_channels, hidden_channels, 5, 1, 3, gin_channels=gin_channels
- )
- self.emb_g = nn.Embedding(self.spk_embed_dim, gin_channels)
- print("gin_channels:", gin_channels, "self.spk_embed_dim:", self.spk_embed_dim)
-
- def remove_weight_norm(self):
- self.dec.remove_weight_norm()
- self.flow.remove_weight_norm()
- self.enc_q.remove_weight_norm()
-
- def forward(self, phone, phone_lengths, y, y_lengths, ds):  # ds is the speaker id, shape [bs, 1]
- g = self.emb_g(ds).unsqueeze(-1)  # [b, 256, 1]; the trailing 1 is the time axis, broadcast later
- m_p, logs_p, x_mask = self.enc_p(phone, None, phone_lengths)
- z, m_q, logs_q, y_mask = self.enc_q(y, y_lengths, g=g)
- z_p = self.flow(z, y_mask, g=g)
- z_slice, ids_slice = commons.rand_slice_segments(
- z, y_lengths, self.segment_size
- )
- o = self.dec(z_slice, g=g)
- return o, ids_slice, x_mask, y_mask, (z, z_p, m_p, logs_p, m_q, logs_q)
-
- def infer(self, phone, phone_lengths, sid, max_len=None):
- g = self.emb_g(sid).unsqueeze(-1)
- m_p, logs_p, x_mask = self.enc_p(phone, None, phone_lengths)
- z_p = (m_p + torch.exp(logs_p) * torch.randn_like(m_p) * 0.66666) * x_mask
- z = self.flow(z_p, x_mask, g=g, reverse=True)
- o = self.dec((z * x_mask)[:, :, :max_len], g=g)
- return o, x_mask, (z, z_p, m_p, logs_p)
-
-
-class SynthesizerTrnMs768NSFsid_nono(nn.Module):
- def __init__(
- self,
- spec_channels,
- segment_size,
- inter_channels,
- hidden_channels,
- filter_channels,
- n_heads,
- n_layers,
- kernel_size,
- p_dropout,
- resblock,
- resblock_kernel_sizes,
- resblock_dilation_sizes,
- upsample_rates,
- upsample_initial_channel,
- upsample_kernel_sizes,
- spk_embed_dim,
- gin_channels,
- sr=None,
- **kwargs
- ):
- super().__init__()
- self.spec_channels = spec_channels
- self.inter_channels = inter_channels
- self.hidden_channels = hidden_channels
- self.filter_channels = filter_channels
- self.n_heads = n_heads
- self.n_layers = n_layers
- self.kernel_size = kernel_size
- self.p_dropout = p_dropout
- self.resblock = resblock
- self.resblock_kernel_sizes = resblock_kernel_sizes
- self.resblock_dilation_sizes = resblock_dilation_sizes
- self.upsample_rates = upsample_rates
- self.upsample_initial_channel = upsample_initial_channel
- self.upsample_kernel_sizes = upsample_kernel_sizes
- self.segment_size = segment_size
- self.gin_channels = gin_channels
- # self.hop_length = hop_length#
- self.spk_embed_dim = spk_embed_dim
- self.enc_p = TextEncoder768(
- inter_channels,
- hidden_channels,
- filter_channels,
- n_heads,
- n_layers,
- kernel_size,
- p_dropout,
- f0=False,
- )
- self.dec = Generator(
- inter_channels,
- resblock,
- resblock_kernel_sizes,
- resblock_dilation_sizes,
- upsample_rates,
- upsample_initial_channel,
- upsample_kernel_sizes,
- gin_channels=gin_channels,
- )
- self.enc_q = PosteriorEncoder(
- spec_channels,
- inter_channels,
- hidden_channels,
- 5,
- 1,
- 16,
- gin_channels=gin_channels,
- )
- self.flow = ResidualCouplingBlock(
- inter_channels, hidden_channels, 5, 1, 3, gin_channels=gin_channels
- )
- self.emb_g = nn.Embedding(self.spk_embed_dim, gin_channels)
- print("gin_channels:", gin_channels, "self.spk_embed_dim:", self.spk_embed_dim)
-
- def remove_weight_norm(self):
- self.dec.remove_weight_norm()
- self.flow.remove_weight_norm()
- self.enc_q.remove_weight_norm()
-
- def forward(self, phone, phone_lengths, y, y_lengths, ds):  # ds is the speaker id, shape [bs, 1]
- g = self.emb_g(ds).unsqueeze(-1)  # [b, 256, 1]; the trailing 1 is the time axis, broadcast later
- m_p, logs_p, x_mask = self.enc_p(phone, None, phone_lengths)
- z, m_q, logs_q, y_mask = self.enc_q(y, y_lengths, g=g)
- z_p = self.flow(z, y_mask, g=g)
- z_slice, ids_slice = commons.rand_slice_segments(
- z, y_lengths, self.segment_size
- )
- o = self.dec(z_slice, g=g)
- return o, ids_slice, x_mask, y_mask, (z, z_p, m_p, logs_p, m_q, logs_q)
-
- def infer(self, phone, phone_lengths, sid, max_len=None):
- g = self.emb_g(sid).unsqueeze(-1)
- m_p, logs_p, x_mask = self.enc_p(phone, None, phone_lengths)
- z_p = (m_p + torch.exp(logs_p) * torch.randn_like(m_p) * 0.66666) * x_mask
- z = self.flow(z_p, x_mask, g=g, reverse=True)
- o = self.dec((z * x_mask)[:, :, :max_len], g=g)
- return o, x_mask, (z, z_p, m_p, logs_p)
-
-
-class MultiPeriodDiscriminator(torch.nn.Module):
- def __init__(self, use_spectral_norm=False):
- super(MultiPeriodDiscriminator, self).__init__()
- periods = [2, 3, 5, 7, 11, 17]
- # periods = [3, 5, 7, 11, 17, 23, 37]
-
- discs = [DiscriminatorS(use_spectral_norm=use_spectral_norm)]
- discs = discs + [
- DiscriminatorP(i, use_spectral_norm=use_spectral_norm) for i in periods
- ]
- self.discriminators = nn.ModuleList(discs)
-
- def forward(self, y, y_hat):
- y_d_rs = []
- y_d_gs = []
- fmap_rs = []
- fmap_gs = []
- for i, d in enumerate(self.discriminators):
- y_d_r, fmap_r = d(y)
- y_d_g, fmap_g = d(y_hat)
- # for j in range(len(fmap_r)):
- # print(i,j,y.shape,y_hat.shape,fmap_r[j].shape,fmap_g[j].shape)
- y_d_rs.append(y_d_r)
- y_d_gs.append(y_d_g)
- fmap_rs.append(fmap_r)
- fmap_gs.append(fmap_g)
-
- return y_d_rs, y_d_gs, fmap_rs, fmap_gs
-
-
-class MultiPeriodDiscriminatorV2(torch.nn.Module):
- def __init__(self, use_spectral_norm=False):
- super(MultiPeriodDiscriminatorV2, self).__init__()
- # periods = [2, 3, 5, 7, 11, 17]
- periods = [2, 3, 5, 7, 11, 17, 23, 37]
-
- discs = [DiscriminatorS(use_spectral_norm=use_spectral_norm)]
- discs = discs + [
- DiscriminatorP(i, use_spectral_norm=use_spectral_norm) for i in periods
- ]
- self.discriminators = nn.ModuleList(discs)
-
- def forward(self, y, y_hat):
- y_d_rs = []
- y_d_gs = []
- fmap_rs = []
- fmap_gs = []
- for i, d in enumerate(self.discriminators):
- y_d_r, fmap_r = d(y)
- y_d_g, fmap_g = d(y_hat)
- # for j in range(len(fmap_r)):
- # print(i,j,y.shape,y_hat.shape,fmap_r[j].shape,fmap_g[j].shape)
- y_d_rs.append(y_d_r)
- y_d_gs.append(y_d_g)
- fmap_rs.append(fmap_r)
- fmap_gs.append(fmap_g)
-
- return y_d_rs, y_d_gs, fmap_rs, fmap_gs
-
-
-class DiscriminatorS(torch.nn.Module):
- def __init__(self, use_spectral_norm=False):
- super(DiscriminatorS, self).__init__()
- norm_f = spectral_norm if use_spectral_norm else weight_norm
- self.convs = nn.ModuleList(
- [
- norm_f(Conv1d(1, 16, 15, 1, padding=7)),
- norm_f(Conv1d(16, 64, 41, 4, groups=4, padding=20)),
- norm_f(Conv1d(64, 256, 41, 4, groups=16, padding=20)),
- norm_f(Conv1d(256, 1024, 41, 4, groups=64, padding=20)),
- norm_f(Conv1d(1024, 1024, 41, 4, groups=256, padding=20)),
- norm_f(Conv1d(1024, 1024, 5, 1, padding=2)),
- ]
- )
- self.conv_post = norm_f(Conv1d(1024, 1, 3, 1, padding=1))
-
- def forward(self, x):
- fmap = []
-
- for l in self.convs:
- x = l(x)
- x = F.leaky_relu(x, modules.LRELU_SLOPE)
- fmap.append(x)
- x = self.conv_post(x)
- fmap.append(x)
- x = torch.flatten(x, 1, -1)
-
- return x, fmap
-
-
-class DiscriminatorP(torch.nn.Module):
- def __init__(self, period, kernel_size=5, stride=3, use_spectral_norm=False):
- super(DiscriminatorP, self).__init__()
- self.period = period
- self.use_spectral_norm = use_spectral_norm
- norm_f = spectral_norm if use_spectral_norm else weight_norm
- self.convs = nn.ModuleList(
- [
- norm_f(
- Conv2d(
- 1,
- 32,
- (kernel_size, 1),
- (stride, 1),
- padding=(get_padding(kernel_size, 1), 0),
- )
- ),
- norm_f(
- Conv2d(
- 32,
- 128,
- (kernel_size, 1),
- (stride, 1),
- padding=(get_padding(kernel_size, 1), 0),
- )
- ),
- norm_f(
- Conv2d(
- 128,
- 512,
- (kernel_size, 1),
- (stride, 1),
- padding=(get_padding(kernel_size, 1), 0),
- )
- ),
- norm_f(
- Conv2d(
- 512,
- 1024,
- (kernel_size, 1),
- (stride, 1),
- padding=(get_padding(kernel_size, 1), 0),
- )
- ),
- norm_f(
- Conv2d(
- 1024,
- 1024,
- (kernel_size, 1),
- 1,
- padding=(get_padding(kernel_size, 1), 0),
- )
- ),
- ]
- )
- self.conv_post = norm_f(Conv2d(1024, 1, (3, 1), 1, padding=(1, 0)))
-
- def forward(self, x):
- fmap = []
-
- # 1d to 2d
- b, c, t = x.shape
- if t % self.period != 0: # pad first
- n_pad = self.period - (t % self.period)
- x = F.pad(x, (0, n_pad), "reflect")
- t = t + n_pad
- x = x.view(b, c, t // self.period, self.period)
-
- for l in self.convs:
- x = l(x)
- x = F.leaky_relu(x, modules.LRELU_SLOPE)
- fmap.append(x)
- x = self.conv_post(x)
- fmap.append(x)
- x = torch.flatten(x, 1, -1)
-
- return x, fmap
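Each `DiscriminatorP` folds the 1-D waveform into a 2-D grid of shape `(time // period, period)`, so the `(kernel_size, 1)` kernels convolve along the time axis at a fixed phase within each period. A minimal sketch of that reshape, mirroring the padding logic in `forward`:

```python
import torch
import torch.nn.functional as F

period = 3
x = torch.randn(1, 1, 10)                # (batch, channels, time); 10 % 3 != 0
n_pad = period - (x.shape[-1] % period)  # pad time up to a multiple of the period
x = F.pad(x, (0, n_pad), "reflect")
x = x.view(1, 1, x.shape[-1] // period, period)
print(x.shape)                           # torch.Size([1, 1, 4, 3])
```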
diff --git a/spaces/JUNGU/VToonify/vtoonify/model/encoder/__init__.py b/spaces/JUNGU/VToonify/vtoonify/model/encoder/__init__.py
deleted file mode 100644
index e69de29bb2d1d6434b8b29ae775ad8c2e48c5391..0000000000000000000000000000000000000000
diff --git a/spaces/Jamos1/AI_gamer89-insta/app.py b/spaces/Jamos1/AI_gamer89-insta/app.py
deleted file mode 100644
index c3b950d79209e5e4b903442a861cc89227c1448e..0000000000000000000000000000000000000000
--- a/spaces/Jamos1/AI_gamer89-insta/app.py
+++ /dev/null
@@ -1,93 +0,0 @@
-import gradio as gr
-import whisper
-from pytube import YouTube
-
-
-class GradioInference():
- def __init__(self):
- self.sizes = list(whisper._MODELS.keys())
- self.langs = ["none"] + sorted(list(whisper.tokenizer.LANGUAGES.values()))
- self.current_size = "base"
- self.loaded_model = whisper.load_model(self.current_size)
- self.yt = None
-
- def __call__(self, link, lang, size, subs):
- if self.yt is None:
- self.yt = YouTube(link)
- path = self.yt.streams.filter(only_audio=True)[0].download(filename="tmp.mp4")
-
- if lang == "none":
- lang = None
-
- if size != self.current_size:
- self.loaded_model = whisper.load_model(size)
- self.current_size = size
- results = self.loaded_model.transcribe(path, language=lang)
-
- if subs == "None":
- return results["text"]
- elif subs == ".srt":
- return self.srt(results["segments"])
- elif ".csv" == ".csv":
- return self.csv(results["segments"])
-
- def srt(self, segments):
- output = ""
- for i, segment in enumerate(segments):
- output += f"{i+1}\n"
- output += f"{self.format_time(segment['start'])} --> {self.format_time(segment['end'])}\n"
- output += f"{segment['text']}\n\n"
- return output
-
- def csv(self, segments):
- output = ""
- for segment in segments:
- output += f"{segment['start']},{segment['end']},{segment['text']}\n"
- return output
-
- def format_time(self, time):
- hours = time//3600
- minutes = (time - hours*3600)//60
- seconds = time - hours*3600 - minutes*60
- milliseconds = (time - int(time))*1000
- return f"{int(hours):02d}:{int(minutes):02d}:{int(seconds):02d},{int(milliseconds):03d}"
-
- def populate_metadata(self, link):
- self.yt = YouTube(link)
- return self.yt.thumbnail_url, self.yt.title
-
-gio = GradioInference()
-title="Youtube Whisperer"
-description="Speech to text transcription of Youtube videos using OpenAI's Whisper"
-
-block = gr.Blocks()
-with block:
- gr.HTML(
- """
- <div style="text-align: center; max-width: 500px; margin: 0 auto;">
- <h1>Youtube Whisperer</h1>
- <p>Speech to text transcription of Youtube videos using OpenAI's Whisper</p>
- </div>
- """
- )
- with gr.Group():
- with gr.Box():
- with gr.Row().style(equal_height=True):
- sz = gr.Dropdown(label="Model Size", choices=gio.sizes, value='base')
- lang = gr.Dropdown(label="Language (Optional)", choices=gio.langs, value="none")
- with gr.Row().style(equal_height=True):
- wt = gr.Radio(["None", ".srt", ".csv"], label="With Timestamps?")
- link = gr.Textbox(label="YouTube Link")
- title = gr.Label(label="Video Title")
- with gr.Row().style(equal_height=True):
- img = gr.Image(label="Thumbnail")
- text = gr.Textbox(label="Transcription", placeholder="Transcription Output", lines=10)
- with gr.Row().style(equal_height=True):
- btn = gr.Button("Transcribe")
- btn.click(gio, inputs=[link, lang, sz, wt], outputs=[text])
- link.change(gio.populate_metadata, inputs=[link], outputs=[img, title])
-block.launch()
\ No newline at end of file
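The `.srt` branch depends on `format_time` converting a float number of seconds into an SRT timestamp (`HH:MM:SS,mmm`). A standalone copy of that arithmetic, as a quick sanity check:

```python
def format_time(time):
    # Same arithmetic as GradioInference.format_time above.
    hours = time // 3600
    minutes = (time - hours * 3600) // 60
    seconds = time - hours * 3600 - minutes * 60
    milliseconds = (time - int(time)) * 1000
    return f"{int(hours):02d}:{int(minutes):02d}:{int(seconds):02d},{int(milliseconds):03d}"

print(format_time(3661.5))  # 01:01:01,500
```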
diff --git a/spaces/JeffJing/ZookChatBot/steamship/cli/manifest_init_wizard.py b/spaces/JeffJing/ZookChatBot/steamship/cli/manifest_init_wizard.py
deleted file mode 100644
index a71dc52c4350509ca722a5c1d5f95ad140cecc20..0000000000000000000000000000000000000000
--- a/spaces/JeffJing/ZookChatBot/steamship/cli/manifest_init_wizard.py
+++ /dev/null
@@ -1,96 +0,0 @@
-import re
-
-import click
-from click import BadParameter
-
-from steamship import Steamship
-from steamship.data.manifest import Manifest, PluginConfig, SteamshipRegistry
-from steamship.data.user import User
-
-
-def validate_handle(handle: str) -> str:
- if re.fullmatch(r"[a-z\-]+", handle) is not None:
- return handle
- else:
- raise BadParameter("Handle must only include lowercase letters and -")
-
-
-def validate_version_handle(handle: str) -> str:
- if re.fullmatch(r"[a-z0-9\-.]+", handle) is not None:
- return handle
- else:
- raise BadParameter("Handle must only include lowercase letters, numbers, . and -")
-
-
-def manifest_init_wizard(client: Steamship):
- click.secho(
- "It looks like you don't yet have a steamship.json to deploy. Let's create one.",
- fg="cyan",
- )
-
- deployable_type = click.prompt(
- "Is this a package or a plugin?",
- default="package",
- type=click.Choice(["package", "plugin"]),
- show_choices=False,
- )
-
- handle = click.prompt(
- f"What handle would you like to use for your {deployable_type}? Valid characters are a-z and -",
- value_proc=validate_handle,
- )
-
- # TODO: claim the handle right here!
-
- version_handle = "0.0.1"
-
- plugin_detail = None
- if deployable_type == "plugin":
- plugin_type = click.prompt(
- "What type of plugin is this?",
- default="tagger",
- type=click.Choice(
- ["tagger", "blockifier", "exporter", "fileImporter", "corpusImporter", "generator"]
- ),
- show_choices=True,
- )
- if plugin_type == "tagger":
- trainable = click.confirm("Is the plugin trainable?", default=False)
- else:
- trainable = False
- plugin_detail = PluginConfig(isTrainable=trainable, type=plugin_type)
-
- public = click.confirm(f"Do you want this {deployable_type} to be public?", default=True)
-
- user = User.current(client)
-
- author = click.prompt("How should we list your author name?", default=user.handle)
-
- tagline = None
- author_github = None
- if public:
- tagline = click.prompt(f"Want to give the {deployable_type} a tagline?", default="")
- author_github = click.prompt(
- "If you'd like this associated with your github account, please your github username",
- default="",
- )
-
- tag_string = click.prompt(
- f"Want to give the {deployable_type} some tags? (comma separated)", default="Prompt API"
- )
- tags = [tag.strip() for tag in tag_string.split(",")]
-
- return Manifest(
- type=deployable_type,
- handle=handle,
- version=version_handle,
- description="",
- author=author,
- public=public,
- plugin=plugin_detail,
- build_config={"ignore": ["tests", "examples"]},
- configTemplate={},
- steamshipRegistry=SteamshipRegistry(
- tagline=tagline, authorGithub=author_github, authorName=author, tags=tags
- ),
- )
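The two validators above are what `click.prompt` runs through `value_proc`: package handles allow only lowercase letters and hyphens, while version handles also admit digits and dots. For example:

```python
from click import BadParameter

print(validate_handle("my-plugin"))          # accepted: "my-plugin"
print(validate_version_handle("0.0.1-rc1"))  # accepted: "0.0.1-rc1"
try:
    validate_handle("My_Plugin")             # uppercase / underscore rejected
except BadParameter as err:
    print(err)  # "Handle must only include lowercase letters and -"
```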
diff --git a/spaces/JohnSmith9982/ChuanhuChatGPT_Beta/modules/models/azure.py b/spaces/JohnSmith9982/ChuanhuChatGPT_Beta/modules/models/azure.py
deleted file mode 100644
index 42cddfbda8cc74e40e114ee4bed46a2f9ff74ce9..0000000000000000000000000000000000000000
--- a/spaces/JohnSmith9982/ChuanhuChatGPT_Beta/modules/models/azure.py
+++ /dev/null
@@ -1,17 +0,0 @@
-from langchain.chat_models import AzureChatOpenAI
-import os
-
-from .base_model import Base_Chat_Langchain_Client
-
-# load_config_to_environ(["azure_openai_api_key", "azure_api_base_url", "azure_openai_api_version", "azure_deployment_name"])
-
-class Azure_OpenAI_Client(Base_Chat_Langchain_Client):
- def setup_model(self):
- # implement this to set up the model, then return it
- return AzureChatOpenAI(
- openai_api_base=os.environ["AZURE_OPENAI_API_BASE_URL"],
- openai_api_version=os.environ["AZURE_OPENAI_API_VERSION"],
- deployment_name=os.environ["AZURE_DEPLOYMENT_NAME"],
- openai_api_key=os.environ["AZURE_OPENAI_API_KEY"],
- openai_api_type="azure",
- )
\ No newline at end of file
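The client is configured entirely through environment variables; nothing is passed in code. A sketch of the four names it reads, with placeholder values (these are illustrative, not real endpoints or keys):

```python
import os

os.environ["AZURE_OPENAI_API_BASE_URL"] = "https://example.openai.azure.com"
os.environ["AZURE_OPENAI_API_VERSION"] = "2023-05-15"  # placeholder version
os.environ["AZURE_DEPLOYMENT_NAME"] = "gpt-35-turbo"   # placeholder deployment
os.environ["AZURE_OPENAI_API_KEY"] = "..."             # never hard-code real keys
```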
diff --git a/spaces/JohnSmith9982/VITS-Umamusume-voice-synthesizer/ONNXVITS_transforms.py b/spaces/JohnSmith9982/VITS-Umamusume-voice-synthesizer/ONNXVITS_transforms.py
deleted file mode 100644
index 69b6d1c4b5724a3ef61f8bc3d64fc45c5e51e270..0000000000000000000000000000000000000000
--- a/spaces/JohnSmith9982/VITS-Umamusume-voice-synthesizer/ONNXVITS_transforms.py
+++ /dev/null
@@ -1,196 +0,0 @@
-import torch
-from torch.nn import functional as F
-
-import numpy as np
-
-
-DEFAULT_MIN_BIN_WIDTH = 1e-3
-DEFAULT_MIN_BIN_HEIGHT = 1e-3
-DEFAULT_MIN_DERIVATIVE = 1e-3
-
-
-def piecewise_rational_quadratic_transform(inputs,
- unnormalized_widths,
- unnormalized_heights,
- unnormalized_derivatives,
- inverse=False,
- tails=None,
- tail_bound=1.,
- min_bin_width=DEFAULT_MIN_BIN_WIDTH,
- min_bin_height=DEFAULT_MIN_BIN_HEIGHT,
- min_derivative=DEFAULT_MIN_DERIVATIVE):
-
- if tails is None:
- spline_fn = rational_quadratic_spline
- spline_kwargs = {}
- else:
- spline_fn = unconstrained_rational_quadratic_spline
- spline_kwargs = {
- 'tails': tails,
- 'tail_bound': tail_bound
- }
-
- outputs, logabsdet = spline_fn(
- inputs=inputs,
- unnormalized_widths=unnormalized_widths,
- unnormalized_heights=unnormalized_heights,
- unnormalized_derivatives=unnormalized_derivatives,
- inverse=inverse,
- min_bin_width=min_bin_width,
- min_bin_height=min_bin_height,
- min_derivative=min_derivative,
- **spline_kwargs
- )
- return outputs, logabsdet
-
-
-def searchsorted(bin_locations, inputs, eps=1e-6):
- bin_locations[..., -1] += eps
- return torch.sum(
- inputs[..., None] >= bin_locations,
- dim=-1
- ) - 1
-
-
-def unconstrained_rational_quadratic_spline(inputs,
- unnormalized_widths,
- unnormalized_heights,
- unnormalized_derivatives,
- inverse=False,
- tails='linear',
- tail_bound=1.,
- min_bin_width=DEFAULT_MIN_BIN_WIDTH,
- min_bin_height=DEFAULT_MIN_BIN_HEIGHT,
- min_derivative=DEFAULT_MIN_DERIVATIVE):
- inside_interval_mask = (inputs >= -tail_bound) & (inputs <= tail_bound)
- outside_interval_mask = ~inside_interval_mask
-
- outputs = torch.zeros_like(inputs)
- logabsdet = torch.zeros_like(inputs)
-
- if tails == 'linear':
- #unnormalized_derivatives = F.pad(unnormalized_derivatives, pad=(1, 1))
- unnormalized_derivatives_ = torch.zeros((1, 1, unnormalized_derivatives.size(2), unnormalized_derivatives.size(3)+2))
- unnormalized_derivatives_[...,1:-1] = unnormalized_derivatives
- unnormalized_derivatives = unnormalized_derivatives_
- constant = np.log(np.exp(1 - min_derivative) - 1)
- unnormalized_derivatives[..., 0] = constant
- unnormalized_derivatives[..., -1] = constant
-
- outputs[outside_interval_mask] = inputs[outside_interval_mask]
- logabsdet[outside_interval_mask] = 0
- else:
- raise RuntimeError('{} tails are not implemented.'.format(tails))
-
- outputs[inside_interval_mask], logabsdet[inside_interval_mask] = rational_quadratic_spline(
- inputs=inputs[inside_interval_mask],
- unnormalized_widths=unnormalized_widths[inside_interval_mask, :],
- unnormalized_heights=unnormalized_heights[inside_interval_mask, :],
- unnormalized_derivatives=unnormalized_derivatives[inside_interval_mask, :],
- inverse=inverse,
- left=-tail_bound, right=tail_bound, bottom=-tail_bound, top=tail_bound,
- min_bin_width=min_bin_width,
- min_bin_height=min_bin_height,
- min_derivative=min_derivative
- )
-
- return outputs, logabsdet
-
-def rational_quadratic_spline(inputs,
- unnormalized_widths,
- unnormalized_heights,
- unnormalized_derivatives,
- inverse=False,
- left=0., right=1., bottom=0., top=1.,
- min_bin_width=DEFAULT_MIN_BIN_WIDTH,
- min_bin_height=DEFAULT_MIN_BIN_HEIGHT,
- min_derivative=DEFAULT_MIN_DERIVATIVE):
- if torch.min(inputs) < left or torch.max(inputs) > right:
- raise ValueError('Input to a transform is not within its domain')
-
- num_bins = unnormalized_widths.shape[-1]
-
- if min_bin_width * num_bins > 1.0:
- raise ValueError('Minimal bin width too large for the number of bins')
- if min_bin_height * num_bins > 1.0:
- raise ValueError('Minimal bin height too large for the number of bins')
-
- widths = F.softmax(unnormalized_widths, dim=-1)
- widths = min_bin_width + (1 - min_bin_width * num_bins) * widths
- cumwidths = torch.cumsum(widths, dim=-1)
- cumwidths = F.pad(cumwidths, pad=(1, 0), mode='constant', value=0.0)
- cumwidths = (right - left) * cumwidths + left
- cumwidths[..., 0] = left
- cumwidths[..., -1] = right
- widths = cumwidths[..., 1:] - cumwidths[..., :-1]
-
- derivatives = min_derivative + F.softplus(unnormalized_derivatives)
-
- heights = F.softmax(unnormalized_heights, dim=-1)
- heights = min_bin_height + (1 - min_bin_height * num_bins) * heights
- cumheights = torch.cumsum(heights, dim=-1)
- cumheights = F.pad(cumheights, pad=(1, 0), mode='constant', value=0.0)
- cumheights = (top - bottom) * cumheights + bottom
- cumheights[..., 0] = bottom
- cumheights[..., -1] = top
- heights = cumheights[..., 1:] - cumheights[..., :-1]
-
- if inverse:
- bin_idx = searchsorted(cumheights, inputs)[..., None]
- else:
- bin_idx = searchsorted(cumwidths, inputs)[..., None]
-
- input_cumwidths = cumwidths.gather(-1, bin_idx)[..., 0]
- input_bin_widths = widths.gather(-1, bin_idx)[..., 0]
-
- input_cumheights = cumheights.gather(-1, bin_idx)[..., 0]
- delta = heights / widths
- input_delta = delta.gather(-1, bin_idx)[..., 0]
-
- input_derivatives = derivatives.gather(-1, bin_idx)[..., 0]
- input_derivatives_plus_one = derivatives[..., 1:].gather(-1, bin_idx)[..., 0]
-
- input_heights = heights.gather(-1, bin_idx)[..., 0]
-
- if inverse:
- a = (((inputs - input_cumheights) * (input_derivatives
- + input_derivatives_plus_one
- - 2 * input_delta)
- + input_heights * (input_delta - input_derivatives)))
- b = (input_heights * input_derivatives
- - (inputs - input_cumheights) * (input_derivatives
- + input_derivatives_plus_one
- - 2 * input_delta))
- c = - input_delta * (inputs - input_cumheights)
-
- discriminant = b.pow(2) - 4 * a * c
- assert (discriminant >= 0).all()
-
- root = (2 * c) / (-b - torch.sqrt(discriminant))
- outputs = root * input_bin_widths + input_cumwidths
-
- theta_one_minus_theta = root * (1 - root)
- denominator = input_delta + ((input_derivatives + input_derivatives_plus_one - 2 * input_delta)
- * theta_one_minus_theta)
- derivative_numerator = input_delta.pow(2) * (input_derivatives_plus_one * root.pow(2)
- + 2 * input_delta * theta_one_minus_theta
- + input_derivatives * (1 - root).pow(2))
- logabsdet = torch.log(derivative_numerator) - 2 * torch.log(denominator)
-
- return outputs, -logabsdet
- else:
- theta = (inputs - input_cumwidths) / input_bin_widths
- theta_one_minus_theta = theta * (1 - theta)
-
- numerator = input_heights * (input_delta * theta.pow(2)
- + input_derivatives * theta_one_minus_theta)
- denominator = input_delta + ((input_derivatives + input_derivatives_plus_one - 2 * input_delta)
- * theta_one_minus_theta)
- outputs = input_cumheights + numerator / denominator
-
- derivative_numerator = input_delta.pow(2) * (input_derivatives_plus_one * theta.pow(2)
- + 2 * input_delta * theta_one_minus_theta
- + input_derivatives * (1 - theta).pow(2))
- logabsdet = torch.log(derivative_numerator) - 2 * torch.log(denominator)
-
- return outputs, logabsdet
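To make the tensor contract concrete, here is a hypothetical smoke test; the shapes are an assumption read off the `'linear'`-tails branch above, which pads `num_bins - 1` interior derivative knots out to `num_bins + 1`:

```python
import torch

T, num_bins = 8, 10
inputs = torch.rand(1, 1, T) * 2 - 1        # values inside [-tail_bound, tail_bound]
widths = torch.randn(1, 1, T, num_bins)
heights = torch.randn(1, 1, T, num_bins)
derivs = torch.randn(1, 1, T, num_bins - 1) # interior knots only

out, logabsdet = piecewise_rational_quadratic_transform(
    inputs, widths, heights, derivs, inverse=False, tails="linear", tail_bound=1.0
)
print(out.shape, logabsdet.shape)           # torch.Size([1, 1, 8]) for both
```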
diff --git a/spaces/Justin-Choo/Multi_diffuser-quick-diffusion-CN-ZH/README.md b/spaces/Justin-Choo/Multi_diffuser-quick-diffusion-CN-ZH/README.md
deleted file mode 100644
index cadaf9d42fc9b5b1260e9b99e815064a95e99854..0000000000000000000000000000000000000000
--- a/spaces/Justin-Choo/Multi_diffuser-quick-diffusion-CN-ZH/README.md
+++ /dev/null
@@ -1,12 +0,0 @@
----
-title: Quick Diffusion Multi-diffusers
-emoji: 🎩
-colorFrom: red
-colorTo: blue
-sdk: gradio
-sdk_version: 3.39.0
-app_file: I-am-Justin.py
-pinned: true
----
-
-Check out the configuration reference at https://huggingface.co/docs/hub/spaces-config-reference
\ No newline at end of file
diff --git a/spaces/KPCGD/bingo/next.config.js b/spaces/KPCGD/bingo/next.config.js
deleted file mode 100644
index 0e6ccd7fbc91d0459eaaff3e968ce0556789c605..0000000000000000000000000000000000000000
--- a/spaces/KPCGD/bingo/next.config.js
+++ /dev/null
@@ -1,38 +0,0 @@
-/** @type {import('next').NextConfig} */
-const nextConfig = {
- // output: 'export',
- // assetPrefix: '.',
- webpack: (config, { isServer }) => {
- if (!isServer) {
- config.resolve = {
- ...config.resolve,
- fallback: {
- 'bufferutil': false,
- 'utf-8-validate': false,
- http: false,
- https: false,
- stream: false,
- // fixes proxy-agent dependencies
- net: false,
- dns: false,
- tls: false,
- assert: false,
- // fixes next-i18next dependencies
- path: false,
- fs: false,
- // fixes mapbox dependencies
- events: false,
- // fixes sentry dependencies
- process: false
- }
- };
- }
- config.module.exprContextCritical = false;
-
- return config;
- },
-}
-
-module.exports = (...args) => {
- return nextConfig
-}
diff --git a/spaces/Kayson/InstructDiffusion/dataset/editing/edit_zip_dataset.py b/spaces/Kayson/InstructDiffusion/dataset/editing/edit_zip_dataset.py
deleted file mode 100644
index 0d87467c24dee8175bf40b134e786884175b1e7d..0000000000000000000000000000000000000000
--- a/spaces/Kayson/InstructDiffusion/dataset/editing/edit_zip_dataset.py
+++ /dev/null
@@ -1,494 +0,0 @@
-# --------------------------------------------------------
-# InstructDiffusion
-# Based on instruct-pix2pix (https://github.com/timothybrooks/instruct-pix2pix)
-# Modified by Tiankai Hang (tkhang@seu.edu.cn)
-# --------------------------------------------------------
-
-from __future__ import annotations
-
-import os
-import json
-import math
-from pathlib import Path
-from typing import Any
-
-import numpy as np
-import torch
-import torchvision
-from einops import rearrange
-import PIL
-from PIL import Image
-from torch.utils.data import Dataset
-from tqdm.auto import tqdm
-
-import random
-
-from dataset.utils.zip_manager import MultipleZipManager
-
-
-if hasattr(Image, "Resampling"):
- # deprecated in pillow >= 10.0.0
- RESAMPLING_METHOD = Image.Resampling.LANCZOS
-else:
- RESAMPLING_METHOD = Image.LANCZOS
-
-
-class FilteredIP2PDataset(Dataset):
- def __init__(
- self,
- path: str,
- split: str = "train",
- splits: tuple[float, float, float] = (0.9, 0.05, 0.05),
- min_resize_res: int = 256,
- max_resize_res: int = 256,
- crop_res: int = 256,
- flip_prob: float = 0.0,
- zip_start_index: int = 0,
- zip_end_index: int = 30,
- instruct: bool = False,
- max_num_images = None,
- sample_weight: float = 1.0,
- reverse_version: bool = False,
- **kwargs
- ):
- assert split in ("train", "val", "test")
- assert sum(splits) == 1
- self.path = path
- self.min_resize_res = min_resize_res
- self.max_resize_res = max_resize_res
- self.crop_res = crop_res
- self.flip_prob = flip_prob
- self.instruct = instruct
-
- zip_list = []
- for i in range(zip_start_index, zip_end_index):
- name = "shard-"+str(i).zfill(2)+'.zip'
- zip_list.append(os.path.join(self.path, name))
-
- self.image_dataset = MultipleZipManager(zip_list, 'image', sync=True) # sync=True is faster
-
- with open(Path(self.path, "seeds.json")) as f:
- self.seeds = json.load(f)
-
- split_0, split_1 = {
- "train": (0.0, splits[0]),
- "val": (splits[0], splits[0] + splits[1]),
- "test": (splits[0] + splits[1], 1.0),
- }[split]
-
- idx_0 = math.floor(split_0 * len(self.seeds))
- idx_1 = math.floor(split_1 * len(self.seeds))
- self.seeds = self.seeds[idx_0:idx_1]
-
- if max_num_images is not None and max_num_images > 0:
- self.seeds = self.seeds[:min(max_num_images, len(self.seeds))]
-
- # flatten seeds
- self.seeds = [(name, seed) for name, seeds in self.seeds for seed in seeds]
- self.sample_weight = sample_weight
-
- while True:
- try:
- with open('filtered_ids_ip2p.json') as json_file:
- filtered_ids = json.load(json_file)
- break
- except (FileNotFoundError, json.JSONDecodeError):
- # download json file from url
- if reverse_version:
- os.system('wget https://github.com/TiankaiHang/storage/releases/download/readout/filtered_ids_ip2p.json')
- else:
- os.system("wget https://github.com/TiankaiHang/storage/releases/download/readout/filtered-ip2p-thres5.5-0.5.json -O filtered_ids_ip2p.json")
-
- print("seeds:", len(self.seeds))
- # self.seeds = [seed for seed in self.seeds if seed[1] in filtered_ids]
- # faster
- # self.seeds = list(filter(lambda seed: seed[1] in filtered_ids, self.seeds))
- # to numpy and faster in parallel
- # import pdb; pdb.set_trace()
- _seeds = [f"{a}/{b}" for a, b in self.seeds]
- self.seeds = np.array(self.seeds)
- _seeds = np.array(_seeds)
- self.seeds = self.seeds[np.isin(_seeds, filtered_ids)]
- self.seeds = self.seeds.tolist()
-
- self.return_add_kwargs = kwargs.get("return_add_kwargs", False)
-
- def __len__(self) -> int:
- return int(len(self.seeds) * self.sample_weight)
-
- def __getitem__(self, i: int) -> dict[str, Any]:
- # name, seeds = self.seeds[i]
- if self.sample_weight >= 1:
- i = i % len(self.seeds)
- else:
- remainder = math.ceil(i / self.sample_weight - int(i / self.sample_weight))
- i = int(i / self.sample_weight) + random.randint(0, int(1 / self.sample_weight) - 1 + remainder)
-
- name, seed = self.seeds[i]
- prompt_name = name + "/prompt.json"
- if not self.image_dataset.managers[self.image_dataset.mapping[prompt_name]]._init:
- self.image_dataset.managers[self.image_dataset.mapping[prompt_name]].initialize(close=False)
- byteflow = self.image_dataset.managers[self.image_dataset.mapping[prompt_name]].zip_fd.read(prompt_name)
- texts = json.loads(byteflow.decode('utf-8'))
- prompt = texts["edit"]
- if self.instruct:
- prompt = "Image Editing: " + prompt
-
- text_input = texts["input"]
- text_output = texts["output"]
-
- # image_0 = Image.open(propt_dir.joinpath(f"{seed}_0.jpg"))
- # image_1 = Image.open(propt_dir.joinpath(f"{seed}_1.jpg"))
- image_0 = self.image_dataset.get(name+f"/{seed}_0.jpg")
- image_1 = self.image_dataset.get(name+f"/{seed}_1.jpg")
-
- resize_res = torch.randint(self.min_resize_res, self.max_resize_res + 1, ()).item()
- image_0 = image_0.resize((resize_res, resize_res), RESAMPLING_METHOD)
- image_1 = image_1.resize((resize_res, resize_res), RESAMPLING_METHOD)
-
- image_0 = rearrange(2 * torch.tensor(np.array(image_0)).float() / 255 - 1, "h w c -> c h w")
- image_1 = rearrange(2 * torch.tensor(np.array(image_1)).float() / 255 - 1, "h w c -> c h w")
-
- crop = torchvision.transforms.RandomCrop(self.crop_res)
- flip = torchvision.transforms.RandomHorizontalFlip(float(self.flip_prob))
- image_0, image_1 = flip(crop(torch.cat((image_0, image_1)))).chunk(2)
-
- if self.return_add_kwargs:
- add_kwargs = dict(
- name=name,
- seed=seed,
- text_input=text_input,
- text_output=text_output,
- )
- else:
- add_kwargs = {}
-
- return dict(edited=image_1, edit=dict(c_concat=image_0, c_crossattn=prompt), **add_kwargs)
-
-
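`FilteredIP2PDataset` (and the datasets below) stretch or shrink their effective length with `sample_weight`: `__len__` is scaled, and `__getitem__` remaps the incoming index. A standalone sketch of that mapping, copied from the branch above (the other datasets use the same idea without the `remainder` term):

```python
import math
import random

def remap_index(i, n, weight):
    # weight >= 1: oversample by cycling through the n real items.
    if weight >= 1:
        return i % n
    # weight < 1: subsample by jumping to a random slot within each 1/weight window.
    remainder = math.ceil(i / weight - int(i / weight))
    return int(i / weight) + random.randint(0, int(1 / weight) - 1 + remainder)

print(remap_index(150, 100, 2.0))  # 50: second pass over the data
print(remap_index(10, 100, 0.5))   # 20 or 21: one of two candidates
```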
-class GIERDataset(Dataset):
- def __init__(
- self,
- path: str,
- split: str = "train",
- splits: tuple[float, float, float] = (0.9, 0.05, 0.05),
- min_resize_res: int = 256,
- max_resize_res: int = 256,
- crop_res: int = 256,
- flip_prob: float = 0.0,
- zip_start_index: int = 0,
- zip_end_index: int = 30,
- sample_weight: float = 1.0,
- instruct: bool = False,
- ):
- assert split in ("train", "val", "test")
- assert sum(splits) == 1
- self.path = path
- self.min_resize_res = min_resize_res
- self.max_resize_res = max_resize_res
- self.crop_res = crop_res
- self.flip_prob = flip_prob
- self.instruct = instruct
-
- # self.meta = torch.load(Path(self.path, "GIER.json"), map_location="cpu")
- # load json file
- with open(Path(self.path, "GIER_new.json")) as json_file:
- self.meta = json.load(json_file)
-
- print(f"||||||||||||||||||||||||||||| \n Loaded {len(self.meta)} images from json file")
-
- input_does_not_exist = []
- output_does_not_exist = []
- # filter out images that do not exist
- if not os.path.exists(os.path.join(self.path, "filtered_meta_new.pt")):
- filtered_meta = []
- for i in tqdm(range(len(self.meta))):
- input_path = os.path.join(self.path, "warped", self.meta[i]["input"])
- output_path = os.path.join(self.path, "warped", self.meta[i]["output"])
-
- if not os.path.exists(input_path):
- input_path = os.path.join(self.path, "images", self.meta[i]["input"])
- if not os.path.exists(input_path):
- input_does_not_exist.append(input_path)
-
- if not os.path.exists(output_path):
- output_path = os.path.join(self.path, "images", self.meta[i]["output"])
- if not os.path.exists(output_path):
- output_does_not_exist.append(output_path)
-
- if os.path.exists(input_path) and os.path.exists(output_path):
- filtered_meta.append(
- dict(
- input=input_path,
- output=output_path,
- prompts=self.meta[i]["prompts"],
- )
- )
- else:
- print(f"\n {input_path} or {output_path} does not exist")
- torch.save(filtered_meta, os.path.join(self.path, "filtered_meta_new.pt"))
- else:
- filtered_meta = torch.load(os.path.join(self.path, "filtered_meta_new.pt"), map_location="cpu")
-
- self.meta = filtered_meta
- print(f"||||||||||||||||||||||||||||| \n Filtered {len(self.meta)} images")
- for i in range(len(self.meta)):
- self.meta[i]['input'] = self.meta[i]['input'].replace('/mnt/external/datasets/GIER_editing_data/', self.path)
- self.meta[i]['output'] = self.meta[i]['output'].replace('/mnt/external/datasets/GIER_editing_data/', self.path)
-
- # write input_does_not_exist and output_does_not_exist to file
- with open(Path(self.path, f"input_does_not_exist.txt"), "w") as f:
- for item in input_does_not_exist:
- f.write("%s\n" % item)
- with open(Path(self.path, f"output_does_not_exist.txt"), "w") as f:
- for item in output_does_not_exist:
- f.write("%s\n" % item)
-
- split_0, split_1 = {
- "train": (0.0, splits[0]),
- "val": (splits[0], splits[0] + splits[1]),
- "test": (splits[0] + splits[1], 1.0),
- }[split]
-
- idx_0 = math.floor(split_0 * len(self.meta))
- idx_1 = math.floor(split_1 * len(self.meta))
-
- self.meta = self.meta[idx_0:idx_1]
- self.sample_weight = sample_weight
- print('original GIER', len(self.meta))
-
- def __len__(self) -> int:
- return int(len(self.meta) * self.sample_weight)
-
- def __getitem__(self, i: int) -> dict[str, Any]:
- if self.sample_weight >= 1:
- i = i % len(self.meta)
- else:
- i = int(i / self.sample_weight) + random.randint(0, int(1 / self.sample_weight) - 1)
-
- # prompt = self.meta[i]["prompts"]
- prompt = random.choice(self.meta[i]["prompts"])
- try:
- image_0 = Image.open(self.meta[i]["input"]).convert("RGB")
- image_1 = Image.open(self.meta[i]["output"]).convert("RGB")
- except PIL.UnidentifiedImageError:
- print(f"\n {self.meta[i]['input']} or {self.meta[i]['output']} is not a valid image")
- i = random.randint(0, len(self.meta) - 1)
- return self.__getitem__(i)
-
- resize_res = torch.randint(self.min_resize_res, self.max_resize_res + 1, ()).item()
- image_0 = image_0.resize((resize_res, resize_res), RESAMPLING_METHOD)
- image_1 = image_1.resize((resize_res, resize_res), RESAMPLING_METHOD)
-
- image_0 = rearrange(2 * torch.tensor(np.array(image_0)).float() / 255 - 1, "h w c -> c h w")
- image_1 = rearrange(2 * torch.tensor(np.array(image_1)).float() / 255 - 1, "h w c -> c h w")
-
- crop = torchvision.transforms.RandomCrop(self.crop_res)
- flip = torchvision.transforms.RandomHorizontalFlip(float(self.flip_prob))
- image_0, image_1 = flip(crop(torch.cat((image_0, image_1)))).chunk(2)
-
- if self.instruct:
- prompt = "Image Editing: " + prompt
-
- return dict(edited=image_1, edit=dict(c_concat=image_0, c_crossattn=prompt))
-
-
-class GQAInpaintDataset(Dataset):
- r"""
- should download and unzip the data first
-
- ```
- mkdir -p ../datasets
- cd ../datasets
-
- # if file exists, then skip
- if [ ! -f "gqa-inpaint.zip" ]; then
- sudo azcopy copy "https://bingdatawu2.blob.core.windows.net/genrecog/private/t-thang/gqa-inpaint.zip${TOKEN}" .
- unzip gqa-inpaint.zip -d gqa-inpaint > /dev/null
- fi
-
- if [ ! -f "images.zip" ]; then
- sudo azcopy copy "https://bingdatawu2.blob.core.windows.net/genrecog/private/t-thang/images.zip${TOKEN}" .
- unzip images.zip > /dev/null
- fi
- ```
-
- """
- def __init__(self, **kwargs):
- # load from json ../datasets/gqa-inpaint/meta_info.json
- self.path = kwargs.get("path", "../datasets/gqa-inpaint")
- self.instruct = kwargs.get("instruct", False)
- with open(self.path + "/meta_info.json", "r") as f:
- self.meta_info = json.load(f)
-
- self.min_resize_res = kwargs.get("min_resize_res", 256)
- self.max_resize_res = kwargs.get("max_resize_res", 256)
- self.crop_res = kwargs.get("crop_res", 256)
-
- self.flip_prob = kwargs.get("flip_prob", 0.5)
-
- def __len__(self):
- return len(self.meta_info)
-
- def __getitem__(self, i):
- item = self.meta_info[i]
- src_img = Image.open(item["source_image_path"].replace("../datasets", self.path)).convert("RGB")
- tgt_img = Image.open(item["target_image_path"].replace("../datasets/gqa-inpaint", self.path)).convert("RGB")
-
- image_0 = src_img
- image_1 = tgt_img
-
- resize_res = torch.randint(self.min_resize_res, self.max_resize_res + 1, ()).item()
- image_0 = image_0.resize((resize_res, resize_res), RESAMPLING_METHOD)
- image_1 = image_1.resize((resize_res, resize_res), RESAMPLING_METHOD)
- instruction = item["instruction"]
- if self.instruct:
- instruction = "Image Editing: " + instruction
- # return image_0, image_1, instruction
-
- image_0 = rearrange(2 * torch.tensor(np.array(image_0)).float() / 255 - 1, "h w c -> c h w")
- image_1 = rearrange(2 * torch.tensor(np.array(image_1)).float() / 255 - 1, "h w c -> c h w")
-
- crop = torchvision.transforms.RandomCrop(self.crop_res)
- flip = torchvision.transforms.RandomHorizontalFlip(float(self.flip_prob))
- image_0, image_1 = flip(crop(torch.cat((image_0, image_1)))).chunk(2)
-
- return dict(edited=image_1, edit=dict(c_concat=image_0, c_crossattn=instruction))
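-
-
-def _paired_crop_flip(image_0, image_1, crop_res, flip_prob):
-    """A minimal sketch (an assumed helper, not in the original file) of the
-    paired augmentation used above: concatenating source and target along
-    the channel axis guarantees that RandomCrop and RandomHorizontalFlip
-    apply the exact same spatial transform to both before they are split."""
-    crop = torchvision.transforms.RandomCrop(crop_res)
-    flip = torchvision.transforms.RandomHorizontalFlip(float(flip_prob))
-    return flip(crop(torch.cat((image_0, image_1)))).chunk(2)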
-
-
-class MagicBrushDataset(Dataset):
- def __init__(
- self,
- path: str,
- split: str = "train",
- splits: tuple[float, float, float] = (0.9, 0.05, 0.05),
- min_resize_res: int = 256,
- max_resize_res: int = 256,
- crop_res: int = 256,
- flip_prob: float = 0.0,
- zip_start_index: int = 0,
- zip_end_index: int = 30,
- len_dataset: int = -1,
- instruct: bool = False,
- sample_weight: float = 1.0,
- ):
- assert split in ("train", "val", "test")
- assert sum(splits) == 1
- self.path = path
- self.min_resize_res = min_resize_res
- self.max_resize_res = max_resize_res
- self.crop_res = crop_res
- self.flip_prob = flip_prob
- self.instruct = instruct
- self.sample_weight = sample_weight
-
- self.meta_path = os.path.join(self.path, "magic_train.json")
- with open(self.meta_path, "r") as f:
- self.meta = json.load(f)
-
- def __len__(self) -> int:
- return int(len(self.meta) * self.sample_weight)
-
- def __getitem__(self, i: int) -> dict[str, Any]:
- if self.sample_weight >= 1:
- i = i % len(self.meta)
- else:
- i = int(i / self.sample_weight) + random.randint(0, int(1 / self.sample_weight) - 1)
-
- item = self.meta[i]
- try:
- image_0 = Image.open(os.path.join(self.path, item["input"])).convert("RGB")
- image_1 = Image.open(os.path.join(self.path, item["edited"])).convert("RGB")
- except (PIL.UnidentifiedImageError, FileNotFoundError):
- print(f"\n {self.path}/{item['input']} or {self.path}/{item['edited']} is not a valid image")
- i = random.randint(0, len(self.meta) - 1)
- return self.__getitem__(i)
- prompt = item["instruction"]
-
-        resize_res = torch.randint(self.min_resize_res, self.max_resize_res + 1, ()).item()
-        image_0 = image_0.resize((resize_res, resize_res), RESAMPLING_METHOD)
-        image_1 = image_1.resize((resize_res, resize_res), RESAMPLING_METHOD)
-
- if self.instruct:
- prompt = "Image Editing: " + prompt
- # return image_0, image_1, prompt
-
- image_0 = rearrange(2 * torch.tensor(np.array(image_0)).float() / 255 - 1, "h w c -> c h w")
- image_1 = rearrange(2 * torch.tensor(np.array(image_1)).float() / 255 - 1, "h w c -> c h w")
-
- crop = torchvision.transforms.RandomCrop(self.crop_res)
- flip = torchvision.transforms.RandomHorizontalFlip(float(self.flip_prob))
- image_0, image_1 = flip(crop(torch.cat((image_0, image_1)))).chunk(2)
-
- return dict(edited=image_1, edit=dict(c_concat=image_0, c_crossattn=prompt))
-
-
-class IEIWDataset(Dataset):
- def __init__(
- self,
- path: str,
- split: str = "train",
- splits: tuple[float, float, float] = (0.9, 0.05, 0.05),
- min_resize_res: int = 256,
- max_resize_res: int = 256,
- crop_res: int = 256,
- flip_prob: float = 0.0,
- zip_start_index: int = 0,
- zip_end_index: int = 30,
- sample_weight: float = 1.0,
- instruct: bool = False,
- ):
- assert split in ("train", "val", "test")
- assert sum(splits) == 1
- self.path = path
- self.min_resize_res = min_resize_res
- self.max_resize_res = max_resize_res
- self.crop_res = crop_res
- self.flip_prob = flip_prob
- self.instruct = instruct
-
- self.meta_path = os.path.join(self.path, "meta_infov1.json")
- with open(self.meta_path, "r") as f:
- self.meta = json.load(f)
- self.sample_weight = sample_weight
- print('original synthetic', len(self.meta))
-
- def __len__(self) -> int:
- return int(len(self.meta) * self.sample_weight)
-
- def __getitem__(self, i: int) -> dict[str, Any]:
- if self.sample_weight >= 1:
- i = i % len(self.meta)
- else:
- i = int(i / self.sample_weight) + random.randint(0, int(1 / self.sample_weight) - 1)
-
- item = self.meta[i]
- item['input'] = item['input'].replace('/mnt/external/tmp/2023/06/11/', self.path)
- item['edited'] = item['edited'].replace('/mnt/external/tmp/2023/06/11/', self.path)
- try:
- image_0 = Image.open(item["input"]).convert("RGB")
- image_1 = Image.open(item["edited"]).convert("RGB")
- except (PIL.UnidentifiedImageError, FileNotFoundError):
- print(f"\n {item['input']} or {item['edited']} is not a valid image")
- i = random.randint(0, len(self.meta) - 1)
- return self.__getitem__(i)
- prompt = item["instruction"]
-
-        resize_res = torch.randint(self.min_resize_res, self.max_resize_res + 1, ()).item()
-        image_0 = image_0.resize((resize_res, resize_res), RESAMPLING_METHOD)
-        image_1 = image_1.resize((resize_res, resize_res), RESAMPLING_METHOD)
- if self.instruct:
- prompt = "Image Editing: " + prompt
- # return image_0, image_1, prompt
-
- image_0 = rearrange(2 * torch.tensor(np.array(image_0)).float() / 255 - 1, "h w c -> c h w")
- image_1 = rearrange(2 * torch.tensor(np.array(image_1)).float() / 255 - 1, "h w c -> c h w")
-
- crop = torchvision.transforms.RandomCrop(self.crop_res)
- flip = torchvision.transforms.RandomHorizontalFlip(float(self.flip_prob))
- image_0, image_1 = flip(crop(torch.cat((image_0, image_1)))).chunk(2)
-
- return dict(edited=image_1, edit=dict(c_concat=image_0, c_crossattn=prompt))
-
-
diff --git a/spaces/Kayson/InstructDiffusion/scripts/download_pretrained_sd.sh b/spaces/Kayson/InstructDiffusion/scripts/download_pretrained_sd.sh
deleted file mode 100644
index 189105fecca79403ebb6439368e65dc00b6321ab..0000000000000000000000000000000000000000
--- a/spaces/Kayson/InstructDiffusion/scripts/download_pretrained_sd.sh
+++ /dev/null
@@ -1,7 +0,0 @@
-#!/bin/bash
-
-SCRIPT_DIR=$( cd -- "$( dirname -- "${BASH_SOURCE[0]}" )" &> /dev/null && pwd )
-
-mkdir -p $SCRIPT_DIR/../stable_diffusion/models/ldm/stable-diffusion-v1
-curl -L https://huggingface.co/runwayml/stable-diffusion-v1-5/resolve/main/v1-5-pruned-emaonly.ckpt -o $SCRIPT_DIR/../stable_diffusion/models/ldm/stable-diffusion-v1/v1-5-pruned-emaonly.ckpt
-curl -L https://huggingface.co/stabilityai/sd-vae-ft-mse-original/resolve/main/vae-ft-mse-840000-ema-pruned.ckpt -o $SCRIPT_DIR/../stable_diffusion/models/ldm/stable-diffusion-v1/vae-ft-mse-840000-ema-pruned.ckpt
diff --git a/spaces/Kevin676/ChatGPT-with-Voice-Cloning-in-Chinese/mkgui/base/components/types.py b/spaces/Kevin676/ChatGPT-with-Voice-Cloning-in-Chinese/mkgui/base/components/types.py
deleted file mode 100644
index 125809a81b306ddeab4cf6ab0ba6abdbe8d0c4ed..0000000000000000000000000000000000000000
--- a/spaces/Kevin676/ChatGPT-with-Voice-Cloning-in-Chinese/mkgui/base/components/types.py
+++ /dev/null
@@ -1,46 +0,0 @@
-import base64
-from typing import Any, Dict, overload
-
-
-class FileContent(str):
- def as_bytes(self) -> bytes:
- return base64.b64decode(self, validate=True)
-
- def as_str(self) -> str:
- return self.as_bytes().decode()
-
- @classmethod
- def __modify_schema__(cls, field_schema: Dict[str, Any]) -> None:
- field_schema.update(format="byte")
-
- @classmethod
- def __get_validators__(cls) -> Any: # type: ignore
- yield cls.validate
-
- @classmethod
- def validate(cls, value: Any) -> "FileContent":
- if isinstance(value, FileContent):
- return value
- elif isinstance(value, str):
- return FileContent(value)
- elif isinstance(value, (bytes, bytearray, memoryview)):
- return FileContent(base64.b64encode(value).decode())
-        else:
-            raise TypeError(f"Unsupported type for FileContent: {type(value)}")
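-
-
-# A minimal usage sketch (assuming pydantic v1, whose validator hooks this
-# class implements); the model and field names below are illustrative only:
-#
-#   from pydantic import BaseModel
-#
-#   class Upload(BaseModel):
-#       content: FileContent
-#
-#   Upload(content=b"hello").content.as_str()  # -> "hello"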
-
-# # Temporarily unusable, because the browser provides no way to select a folder
-# class DirectoryContent(FileContent):
-# @classmethod
-# def __modify_schema__(cls, field_schema: Dict[str, Any]) -> None:
-# field_schema.update(format="path")
-
-# @classmethod
-# def validate(cls, value: Any) -> "DirectoryContent":
-# if isinstance(value, DirectoryContent):
-# return value
-# elif isinstance(value, str):
-# return DirectoryContent(value)
-# elif isinstance(value, (bytes, bytearray, memoryview)):
-# return DirectoryContent(base64.b64encode(value).decode())
-# else:
-# raise Exception("Wrong type")
diff --git a/spaces/Kirihasan/rvc-jjjo/infer_pack/commons.py b/spaces/Kirihasan/rvc-jjjo/infer_pack/commons.py
deleted file mode 100644
index 54470986f37825b35d90d7efa7437d1c26b87215..0000000000000000000000000000000000000000
--- a/spaces/Kirihasan/rvc-jjjo/infer_pack/commons.py
+++ /dev/null
@@ -1,166 +0,0 @@
-import math
-import numpy as np
-import torch
-from torch import nn
-from torch.nn import functional as F
-
-
-def init_weights(m, mean=0.0, std=0.01):
- classname = m.__class__.__name__
- if classname.find("Conv") != -1:
- m.weight.data.normal_(mean, std)
-
-
-def get_padding(kernel_size, dilation=1):
- return int((kernel_size * dilation - dilation) / 2)
-
-
-def convert_pad_shape(pad_shape):
- l = pad_shape[::-1]
- pad_shape = [item for sublist in l for item in sublist]
- return pad_shape
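-
-
-# Example (illustrative): convert_pad_shape([[0, 0], [0, 0], [1, 0]])
-# returns [1, 0, 0, 0, 0, 0]: the per-dimension pads reversed and
-# flattened into the order torch.nn.functional.pad expects.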
-
-
-def kl_divergence(m_p, logs_p, m_q, logs_q):
- """KL(P||Q)"""
- kl = (logs_q - logs_p) - 0.5
- kl += (
- 0.5 * (torch.exp(2.0 * logs_p) + ((m_p - m_q) ** 2)) * torch.exp(-2.0 * logs_q)
- )
- return kl
-
-
-def rand_gumbel(shape):
- """Sample from the Gumbel distribution, protect from overflows."""
- uniform_samples = torch.rand(shape) * 0.99998 + 0.00001
- return -torch.log(-torch.log(uniform_samples))
-
-
-def rand_gumbel_like(x):
- g = rand_gumbel(x.size()).to(dtype=x.dtype, device=x.device)
- return g
-
-
-def slice_segments(x, ids_str, segment_size=4):
- ret = torch.zeros_like(x[:, :, :segment_size])
- for i in range(x.size(0)):
- idx_str = ids_str[i]
- idx_end = idx_str + segment_size
- ret[i] = x[i, :, idx_str:idx_end]
- return ret
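-
-
-# Example (illustrative values): with ids_str = torch.tensor([2]) and
-# segment_size=4, the first batch element of the result is x[0, :, 2:6].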
-
-
-def slice_segments2(x, ids_str, segment_size=4):
- ret = torch.zeros_like(x[:, :segment_size])
- for i in range(x.size(0)):
- idx_str = ids_str[i]
- idx_end = idx_str + segment_size
- ret[i] = x[i, idx_str:idx_end]
- return ret
-
-
-def rand_slice_segments(x, x_lengths=None, segment_size=4):
- b, d, t = x.size()
- if x_lengths is None:
- x_lengths = t
- ids_str_max = x_lengths - segment_size + 1
- ids_str = (torch.rand([b]).to(device=x.device) * ids_str_max).to(dtype=torch.long)
- ret = slice_segments(x, ids_str, segment_size)
- return ret, ids_str
-
-
-def get_timing_signal_1d(length, channels, min_timescale=1.0, max_timescale=1.0e4):
- position = torch.arange(length, dtype=torch.float)
- num_timescales = channels // 2
- log_timescale_increment = math.log(float(max_timescale) / float(min_timescale)) / (
- num_timescales - 1
- )
- inv_timescales = min_timescale * torch.exp(
- torch.arange(num_timescales, dtype=torch.float) * -log_timescale_increment
- )
- scaled_time = position.unsqueeze(0) * inv_timescales.unsqueeze(1)
- signal = torch.cat([torch.sin(scaled_time), torch.cos(scaled_time)], 0)
- signal = F.pad(signal, [0, 0, 0, channels % 2])
- signal = signal.view(1, channels, length)
- return signal
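-
-
-# Shape sketch (illustrative): get_timing_signal_1d(100, 192) returns a
-# [1, 192, 100] tensor that broadcasts over the batch when added to x.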
-
-
-def add_timing_signal_1d(x, min_timescale=1.0, max_timescale=1.0e4):
- b, channels, length = x.size()
- signal = get_timing_signal_1d(length, channels, min_timescale, max_timescale)
- return x + signal.to(dtype=x.dtype, device=x.device)
-
-
-def cat_timing_signal_1d(x, min_timescale=1.0, max_timescale=1.0e4, axis=1):
- b, channels, length = x.size()
- signal = get_timing_signal_1d(length, channels, min_timescale, max_timescale)
- return torch.cat([x, signal.to(dtype=x.dtype, device=x.device)], axis)
-
-
-def subsequent_mask(length):
- mask = torch.tril(torch.ones(length, length)).unsqueeze(0).unsqueeze(0)
- return mask
-
-
-@torch.jit.script
-def fused_add_tanh_sigmoid_multiply(input_a, input_b, n_channels):
- n_channels_int = n_channels[0]
- in_act = input_a + input_b
- t_act = torch.tanh(in_act[:, :n_channels_int, :])
- s_act = torch.sigmoid(in_act[:, n_channels_int:, :])
- acts = t_act * s_act
- return acts
-
-
-def shift_1d(x):
- x = F.pad(x, convert_pad_shape([[0, 0], [0, 0], [1, 0]]))[:, :, :-1]
- return x
-
-
-def sequence_mask(length, max_length=None):
- if max_length is None:
- max_length = length.max()
- x = torch.arange(max_length, dtype=length.dtype, device=length.device)
- return x.unsqueeze(0) < length.unsqueeze(1)
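-
-
-# Example (illustrative): sequence_mask(torch.tensor([2, 4])) ->
-# [[True, True, False, False],
-#  [True, True, True,  True]]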
-
-
-def generate_path(duration, mask):
- """
- duration: [b, 1, t_x]
- mask: [b, 1, t_y, t_x]
- """
- device = duration.device
-
- b, _, t_y, t_x = mask.shape
- cum_duration = torch.cumsum(duration, -1)
-
- cum_duration_flat = cum_duration.view(b * t_x)
- path = sequence_mask(cum_duration_flat, t_y).to(mask.dtype)
- path = path.view(b, t_x, t_y)
- path = path - F.pad(path, convert_pad_shape([[0, 0], [1, 0], [0, 0]]))[:, :-1]
- path = path.unsqueeze(1).transpose(2, 3) * mask
- return path
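-
-
-# Sketch of the computation above (illustrative numbers): for durations
-# [2, 1] over t_x=2 tokens and t_y=3 frames, the cumulative sums are
-# [2, 3], so frames 0-1 align to token 0 and frame 2 to token 1: a hard
-# monotonic alignment path of shape [b, 1, t_y, t_x].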
-
-
-def clip_grad_value_(parameters, clip_value, norm_type=2):
- if isinstance(parameters, torch.Tensor):
- parameters = [parameters]
- parameters = list(filter(lambda p: p.grad is not None, parameters))
- norm_type = float(norm_type)
- if clip_value is not None:
- clip_value = float(clip_value)
-
- total_norm = 0
- for p in parameters:
- param_norm = p.grad.data.norm(norm_type)
- total_norm += param_norm.item() ** norm_type
- if clip_value is not None:
- p.grad.data.clamp_(min=-clip_value, max=clip_value)
- total_norm = total_norm ** (1.0 / norm_type)
- return total_norm
diff --git a/spaces/Lamai/LAMAIGPT/autogpt/app.py b/spaces/Lamai/LAMAIGPT/autogpt/app.py
deleted file mode 100644
index 58d9f7164ddfbb5019b072d789dc2fa6205dc9d3..0000000000000000000000000000000000000000
--- a/spaces/Lamai/LAMAIGPT/autogpt/app.py
+++ /dev/null
@@ -1,330 +0,0 @@
-""" Command and Control """
-import json
-from typing import Dict, List, NoReturn, Union
-
-from autogpt.agent.agent_manager import AgentManager
-from autogpt.commands.analyze_code import analyze_code
-from autogpt.commands.audio_text import read_audio_from_file
-from autogpt.commands.execute_code import (
- execute_python_file,
- execute_shell,
- execute_shell_popen,
-)
-from autogpt.commands.file_operations import (
- append_to_file,
- delete_file,
- download_file,
- read_file,
- search_files,
- write_to_file,
-)
-from autogpt.commands.git_operations import clone_repository
-from autogpt.commands.google_search import google_official_search, google_search
-from autogpt.commands.image_gen import generate_image
-from autogpt.commands.improve_code import improve_code
-from autogpt.commands.twitter import send_tweet
-from autogpt.commands.web_requests import scrape_links, scrape_text
-from autogpt.commands.web_selenium import browse_website
-from autogpt.commands.write_tests import write_tests
-from autogpt.config import Config
-from autogpt.json_utils.json_fix_llm import fix_and_parse_json
-from autogpt.memory import get_memory
-from autogpt.processing.text import summarize_text
-from autogpt.speech import say_text
-
-CFG = Config()
-AGENT_MANAGER = AgentManager()
-
-
-def is_valid_int(value: str) -> bool:
- """Check if the value is a valid integer
-
- Args:
- value (str): The value to check
-
- Returns:
- bool: True if the value is a valid integer, False otherwise
- """
- try:
- int(value)
- return True
- except ValueError:
- return False
-
-
-def get_command(response_json: Dict):
- """Parse the response and return the command name and arguments
-
- Args:
- response_json (json): The response from the AI
-
- Returns:
- tuple: The command name and arguments
-
- Raises:
- json.decoder.JSONDecodeError: If the response is not valid JSON
-
- Exception: If any other error occurs
- """
-    try:
-        if not isinstance(response_json, dict):
-            return "Error:", f"'response_json' object is not a dictionary: {response_json}"
-
-        if "command" not in response_json:
-            return "Error:", "Missing 'command' object in JSON"
-
- command = response_json["command"]
- if not isinstance(command, dict):
- return "Error:", "'command' object is not a dictionary"
-
- if "name" not in command:
- return "Error:", "Missing 'name' field in 'command' object"
-
- command_name = command["name"]
-
- # Use an empty dictionary if 'args' field is not present in 'command' object
- arguments = command.get("args", {})
-
- return command_name, arguments
- except json.decoder.JSONDecodeError:
- return "Error:", "Invalid JSON"
- # All other errors, return "Error: + error message"
- except Exception as e:
- return "Error:", str(e)
-
-
-def map_command_synonyms(command_name: str):
- """Takes the original command name given by the AI, and checks if the
- string matches a list of common/known hallucinations
- """
- synonyms = [
- ("write_file", "write_to_file"),
- ("create_file", "write_to_file"),
- ("search", "google"),
- ]
- for seen_command, actual_command_name in synonyms:
- if command_name == seen_command:
- return actual_command_name
- return command_name
-
-
-def execute_command(command_name: str, arguments):
- """Execute the command and return the result
-
- Args:
- command_name (str): The name of the command to execute
- arguments (dict): The arguments for the command
-
- Returns:
- str: The result of the command
- """
- try:
- command_name = map_command_synonyms(command_name.lower())
- if command_name == "google":
- # Check if the Google API key is set and use the official search method
- # If the API key is not set or has only whitespaces, use the unofficial
- # search method
- key = CFG.google_api_key
- if key and key.strip() and key != "your-google-api-key":
- google_result = google_official_search(arguments["input"])
- return google_result
- else:
- google_result = google_search(arguments["input"])
-
- # google_result can be a list or a string depending on the search results
- if isinstance(google_result, list):
- safe_message = [
- google_result_single.encode("utf-8", "ignore")
- for google_result_single in google_result
- ]
- else:
- safe_message = google_result.encode("utf-8", "ignore")
-
- return safe_message.decode("utf-8")
- elif command_name == "memory_add":
- memory = get_memory(CFG)
- return memory.add(arguments["string"])
- elif command_name == "start_agent":
- return start_agent(
- arguments["name"], arguments["task"], arguments["prompt"]
- )
- elif command_name == "message_agent":
- return message_agent(arguments["key"], arguments["message"])
- elif command_name == "list_agents":
- return list_agents()
- elif command_name == "delete_agent":
- return delete_agent(arguments["key"])
- elif command_name == "get_text_summary":
- return get_text_summary(arguments["url"], arguments["question"])
- elif command_name == "get_hyperlinks":
- return get_hyperlinks(arguments["url"])
- elif command_name == "clone_repository":
- return clone_repository(
- arguments["repository_url"], arguments["clone_path"]
- )
- elif command_name == "read_file":
- return read_file(arguments["file"])
- elif command_name == "write_to_file":
- return write_to_file(arguments["file"], arguments["text"])
- elif command_name == "append_to_file":
- return append_to_file(arguments["file"], arguments["text"])
- elif command_name == "delete_file":
- return delete_file(arguments["file"])
- elif command_name == "search_files":
- return search_files(arguments["directory"])
- elif command_name == "download_file":
- if not CFG.allow_downloads:
- return "Error: You do not have user authorization to download files locally."
- return download_file(arguments["url"], arguments["file"])
- elif command_name == "browse_website":
- return browse_website(arguments["url"], arguments["question"])
- # TODO: Change these to take in a file rather than pasted code, if
- # non-file is given, return instructions "Input should be a python
- # filepath, write your code to file and try again"
- elif command_name == "analyze_code":
- return analyze_code(arguments["code"])
- elif command_name == "improve_code":
- return improve_code(arguments["suggestions"], arguments["code"])
- elif command_name == "write_tests":
- return write_tests(arguments["code"], arguments.get("focus"))
- elif command_name == "execute_python_file": # Add this command
- return execute_python_file(arguments["file"])
- elif command_name == "execute_shell":
- if CFG.execute_local_commands:
- return execute_shell(arguments["command_line"])
- else:
- return (
- "You are not allowed to run local shell commands. To execute"
- " shell commands, EXECUTE_LOCAL_COMMANDS must be set to 'True' "
- "in your config. Do not attempt to bypass the restriction."
- )
- elif command_name == "execute_shell_popen":
- if CFG.execute_local_commands:
- return execute_shell_popen(arguments["command_line"])
- else:
- return (
- "You are not allowed to run local shell commands. To execute"
- " shell commands, EXECUTE_LOCAL_COMMANDS must be set to 'True' "
- "in your config. Do not attempt to bypass the restriction."
- )
- elif command_name == "read_audio_from_file":
- return read_audio_from_file(arguments["file"])
- elif command_name == "generate_image":
- return generate_image(arguments["prompt"])
- elif command_name == "send_tweet":
- return send_tweet(arguments["text"])
- elif command_name == "do_nothing":
- return "No action performed."
- elif command_name == "task_complete":
- shutdown()
- else:
- return (
- f"Unknown command '{command_name}'. Please refer to the 'COMMANDS'"
- " list for available commands and only respond in the specified JSON"
- " format."
- )
- except Exception as e:
- return f"Error: {str(e)}"
-
-
-def get_text_summary(url: str, question: str) -> str:
- """Return the results of a Google search
-
- Args:
- url (str): The url to scrape
- question (str): The question to summarize the text for
-
- Returns:
- str: The summary of the text
- """
- text = scrape_text(url)
- summary = summarize_text(url, text, question)
- return f""" "Result" : {summary}"""
-
-
-def get_hyperlinks(url: str) -> Union[str, List[str]]:
- """Return the results of a Google search
-
- Args:
- url (str): The url to scrape
-
- Returns:
- str or list: The hyperlinks on the page
- """
- return scrape_links(url)
-
-
-def shutdown() -> NoReturn:
- """Shut down the program"""
- print("Shutting down...")
- quit()
-
-
-def start_agent(name: str, task: str, prompt: str, model=CFG.fast_llm_model) -> str:
- """Start an agent with a given name, task, and prompt
-
- Args:
- name (str): The name of the agent
- task (str): The task of the agent
- prompt (str): The prompt for the agent
- model (str): The model to use for the agent
-
- Returns:
- str: The response of the agent
- """
- # Remove underscores from name
- voice_name = name.replace("_", " ")
-
- first_message = f"""You are {name}. Respond with: "Acknowledged"."""
- agent_intro = f"{voice_name} here, Reporting for duty!"
-
- # Create agent
- if CFG.speak_mode:
- say_text(agent_intro, 1)
- key, ack = AGENT_MANAGER.create_agent(task, first_message, model)
-
- if CFG.speak_mode:
- say_text(f"Hello {voice_name}. Your task is as follows. {task}.")
-
- # Assign task (prompt), get response
- agent_response = AGENT_MANAGER.message_agent(key, prompt)
-
- return f"Agent {name} created with key {key}. First response: {agent_response}"
-
-
-def message_agent(key: str, message: str) -> str:
- """Message an agent with a given key and message"""
- # Check if the key is a valid integer
- if is_valid_int(key):
- agent_response = AGENT_MANAGER.message_agent(int(key), message)
- else:
- return "Invalid key, must be an integer."
-
- # Speak response
- if CFG.speak_mode:
- say_text(agent_response, 1)
- return agent_response
-
-
-def list_agents():
- """List all agents
-
- Returns:
- str: A list of all agents
- """
- return "List of agents:\n" + "\n".join(
- [str(x[0]) + ": " + x[1] for x in AGENT_MANAGER.list_agents()]
- )
-
-
-def delete_agent(key: str) -> str:
- """Delete an agent with a given key
-
- Args:
- key (str): The key of the agent to delete
-
- Returns:
- str: A message indicating whether the agent was deleted or not
- """
- result = AGENT_MANAGER.delete_agent(key)
- return f"Agent {key} deleted." if result else f"Agent {key} does not exist."
diff --git a/spaces/LanguageBind/LanguageBind/v_cls/random_erasing.py b/spaces/LanguageBind/LanguageBind/v_cls/random_erasing.py
deleted file mode 100644
index 73c10742a51f1f38c1f665283747f2629c3fcb00..0000000000000000000000000000000000000000
--- a/spaces/LanguageBind/LanguageBind/v_cls/random_erasing.py
+++ /dev/null
@@ -1,175 +0,0 @@
-# Copyright (c) Facebook, Inc. and its affiliates. All Rights Reserved.
-"""
-This implementation is based on
-https://github.com/rwightman/pytorch-image-models/blob/master/timm/data/random_erasing.py
-published under an Apache License 2.0.
-
-COMMENT FROM ORIGINAL:
-Originally inspired by impl at https://github.com/zhunzhong07/Random-Erasing, Apache 2.0
-Copyright Zhun Zhong & Liang Zheng
-Hacked together by / Copyright 2020 Ross Wightman
-"""
-import math
-import random
-
-import torch
-
-
-def _get_pixels(per_pixel,
- rand_color,
- patch_size,
- dtype=torch.float32,
- device="cuda"):
- # NOTE I've seen CUDA illegal memory access errors being caused by the normal_()
- # paths, flip the order so normal is run on CPU if this becomes a problem
- # Issue has been fixed in master https://github.com/pytorch/pytorch/issues/19508
- if per_pixel:
- return torch.empty(patch_size, dtype=dtype, device=device).normal_()
- elif rand_color:
- return torch.empty((patch_size[0], 1, 1), dtype=dtype,
- device=device).normal_()
- else:
- return torch.zeros((patch_size[0], 1, 1), dtype=dtype, device=device)
-
-
-class RandomErasing:
- """Randomly selects a rectangle region in an image and erases its pixels.
- 'Random Erasing Data Augmentation' by Zhong et al.
- See https://arxiv.org/pdf/1708.04896.pdf
- This variant of RandomErasing is intended to be applied to either a batch
- or single image tensor after it has been normalized by dataset mean and std.
- Args:
- probability: Probability that the Random Erasing operation will be performed.
- min_area: Minimum percentage of erased area wrt input image area.
- max_area: Maximum percentage of erased area wrt input image area.
- min_aspect: Minimum aspect ratio of erased area.
- mode: pixel color mode, one of 'const', 'rand', or 'pixel'
- 'const' - erase block is constant color of 0 for all channels
- 'rand' - erase block is same per-channel random (normal) color
- 'pixel' - erase block is per-pixel random (normal) color
- max_count: maximum number of erasing blocks per image, area per box is scaled by count.
- per-image count is randomly chosen between 1 and this value.
- """
-
- def __init__(
- self,
- probability=0.5,
- min_area=0.02,
- max_area=1 / 3,
- min_aspect=0.3,
- max_aspect=None,
- mode="const",
- min_count=1,
- max_count=None,
- num_splits=0,
- device="cuda",
- cube=True,
- ):
- self.probability = probability
- self.min_area = min_area
- self.max_area = max_area
- max_aspect = max_aspect or 1 / min_aspect
- self.log_aspect_ratio = (math.log(min_aspect), math.log(max_aspect))
- self.min_count = min_count
- self.max_count = max_count or min_count
- self.num_splits = num_splits
- mode = mode.lower()
- self.rand_color = False
- self.per_pixel = False
- self.cube = cube
- if mode == "rand":
- self.rand_color = True # per block random normal
- elif mode == "pixel":
- self.per_pixel = True # per pixel random normal
- else:
- assert not mode or mode == "const"
- self.device = device
-
- def _erase(self, img, chan, img_h, img_w, dtype):
- if random.random() > self.probability:
- return
- area = img_h * img_w
- count = (
- self.min_count if self.min_count == self.max_count else
- random.randint(self.min_count, self.max_count))
- for _ in range(count):
- for _ in range(10):
- target_area = (
- random.uniform(self.min_area, self.max_area) * area /
- count)
- aspect_ratio = math.exp(random.uniform(*self.log_aspect_ratio))
- h = int(round(math.sqrt(target_area * aspect_ratio)))
- w = int(round(math.sqrt(target_area / aspect_ratio)))
- if w < img_w and h < img_h:
- top = random.randint(0, img_h - h)
- left = random.randint(0, img_w - w)
- img[:, top:top + h, left:left + w] = _get_pixels(
- self.per_pixel,
- self.rand_color,
- (chan, h, w),
- dtype=dtype,
- device=self.device,
- )
- break
-
- def _erase_cube(
- self,
- img,
- batch_start,
- batch_size,
- chan,
- img_h,
- img_w,
- dtype,
- ):
- if random.random() > self.probability:
- return
- area = img_h * img_w
- count = (
- self.min_count if self.min_count == self.max_count else
- random.randint(self.min_count, self.max_count))
- for _ in range(count):
- for _ in range(100):
- target_area = (
- random.uniform(self.min_area, self.max_area) * area /
- count)
- aspect_ratio = math.exp(random.uniform(*self.log_aspect_ratio))
- h = int(round(math.sqrt(target_area * aspect_ratio)))
- w = int(round(math.sqrt(target_area / aspect_ratio)))
- if w < img_w and h < img_h:
- top = random.randint(0, img_h - h)
- left = random.randint(0, img_w - w)
- for i in range(batch_start, batch_size):
- img_instance = img[i]
- img_instance[:, top:top + h,
- left:left + w] = _get_pixels(
- self.per_pixel,
- self.rand_color,
- (chan, h, w),
- dtype=dtype,
- device=self.device,
- )
- break
-
- def __call__(self, input):
- if len(input.size()) == 3:
- self._erase(input, *input.size(), input.dtype)
- else:
- batch_size, chan, img_h, img_w = input.size()
- # skip first slice of batch if num_splits is set (for clean portion of samples)
- batch_start = (
- batch_size // self.num_splits if self.num_splits > 1 else 0)
- if self.cube:
- self._erase_cube(
- input,
- batch_start,
- batch_size,
- chan,
- img_h,
- img_w,
- input.dtype,
- )
- else:
- for i in range(batch_start, batch_size):
- self._erase(input[i], chan, img_h, img_w, input.dtype)
- return input
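-
-
-# A minimal usage sketch (illustrative values, not from the original repo),
-# applied to a normalized batch of frames of shape [batch, channels, H, W]:
-#
-#   eraser = RandomErasing(probability=0.25, mode="pixel", device="cpu")
-#   frames = torch.randn(8, 3, 224, 224)
-#   frames = eraser(frames)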
diff --git a/spaces/LaynzKunz/AI-Cover-Gen-Web-Ui/src/download_models.py b/spaces/LaynzKunz/AI-Cover-Gen-Web-Ui/src/download_models.py
deleted file mode 100644
index 0df2477e4c465eb234bde7501127d2ce2b53f56e..0000000000000000000000000000000000000000
--- a/spaces/LaynzKunz/AI-Cover-Gen-Web-Ui/src/download_models.py
+++ /dev/null
@@ -1,31 +0,0 @@
-from pathlib import Path
-import requests
-
-MDX_DOWNLOAD_LINK = 'https://github.com/TRvlvr/model_repo/releases/download/all_public_uvr_models/'
-RVC_DOWNLOAD_LINK = 'https://huggingface.co/lj1995/VoiceConversionWebUI/resolve/main/'
-
-BASE_DIR = Path(__file__).resolve().parent.parent
-mdxnet_models_dir = BASE_DIR / 'mdxnet_models'
-rvc_models_dir = BASE_DIR / 'rvc_models'
-
-
-def dl_model(link, model_name, dir_name):
- with requests.get(f'{link}{model_name}') as r:
- r.raise_for_status()
- with open(dir_name / model_name, 'wb') as f:
- for chunk in r.iter_content(chunk_size=8192):
- f.write(chunk)
-
-
-if __name__ == '__main__':
- mdx_model_names = ['UVR-MDX-NET-Voc_FT.onnx', 'UVR_MDXNET_KARA_2.onnx', 'Reverb_HQ_By_FoxJoy.onnx']
- for model in mdx_model_names:
- print(f'Downloading {model}...')
- dl_model(MDX_DOWNLOAD_LINK, model, mdxnet_models_dir)
-
- rvc_model_names = ['hubert_base.pt', 'rmvpe.pt']
- for model in rvc_model_names:
- print(f'Downloading {model}...')
- dl_model(RVC_DOWNLOAD_LINK, model, rvc_models_dir)
-
- print('All models downloaded!')
diff --git a/spaces/LaynzKunz/Aesthetic_RVC_Inference_HF/tabs/resources.py b/spaces/LaynzKunz/Aesthetic_RVC_Inference_HF/tabs/resources.py
deleted file mode 100644
index 972934c630c35b6b7a7b975e52e0f125f5e6bc19..0000000000000000000000000000000000000000
--- a/spaces/LaynzKunz/Aesthetic_RVC_Inference_HF/tabs/resources.py
+++ /dev/null
@@ -1,1646 +0,0 @@
-import subprocess
-import os
-import sys
-import gdown
-import errno
-import shutil
-import yt_dlp
-import datetime
-import torch
-import glob
-import gradio as gr
-import traceback
-import lib.infer.infer_libs.uvr5_pack.mdx as mdx
-from lib.infer.modules.uvr5.mdxprocess import (
- get_model_list,
- id_to_ptm,
- prepare_mdx,
- run_mdx,
-)
-import requests
-import wget
-import ffmpeg
-import hashlib
-current_script_path = os.path.abspath(__file__)
-script_parent_directory = os.path.dirname(current_script_path)
-now_dir = os.path.dirname(script_parent_directory)
-sys.path.append(now_dir)
-import re
-from lib.infer.modules.vc.pipeline import Pipeline
-
-VC = Pipeline
-from lib.infer.infer_pack.models import (
- SynthesizerTrnMs256NSFsid,
- SynthesizerTrnMs256NSFsid_nono,
- SynthesizerTrnMs768NSFsid,
- SynthesizerTrnMs768NSFsid_nono,
-)
-
-from assets.configs.config import Config
-from lib.infer.modules.uvr5.mdxnet import MDXNetDereverb
-from lib.infer.modules.uvr5.preprocess import AudioPre, AudioPreDeEcho
-from assets.i18n.i18n import I18nAuto
-
-i18n = I18nAuto()
-from bs4 import BeautifulSoup
-from dotenv import load_dotenv
-
-load_dotenv()
-config = Config()
-
-weight_root = os.getenv("weight_root")
-weight_uvr5_root = os.getenv("weight_uvr5_root")
-index_root = os.getenv("index_root")
-audio_root = "assets/audios"
-names = [
- os.path.join(root, file)
- for root, _, files in os.walk(weight_root)
- for file in files
- if file.endswith((".pth", ".onnx"))
-]
-
-sup_audioext = {
- "wav",
- "mp3",
- "flac",
- "ogg",
- "opus",
- "m4a",
- "mp4",
- "aac",
- "alac",
- "wma",
- "aiff",
- "webm",
- "ac3",
-}
-audio_paths = [
- os.path.join(root, name)
- for root, _, files in os.walk(audio_root, topdown=False)
- for name in files
- if name.endswith(tuple(sup_audioext)) and root == audio_root
-]
-
-
-uvr5_names = [
- name.replace(".pth", "")
- for name in os.listdir(weight_uvr5_root)
- if name.endswith(".pth") or "onnx" in name
-]
-
-
-def calculate_md5(file_path):
- hash_md5 = hashlib.md5()
- with open(file_path, "rb") as f:
- for chunk in iter(lambda: f.read(4096), b""):
- hash_md5.update(chunk)
- return hash_md5.hexdigest()
-import unicodedata
-
-def format_title(title):
-    formatted_title = unicodedata.normalize('NFKD', title).encode('ascii', 'ignore').decode('utf-8')
-    formatted_title = re.sub(r'[\u2500-\u257F]+', '', formatted_title)
-    formatted_title = re.sub(r'[^\w\s-]', '', formatted_title)
-    formatted_title = re.sub(r'\s+', '_', formatted_title)
- return formatted_title
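-
-
-# Example (illustrative): format_title("Canción #1!") -> "Cancion_1"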
-
-
-def silentremove(filename):
- try:
- os.remove(filename)
- except OSError as e:
- if e.errno != errno.ENOENT:
- raise
-
-
-def get_md5(temp_folder):
- for root, subfolders, files in os.walk(temp_folder):
- for file in files:
- if (
- not file.startswith("G_")
- and not file.startswith("D_")
- and file.endswith(".pth")
- and not "_G_" in file
- and not "_D_" in file
- ):
- md5_hash = calculate_md5(os.path.join(root, file))
- return md5_hash
-
- return None
-
-
-def find_parent(search_dir, file_name):
- for dirpath, dirnames, filenames in os.walk(search_dir):
- if file_name in filenames:
- return os.path.abspath(dirpath)
- return None
-
-
-def find_folder_parent(search_dir, folder_name):
- for dirpath, dirnames, filenames in os.walk(search_dir):
- if folder_name in dirnames:
- return os.path.abspath(dirpath)
- return None
-
-file_path = find_folder_parent(now_dir, "assets")
-tmp = os.path.join(file_path, "temp")
-shutil.rmtree(tmp, ignore_errors=True)
-os.environ["temp"] = tmp
-
-def get_mediafire_download_link(url):
- response = requests.get(url)
- response.raise_for_status()
- soup = BeautifulSoup(response.text, 'html.parser')
- download_button = soup.find('a', {'class': 'input popsok', 'aria-label': 'Download file'})
- if download_button:
- download_link = download_button.get('href')
- return download_link
- else:
- return None
-
-def delete_large_files(directory_path, max_size_megabytes):
- for filename in os.listdir(directory_path):
- file_path = os.path.join(directory_path, filename)
- if os.path.isfile(file_path):
- size_in_bytes = os.path.getsize(file_path)
- size_in_megabytes = size_in_bytes / (1024 * 1024) # Convert bytes to megabytes
-
- if size_in_megabytes > max_size_megabytes:
- print("###################################")
- print(f"Deleting s*** {filename} (Size: {size_in_megabytes:.2f} MB)")
- os.remove(file_path)
- print("###################################")
-
-def download_from_url(url):
- file_path = find_folder_parent(now_dir, "assets")
- print(file_path)
- zips_path = os.path.join(file_path, "assets", "zips")
- print(zips_path)
- os.makedirs(zips_path, exist_ok=True)
- print(f"Limit download size in MB {os.getenv('MAX_DOWNLOAD_SIZE')}, duplicate the space for modify the limit")
-
- if url != "":
- print(i18n("Downloading the file: ") + f"{url}")
- if "drive.google.com" in url:
- if "file/d/" in url:
- file_id = url.split("file/d/")[1].split("/")[0]
- elif "id=" in url:
- file_id = url.split("id=")[1].split("&")[0]
- else:
- return None
-
- if file_id:
- os.chdir(zips_path)
- try:
- gdown.download(f"https://drive.google.com/uc?id={file_id}", quiet=False, fuzzy=True)
- except Exception as e:
- error_message = str(e)
- if "Too many users have viewed or downloaded this file recently" in error_message:
- os.chdir(file_path)
- return "too much use"
- elif "Cannot retrieve the public link of the file." in error_message:
- os.chdir(file_path)
- return "private link"
- else:
- print(error_message)
- os.chdir(file_path)
- return None
-
- elif "/blob/" in url or "/resolve/" in url:
- os.chdir(zips_path)
- if "/blob/" in url:
- url = url.replace("/blob/", "/resolve/")
-
- response = requests.get(url, stream=True)
- if response.status_code == 200:
- file_name = url.split("/")[-1]
- file_name = file_name.replace("%20", "_")
- total_size_in_bytes = int(response.headers.get('content-length', 0))
- block_size = 1024 # 1 Kibibyte
- progress_bar_length = 50
- progress = 0
- with open(os.path.join(zips_path, file_name), 'wb') as file:
- for data in response.iter_content(block_size):
- file.write(data)
- progress += len(data)
- progress_percent = int((progress / total_size_in_bytes) * 100)
- num_dots = int((progress / total_size_in_bytes) * progress_bar_length)
- progress_bar = "[" + "." * num_dots + " " * (progress_bar_length - num_dots) + "]"
- #print(f"{progress_percent}% {progress_bar} {progress}/{total_size_in_bytes} ", end="\r")
- if progress_percent == 100:
- print("\n")
- else:
- os.chdir(file_path)
- return None
- elif "mega.nz" in url:
- if "#!" in url:
- file_id = url.split("#!")[1].split("!")[0]
- elif "file/" in url:
- file_id = url.split("file/")[1].split("/")[0]
- else:
- return None
- if file_id:
- print("Mega.nz is unsupported due mega.py deprecation")
- elif "/tree/main" in url:
- response = requests.get(url)
- soup = BeautifulSoup(response.content, "html.parser")
- temp_url = ""
- for link in soup.find_all("a", href=True):
- if link["href"].endswith(".zip"):
- temp_url = link["href"]
- break
- if temp_url:
- url = temp_url
- url = url.replace("blob", "resolve")
- if "huggingface.co" not in url:
- url = "https://huggingface.co" + url
-
- wget.download(url)
- else:
- print("No .zip file found on the page.")
- elif "cdn.discordapp.com" in url:
- file = requests.get(url)
- os.chdir("./assets/zips")
- if file.status_code == 200:
- name = url.split("/")
- with open(
- os.path.join(name[-1]), "wb"
- ) as newfile:
- newfile.write(file.content)
- else:
- return None
- elif "pixeldrain.com" in url:
- try:
- file_id = url.split("pixeldrain.com/u/")[1]
- os.chdir(zips_path)
- print(file_id)
- response = requests.get(f"https://pixeldrain.com/api/file/{file_id}")
- if response.status_code == 200:
- file_name = (
- response.headers.get("Content-Disposition")
- .split("filename=")[-1]
- .strip('";')
- )
- os.makedirs(zips_path, exist_ok=True)
- with open(os.path.join(zips_path, file_name), "wb") as newfile:
- newfile.write(response.content)
- os.chdir(file_path)
- return "downloaded"
- else:
- os.chdir(file_path)
- return None
- except Exception as e:
- print(e)
- os.chdir(file_path)
- return None
- elif "mediafire.com" in url:
- download_link = get_mediafire_download_link(url)
- if download_link:
- os.chdir(zips_path)
- wget.download(download_link)
- else:
- return None
- # elif "www.weights.gg" in url:
- # #Pls weights creator dont fix this because yes. c:
- # url_parts = url.split("/")
- # weights_gg_index = url_parts.index("www.weights.gg")
- # if weights_gg_index != -1 and weights_gg_index < len(url_parts) - 1:
- # model_part = "/".join(url_parts[weights_gg_index + 1:])
- # if "models" in model_part:
- # model_part = model_part.split("models/")[-1]
- # print(model_part)
- # if model_part:
- # download_url = f"https://www.weights.gg/es/models/{model_part}"
- # response = requests.get(download_url)
- # if response.status_code == 200:
- # soup = BeautifulSoup(response.text, "html.parser")
- # button_link = soup.find("a", class_="bg-black text-white px-3 py-2 rounded-lg flex items-center gap-1")
- # if button_link:
- # download_link = button_link["href"]
- # result = download_from_url(download_link)
- # if result == "downloaded":
- # return "downloaded"
- # else:
- # return None
- # else:
- # return None
- # else:
- # return None
- # else:
- # return None
- # else:
- # return None
- # else:
- # return None
- else:
- try:
- os.chdir(zips_path)
- wget.download(url)
- except Exception as e:
- os.chdir(file_path)
- print(e)
- return None
-
-
-    # Replace dots in downloaded zip base names with underscores (keep the extension)
- for currentPath, _, zipFiles in os.walk(zips_path):
- for Files in zipFiles:
- filePart = Files.split(".")
- extensionFile = filePart[len(filePart) - 1]
- filePart.pop()
- nameFile = "_".join(filePart)
- realPath = os.path.join(currentPath, Files)
- os.rename(realPath, nameFile + "." + extensionFile)
-
- delete_large_files(zips_path, int(os.getenv("MAX_DOWNLOAD_SIZE")))
-
- os.chdir(file_path)
- print(i18n("Full download"))
- return "downloaded"
- else:
- return None
-
-
-class error_message(Exception):
- def __init__(self, mensaje):
- self.mensaje = mensaje
- super().__init__(mensaje)
-
-
-def get_vc(sid, to_return_protect0, to_return_protect1):
- global n_spk, tgt_sr, net_g, vc, cpt, version
- if sid == "" or sid == []:
- global hubert_model
- if hubert_model is not None:
- print("clean_empty_cache")
- del net_g, n_spk, vc, hubert_model, tgt_sr
-            hubert_model = net_g = n_spk = vc = tgt_sr = None
- if torch.cuda.is_available():
- torch.cuda.empty_cache()
- if_f0 = cpt.get("f0", 1)
- version = cpt.get("version", "v1")
- if version == "v1":
- if if_f0 == 1:
- net_g = SynthesizerTrnMs256NSFsid(
- *cpt["config"], is_half=config.is_half
- )
- else:
- net_g = SynthesizerTrnMs256NSFsid_nono(*cpt["config"])
- elif version == "v2":
- if if_f0 == 1:
- net_g = SynthesizerTrnMs768NSFsid(
- *cpt["config"], is_half=config.is_half
- )
- else:
- net_g = SynthesizerTrnMs768NSFsid_nono(*cpt["config"])
- del net_g, cpt
- if torch.cuda.is_available():
- torch.cuda.empty_cache()
- cpt = None
- return (
- {"visible": False, "__type__": "update"},
- {"visible": False, "__type__": "update"},
- {"visible": False, "__type__": "update"},
- )
- person = "%s/%s" % (weight_root, sid)
- print("loading %s" % person)
- cpt = torch.load(person, map_location="cpu")
- tgt_sr = cpt["config"][-1]
- cpt["config"][-3] = cpt["weight"]["emb_g.weight"].shape[0]
- if_f0 = cpt.get("f0", 1)
- if if_f0 == 0:
- to_return_protect0 = to_return_protect1 = {
- "visible": False,
- "value": 0.5,
- "__type__": "update",
- }
- else:
- to_return_protect0 = {
- "visible": True,
- "value": to_return_protect0,
- "__type__": "update",
- }
- to_return_protect1 = {
- "visible": True,
- "value": to_return_protect1,
- "__type__": "update",
- }
- version = cpt.get("version", "v1")
- if version == "v1":
- if if_f0 == 1:
- net_g = SynthesizerTrnMs256NSFsid(*cpt["config"], is_half=config.is_half)
- else:
- net_g = SynthesizerTrnMs256NSFsid_nono(*cpt["config"])
- elif version == "v2":
- if if_f0 == 1:
- net_g = SynthesizerTrnMs768NSFsid(*cpt["config"], is_half=config.is_half)
- else:
- net_g = SynthesizerTrnMs768NSFsid_nono(*cpt["config"])
- del net_g.enc_q
- print(net_g.load_state_dict(cpt["weight"], strict=False))
- net_g.eval().to(config.device)
- if config.is_half:
- net_g = net_g.half()
- else:
- net_g = net_g.float()
- vc = VC(tgt_sr, config)
- n_spk = cpt["config"][-3]
- return (
- {"visible": True, "maximum": n_spk, "__type__": "update"},
- to_return_protect0,
- to_return_protect1,
- )
-import zipfile
-from tqdm import tqdm
-
-def extract_and_show_progress(zipfile_path, unzips_path):
- try:
- with zipfile.ZipFile(zipfile_path, 'r') as zip_ref:
- total_files = len(zip_ref.infolist())
- with tqdm(total=total_files, unit='files', ncols= 100, colour= 'green') as pbar:
- for file_info in zip_ref.infolist():
- zip_ref.extract(file_info, unzips_path)
- pbar.update(1)
- return True
- except Exception as e:
- print(f"Error al descomprimir {zipfile_path}: {e}")
- return False
-
-
-def load_downloaded_model(url):
- parent_path = find_folder_parent(now_dir, "assets")
- try:
- infos = []
- zips_path = os.path.join(parent_path, "assets", "zips")
- unzips_path = os.path.join(parent_path, "assets", "unzips")
- weights_path = os.path.join(parent_path, "logs", "weights")
- logs_dir = ""
-
- if os.path.exists(zips_path):
- shutil.rmtree(zips_path)
- if os.path.exists(unzips_path):
- shutil.rmtree(unzips_path)
-
- os.mkdir(zips_path)
- os.mkdir(unzips_path)
-
- download_file = download_from_url(url)
- if not download_file:
- print(i18n("The file could not be downloaded."))
- infos.append(i18n("The file could not be downloaded."))
- yield "\n".join(infos)
- elif download_file == "downloaded":
- print(i18n("It has been downloaded successfully."))
- infos.append(i18n("It has been downloaded successfully."))
- yield "\n".join(infos)
- elif download_file == "too much use":
- raise Exception(
- i18n("Too many users have recently viewed or downloaded this file")
- )
- elif download_file == "private link":
- raise Exception(i18n("Cannot get file from this private link"))
-
- for filename in os.listdir(zips_path):
- if filename.endswith(".zip"):
- zipfile_path = os.path.join(zips_path, filename)
- print(i18n("Proceeding with the extraction..."))
- infos.append(i18n("Proceeding with the extraction..."))
- #shutil.unpack_archive(zipfile_path, unzips_path, "zip")
- model_name = os.path.basename(zipfile_path)
- logs_dir = os.path.join(
- parent_path,
- "logs",
- os.path.normpath(str(model_name).replace(".zip", "")),
- )
-
- yield "\n".join(infos)
- success = extract_and_show_progress(zipfile_path, unzips_path)
- if success:
- yield f"Extracción exitosa: {model_name}"
- else:
- yield f"Fallo en la extracción: {model_name}"
- yield "\n".join(infos)
- else:
- print(i18n("Unzip error."))
- infos.append(i18n("Unzip error."))
- yield "\n".join(infos)
- return ""
-
- index_file = False
- model_file = False
-
- for path, subdirs, files in os.walk(unzips_path):
- for item in files:
- item_path = os.path.join(path, item)
- if not "G_" in item and not "D_" in item and item.endswith(".pth"):
- model_file = True
- model_name = item.replace(".pth", "")
- logs_dir = os.path.join(parent_path, "logs", model_name)
- if os.path.exists(logs_dir):
- shutil.rmtree(logs_dir)
- os.mkdir(logs_dir)
- if not os.path.exists(weights_path):
- os.mkdir(weights_path)
- if os.path.exists(os.path.join(weights_path, item)):
- os.remove(os.path.join(weights_path, item))
- if os.path.exists(item_path):
- shutil.move(item_path, weights_path)
-
- if not model_file and not os.path.exists(logs_dir):
- os.mkdir(logs_dir)
- for path, subdirs, files in os.walk(unzips_path):
- for item in files:
- item_path = os.path.join(path, item)
- if item.startswith("added_") and item.endswith(".index"):
- index_file = True
- if os.path.exists(item_path):
- if os.path.exists(os.path.join(logs_dir, item)):
- os.remove(os.path.join(logs_dir, item))
- shutil.move(item_path, logs_dir)
- if item.startswith("total_fea.npy") or item.startswith("events."):
- if os.path.exists(item_path):
- if os.path.exists(os.path.join(logs_dir, item)):
- os.remove(os.path.join(logs_dir, item))
- shutil.move(item_path, logs_dir)
-
- result = ""
- if model_file:
- if index_file:
- print(i18n("The model works for inference, and has the .index file."))
- infos.append(
- "\n"
- + i18n("The model works for inference, and has the .index file.")
- )
- yield "\n".join(infos)
- else:
- print(
- i18n(
- "The model works for inference, but it doesn't have the .index file."
- )
- )
- infos.append(
- "\n"
- + i18n(
- "The model works for inference, but it doesn't have the .index file."
- )
- )
- yield "\n".join(infos)
-
- if not index_file and not model_file:
- print(i18n("No relevant file was found to upload."))
- infos.append(i18n("No relevant file was found to upload."))
- yield "\n".join(infos)
-
-
- if os.path.exists(zips_path):
- shutil.rmtree(zips_path)
- if os.path.exists(unzips_path):
- shutil.rmtree(unzips_path)
- os.chdir(parent_path)
- return result
- except Exception as e:
- os.chdir(parent_path)
- if "too much use" in str(e):
- print(i18n("Too many users have recently viewed or downloaded this file"))
- yield i18n("Too many users have recently viewed or downloaded this file")
- elif "private link" in str(e):
- print(i18n("Cannot get file from this private link"))
- yield i18n("Cannot get file from this private link")
- else:
- print(e)
- yield i18n("An error occurred downloading")
- finally:
- os.chdir(parent_path)
-
-
-def load_dowloaded_dataset(url):
- parent_path = find_folder_parent(now_dir, "assets")
- infos = []
- try:
- zips_path = os.path.join(parent_path, "assets", "zips")
- unzips_path = os.path.join(parent_path, "assets", "unzips")
- datasets_path = os.path.join(parent_path, "datasets")
-        audio_extensions = [
- "wav",
- "mp3",
- "flac",
- "ogg",
- "opus",
- "m4a",
- "mp4",
- "aac",
- "alac",
- "wma",
- "aiff",
- "webm",
- "ac3",
- ]
-
- if os.path.exists(zips_path):
- shutil.rmtree(zips_path)
- if os.path.exists(unzips_path):
- shutil.rmtree(unzips_path)
-
- if not os.path.exists(datasets_path):
- os.mkdir(datasets_path)
-
- os.mkdir(zips_path)
- os.mkdir(unzips_path)
-
- download_file = download_from_url(url)
-
- if not download_file:
- print(i18n("An error occurred downloading"))
- infos.append(i18n("An error occurred downloading"))
- yield "\n".join(infos)
- raise Exception(i18n("An error occurred downloading"))
- elif download_file == "downloaded":
- print(i18n("It has been downloaded successfully."))
- infos.append(i18n("It has been downloaded successfully."))
- yield "\n".join(infos)
- elif download_file == "too much use":
- raise Exception(
- i18n("Too many users have recently viewed or downloaded this file")
- )
- elif download_file == "private link":
- raise Exception(i18n("Cannot get file from this private link"))
-
- zip_path = os.listdir(zips_path)
- foldername = ""
- for file in zip_path:
- if file.endswith(".zip"):
- file_path = os.path.join(zips_path, file)
- print("....")
- foldername = file.replace(".zip", "").replace(" ", "").replace("-", "_")
- dataset_path = os.path.join(datasets_path, foldername)
- print(i18n("Proceeding with the extraction..."))
- infos.append(i18n("Proceeding with the extraction..."))
- yield "\n".join(infos)
- shutil.unpack_archive(file_path, unzips_path, "zip")
- if os.path.exists(dataset_path):
- shutil.rmtree(dataset_path)
-
- os.mkdir(dataset_path)
-
- for root, subfolders, songs in os.walk(unzips_path):
- for song in songs:
- song_path = os.path.join(root, song)
-                    if song.endswith(tuple(audio_extensions)):
- formatted_song_name = format_title(
- os.path.splitext(song)[0]
- )
- extension = os.path.splitext(song)[1]
- new_song_path = os.path.join(
- dataset_path, f"{formatted_song_name}{extension}"
- )
- shutil.move(song_path, new_song_path)
- else:
- print(i18n("Unzip error."))
- infos.append(i18n("Unzip error."))
- yield "\n".join(infos)
-
- if os.path.exists(zips_path):
- shutil.rmtree(zips_path)
- if os.path.exists(unzips_path):
- shutil.rmtree(unzips_path)
-
- print(i18n("The Dataset has been loaded successfully."))
- infos.append(i18n("The Dataset has been loaded successfully."))
- yield "\n".join(infos)
- except Exception as e:
- os.chdir(parent_path)
- if "too much use" in str(e):
- print(i18n("Too many users have recently viewed or downloaded this file"))
- yield i18n("Too many users have recently viewed or downloaded this file")
- elif "private link" in str(e):
- print(i18n("Cannot get file from this private link"))
- yield i18n("Cannot get file from this private link")
- else:
- print(e)
- yield i18n("An error occurred downloading")
- finally:
- os.chdir(parent_path)
-
-
-SAVE_ACTION_CONFIG = {
- i18n("Save all"): {
- 'destination_folder': "manual_backup",
-        'copy_files': True,  # "Save all": copy every file and folder
- 'include_weights': False
- },
- i18n("Save D and G"): {
- 'destination_folder': "manual_backup",
-        'copy_files': False,  # "Save D and G": copy only the specific files listed below
- 'files_to_copy': ["D_*.pth", "G_*.pth", "added_*.index"],
- 'include_weights': True,
- },
- i18n("Save voice"): {
- 'destination_folder': "finished",
-        'copy_files': False,  # "Save voice": copy only the specific files listed below
- 'files_to_copy': ["added_*.index"],
- 'include_weights': True,
- },
-}
-
-import fnmatch
-
-
-def save_model(modelname, save_action):
- parent_path = find_folder_parent(now_dir, "assets")
- zips_path = os.path.join(parent_path, "assets", "zips")
- dst = os.path.join(zips_path, f"{modelname}.zip")
- logs_path = os.path.join(parent_path, "logs", modelname)
- weights_path = os.path.join(logs_path, "weights")
- save_folder = parent_path
- infos = []
-
- try:
- if not os.path.exists(logs_path):
- raise Exception("No model found.")
-
- if not "content" in parent_path:
- save_folder = os.path.join(parent_path, "logs")
- else:
- save_folder = "/content/drive/MyDrive/RVC_Backup"
-
- infos.append(i18n("Save model"))
- yield "\n".join(infos)
-
- if not os.path.exists(save_folder):
- os.mkdir(save_folder)
- if not os.path.exists(os.path.join(save_folder, "manual_backup")):
- os.mkdir(os.path.join(save_folder, "manual_backup"))
- if not os.path.exists(os.path.join(save_folder, "finished")):
- os.mkdir(os.path.join(save_folder, "finished"))
-
- if os.path.exists(zips_path):
- shutil.rmtree(zips_path)
-
- os.mkdir(zips_path)
-
- if save_action == i18n("Choose the method"):
- raise Exception("No method chosen.")
-
- if save_action == i18n("Save all"):
- save_folder = os.path.join(save_folder, "manual_backup")
- elif save_action == i18n("Save D and G"):
- save_folder = os.path.join(save_folder, "manual_backup")
- elif save_action == i18n("Save voice"):
- save_folder = os.path.join(save_folder, "finished")
-
- # Obtain the configuration for the selected save action
- save_action_config = SAVE_ACTION_CONFIG.get(save_action)
-
- if save_action_config is None:
- raise Exception("Invalid save action.")
-
- # Check if we should copy all files
- if save_action_config['copy_files']:
- with zipfile.ZipFile(dst, 'w', zipfile.ZIP_DEFLATED) as zipf:
- for root, dirs, files in os.walk(logs_path):
- for file in files:
- file_path = os.path.join(root, file)
- zipf.write(file_path, os.path.relpath(file_path, logs_path))
- else:
- # Weight file management according to configuration
- if save_action_config['include_weights']:
- if not os.path.exists(weights_path):
- infos.append(i18n("Saved without inference model..."))
- else:
- pth_files = [file for file in os.listdir(weights_path) if file.endswith('.pth')]
- if not pth_files:
- infos.append(i18n("Saved without inference model..."))
- else:
- with zipfile.ZipFile(dst, 'w', zipfile.ZIP_DEFLATED) as zipf:
- skipped_files = set()
- for pth_file in pth_files:
- match = re.search(r'(.*)_s\d+.pth$', pth_file)
- if match:
- base_name = match.group(1)
- if base_name not in skipped_files:
- print(f'Skipping autosave epoch files for {base_name}.')
- skipped_files.add(base_name)
- continue
-
- print(f'Processing file: {pth_file}')
- zipf.write(os.path.join(weights_path, pth_file), arcname=os.path.basename(pth_file))
-
- yield "\n".join(infos)
- infos.append("\n" + i18n("This may take a few minutes, please wait..."))
- yield "\n".join(infos)
-
- # Create a zip file with only the necessary files in the ZIP file
- for pattern in save_action_config.get('files_to_copy', []):
- matching_files = glob.glob(os.path.join(logs_path, pattern))
- with zipfile.ZipFile(dst, 'a', zipfile.ZIP_DEFLATED) as zipf:
- for file_path in matching_files:
- zipf.write(file_path, os.path.basename(file_path))
-
- # Move the ZIP file created to the Save_Folder directory
- shutil.move(dst, os.path.join(save_folder, f"{modelname}.zip"))
-
- shutil.rmtree(zips_path)
- infos.append("\n" + i18n("Model saved successfully"))
- yield "\n".join(infos)
-
- except Exception as e:
- # Handle exceptions and print error messages
- error_message = str(e)
- print(f"Error: {error_message}")
- yield error_message
-
-def load_downloaded_backup(url):
- parent_path = find_folder_parent(now_dir, "assets")
- try:
- infos = []
- logs_folders = [
- "0_gt_wavs",
- "1_16k_wavs",
- "2a_f0",
- "2b-f0nsf",
- "3_feature256",
- "3_feature768",
- ]
- zips_path = os.path.join(parent_path, "assets", "zips")
- unzips_path = os.path.join(parent_path, "assets", "unzips")
- weights_path = os.path.join(parent_path, "assets", "logs", "weights")
- logs_dir = os.path.join(parent_path, "logs")
-
- if os.path.exists(zips_path):
- shutil.rmtree(zips_path)
- if os.path.exists(unzips_path):
- shutil.rmtree(unzips_path)
-
- os.mkdir(zips_path)
- os.mkdir(unzips_path)
-
- download_file = download_from_url(url)
- if not download_file:
- print(i18n("The file could not be downloaded."))
- infos.append(i18n("The file could not be downloaded."))
- yield "\n".join(infos)
- elif download_file == "downloaded":
- print(i18n("It has been downloaded successfully."))
- infos.append(i18n("It has been downloaded successfully."))
- yield "\n".join(infos)
- elif download_file == "too much use":
- raise Exception(
- i18n("Too many users have recently viewed or downloaded this file")
- )
- elif download_file == "private link":
- raise Exception(i18n("Cannot get file from this private link"))
-
- for filename in os.listdir(zips_path):
- if filename.endswith(".zip"):
- zipfile_path = os.path.join(zips_path, filename)
- zip_dir_name = os.path.splitext(filename)[0]
- unzip_dir = unzips_path
- print(i18n("Proceeding with the extraction..."))
- infos.append(i18n("Proceeding with the extraction..."))
- shutil.unpack_archive(zipfile_path, unzip_dir, "zip")
-
- if os.path.exists(os.path.join(unzip_dir, zip_dir_name)):
- shutil.move(os.path.join(unzip_dir, zip_dir_name), logs_dir)
- else:
- new_folder_path = os.path.join(logs_dir, zip_dir_name)
- os.mkdir(new_folder_path)
- for item_name in os.listdir(unzip_dir):
- item_path = os.path.join(unzip_dir, item_name)
-                        if os.path.isfile(item_path) or os.path.isdir(item_path):
-                            shutil.move(item_path, new_folder_path)
-
- yield "\n".join(infos)
- else:
- print(i18n("Unzip error."))
- infos.append(i18n("Unzip error."))
- yield "\n".join(infos)
-
- result = ""
-
-        for filename in os.listdir(unzips_path):
-            if filename.endswith(".zip"):
-                silentremove(os.path.join(unzips_path, filename))
-
- if os.path.exists(zips_path):
- shutil.rmtree(zips_path)
-        if os.path.exists(unzips_path):
-            shutil.rmtree(unzips_path)
- print(i18n("The Backup has been uploaded successfully."))
- infos.append("\n" + i18n("The Backup has been uploaded successfully."))
- yield "\n".join(infos)
- os.chdir(parent_path)
- return result
- except Exception as e:
- os.chdir(parent_path)
- if "too much use" in str(e):
- print(i18n("Too many users have recently viewed or downloaded this file"))
- yield i18n("Too many users have recently viewed or downloaded this file")
- elif "private link" in str(e):
- print(i18n("Cannot get file from this private link"))
- yield i18n("Cannot get file from this private link")
- else:
- print(e)
- yield i18n("An error occurred downloading")
- finally:
- os.chdir(parent_path)
-
-
-def save_to_wav(record_button):
-    if record_button is None:
-        return None
-    path_to_file = record_button
-    new_name = datetime.datetime.now().strftime("%Y-%m-%d_%H-%M-%S") + ".wav"
-    new_path = os.path.join("assets", "audios", new_name)
-    shutil.move(path_to_file, new_path)
-    return new_name
-
-
-def change_choices2():
- audio_paths = [
- os.path.join(root, name)
- for root, _, files in os.walk(audio_root, topdown=False)
- for name in files
- if name.endswith(tuple(sup_audioext)) and root == audio_root
- ]
- return {"choices": sorted(audio_paths), "__type__": "update"}, {
- "__type__": "update"
- }
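-
-# Note: dicts carrying "__type__": "update" are Gradio 3.x's convention for
-# partially updating a component (choices, value, visibility, ...) from a
-# callback; the same pattern appears in get_vc and update_model_choices below.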
-
-
-def uvr(
- input_url,
- output_path,
- model_name,
- inp_root,
- save_root_vocal,
- paths,
- save_root_ins,
- agg,
- format0,
- architecture,
-):
- carpeta_a_eliminar = "yt_downloads"
- if os.path.exists(carpeta_a_eliminar) and os.path.isdir(carpeta_a_eliminar):
- for archivo in os.listdir(carpeta_a_eliminar):
- ruta_archivo = os.path.join(carpeta_a_eliminar, archivo)
- if os.path.isfile(ruta_archivo):
- os.remove(ruta_archivo)
- elif os.path.isdir(ruta_archivo):
- shutil.rmtree(ruta_archivo)
-
- ydl_opts = {
- "no-windows-filenames": True,
- "restrict-filenames": True,
- "extract_audio": True,
- "format": "bestaudio",
- "quiet": True,
- "no-warnings": True,
- }
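-    # Note: yt-dlp's Python API generally expects underscore-style option names
-    # (e.g. "restrictfilenames", "no_warnings"); the hyphenated keys above mirror
-    # CLI flags and may be silently ignored by YoutubeDL -- worth verifying.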
-
-    formatted_title = "default_title"  # fallback used by the path construction below if the download fails
-    try:
- print(i18n("Downloading audio from the video..."))
- with yt_dlp.YoutubeDL(ydl_opts) as ydl:
- info_dict = ydl.extract_info(input_url, download=False)
- formatted_title = format_title(info_dict.get("title", "default_title"))
- formatted_outtmpl = output_path + "/" + formatted_title + ".wav"
- ydl_opts["outtmpl"] = formatted_outtmpl
- ydl = yt_dlp.YoutubeDL(ydl_opts)
- ydl.download([input_url])
- print(i18n("Audio downloaded!"))
- except Exception as error:
- print(i18n("An error occurred:"), error)
-
- actual_directory = os.path.dirname(__file__)
- actual_directory = os.path.abspath(os.path.join(actual_directory, ".."))
-
- vocal_directory = os.path.join(actual_directory, save_root_vocal)
- instrumental_directory = os.path.join(actual_directory, save_root_ins)
-
- vocal_formatted = f"vocal_{formatted_title}.wav.reformatted.wav_10.wav"
- instrumental_formatted = f"instrument_{formatted_title}.wav.reformatted.wav_10.wav"
-
- vocal_audio_path = os.path.join(vocal_directory, vocal_formatted)
- instrumental_audio_path = os.path.join(
- instrumental_directory, instrumental_formatted
- )
-
- vocal_formatted_mdx = f"{formatted_title}_vocal_.wav"
- instrumental_formatted_mdx = f"{formatted_title}_instrument_.wav"
-
- vocal_audio_path_mdx = os.path.join(vocal_directory, vocal_formatted_mdx)
- instrumental_audio_path_mdx = os.path.join(
- instrumental_directory, instrumental_formatted_mdx
- )
-
- if architecture == "VR":
- try:
- print(i18n("Starting audio conversion... (This might take a moment)"))
- inp_root = inp_root.strip(" ").strip('"').strip("\n").strip('"').strip(" ")
- save_root_vocal = (
- save_root_vocal.strip(" ").strip('"').strip("\n").strip('"').strip(" ")
- )
- save_root_ins = (
- save_root_ins.strip(" ").strip('"').strip("\n").strip('"').strip(" ")
- )
- usable_files = [
- os.path.join(inp_root, file)
- for file in os.listdir(inp_root)
- if file.endswith(tuple(sup_audioext))
- ]
- if model_name == "onnx_dereverb_By_FoxJoy":
- pre_fun = MDXNetDereverb(15, config.device)
- else:
- func = AudioPre if "DeEcho" not in model_name else AudioPreDeEcho
- pre_fun = func(
- agg=int(agg),
- model_path=os.path.join(
- os.getenv("weight_uvr5_root"), model_name + ".pth"
- ),
- device=config.device,
- is_half=config.is_half,
- )
- if inp_root != "":
- paths = usable_files
- else:
- paths = [path.name for path in paths]
- for path in paths:
- inp_path = os.path.join(inp_root, path)
- need_reformat = 1
- done = 0
- try:
- info = ffmpeg.probe(inp_path, cmd="ffprobe")
- if (
- info["streams"][0]["channels"] == 2
- and info["streams"][0]["sample_rate"] == "44100"
- ):
- need_reformat = 0
- pre_fun._path_audio_(
- inp_path, save_root_ins, save_root_vocal, format0
- )
- done = 1
- except:
- need_reformat = 1
- traceback.print_exc()
- if need_reformat == 1:
- tmp_path = "%s/%s.reformatted.wav" % (
- os.path.join(os.environ["temp"]),
- os.path.basename(inp_path),
- )
-                    os.system(
-                        'ffmpeg -i "%s" -vn -acodec pcm_s16le -ac 2 -ar 44100 "%s" -y'
-                        % (inp_path, tmp_path)
-                    )
- inp_path = tmp_path
- try:
- if done == 0:
- pre_fun.path_audio(
- inp_path, save_root_ins, save_root_vocal, format0
- )
- print("%s->Success" % (os.path.basename(inp_path)))
- except:
- try:
- if done == 0:
- pre_fun._path_audio_(
- inp_path, save_root_ins, save_root_vocal, format0
- )
- print("%s->Success" % (os.path.basename(inp_path)))
- except:
- print(
- "%s->%s"
- % (os.path.basename(inp_path), traceback.format_exc())
- )
- except:
- print(traceback.format_exc())
- finally:
- try:
- if model_name == "onnx_dereverb_By_FoxJoy":
- del pre_fun.pred.model
- del pre_fun.pred.model_
- else:
- del pre_fun.model
- del pre_fun
- return i18n("Finished"), vocal_audio_path, instrumental_audio_path
- except:
- traceback.print_exc()
- if torch.cuda.is_available():
- torch.cuda.empty_cache()
- print("Executed torch.cuda.empty_cache()")
- elif architecture == "MDX":
- try:
- print(i18n("Starting audio conversion... (This might take a moment)"))
- inp_root, save_root_vocal, save_root_ins = [
- x.strip(" ").strip('"').strip("\n").strip('"').strip(" ")
- for x in [inp_root, save_root_vocal, save_root_ins]
- ]
-
- usable_files = [
- os.path.join(inp_root, file)
- for file in os.listdir(inp_root)
- if file.endswith(tuple(sup_audioext))
- ]
- try:
-                if paths is not None:
- paths = [path.name for path in paths]
- else:
- paths = usable_files
-
- except:
- traceback.print_exc()
- paths = usable_files
- print(paths)
- invert = True
- denoise = True
- use_custom_parameter = True
- dim_f = 2048
- dim_t = 256
- n_fft = 7680
- use_custom_compensation = True
- compensation = 1.025
-            suffix = "vocal_"
-            suffix_invert = "instrument_"
-            print_settings = True
- onnx = id_to_ptm(model_name)
- compensation = (
- compensation
- if use_custom_compensation or use_custom_parameter
- else None
- )
- mdx_model = prepare_mdx(
- onnx,
- use_custom_parameter,
- dim_f,
- dim_t,
- n_fft,
- compensation=compensation,
- )
-
- for path in paths:
- # inp_path = os.path.join(inp_root, path)
- suffix_naming = suffix if use_custom_parameter else None
- diff_suffix_naming = suffix_invert if use_custom_parameter else None
- run_mdx(
- onnx,
- mdx_model,
- path,
- format0,
- diff=invert,
- suffix=suffix_naming,
- diff_suffix=diff_suffix_naming,
- denoise=denoise,
- )
-
- if print_settings:
- print()
- print("[MDX-Net_Colab settings used]")
- print(f"Model used: {onnx}")
- print(f"Model MD5: {mdx.MDX.get_hash(onnx)}")
-                print("Model parameters:")
- print(f" -dim_f: {mdx_model.dim_f}")
- print(f" -dim_t: {mdx_model.dim_t}")
- print(f" -n_fft: {mdx_model.n_fft}")
- print(f" -compensation: {mdx_model.compensation}")
- print()
- print("[Input file]")
- print("filename(s): ")
- for filename in paths:
- print(f" -{filename}")
- print(f"{os.path.basename(filename)}->Success")
- except:
- traceback.print_exc()
- finally:
- try:
- del mdx_model
- return (
- i18n("Finished"),
- vocal_audio_path_mdx,
- instrumental_audio_path_mdx,
- )
- except:
- traceback.print_exc()
-
- print("clean_empty_cache")
-
- if torch.cuda.is_available():
- torch.cuda.empty_cache()
-
-
-def load_downloaded_audio(url):
- parent_path = find_folder_parent(now_dir, "assets")
- try:
- infos = []
- audios_path = os.path.join(parent_path, "assets", "audios")
- zips_path = os.path.join(parent_path, "assets", "zips")
-
- if not os.path.exists(audios_path):
- os.mkdir(audios_path)
-
- download_file = download_from_url(url)
- if not download_file:
- print(i18n("The file could not be downloaded."))
- infos.append(i18n("The file could not be downloaded."))
- yield "\n".join(infos)
- elif download_file == "downloaded":
- print(i18n("It has been downloaded successfully."))
- infos.append(i18n("It has been downloaded successfully."))
- yield "\n".join(infos)
- elif download_file == "too much use":
- raise Exception(
- i18n("Too many users have recently viewed or downloaded this file")
- )
- elif download_file == "private link":
- raise Exception(i18n("Cannot get file from this private link"))
-
- for filename in os.listdir(zips_path):
- item_path = os.path.join(zips_path, filename)
- if item_path.split(".")[-1] in sup_audioext:
- if os.path.exists(item_path):
- shutil.move(item_path, audios_path)
-
- result = ""
- print(i18n("Audio files have been moved to the 'audios' folder."))
- infos.append(i18n("Audio files have been moved to the 'audios' folder."))
- yield "\n".join(infos)
-
- os.chdir(parent_path)
- return result
- except Exception as e:
- os.chdir(parent_path)
- if "too much use" in str(e):
- print(i18n("Too many users have recently viewed or downloaded this file"))
- yield i18n("Too many users have recently viewed or downloaded this file")
- elif "private link" in str(e):
- print(i18n("Cannot get file from this private link"))
- yield i18n("Cannot get file from this private link")
- else:
- print(e)
- yield i18n("An error occurred downloading")
- finally:
- os.chdir(parent_path)
-
-
-class error_message(Exception):
- def __init__(self, mensaje):
- self.mensaje = mensaje
- super().__init__(mensaje)
-
-
-def get_vc(sid, to_return_protect0, to_return_protect1):
- global n_spk, tgt_sr, net_g, vc, cpt, version
- if sid == "" or sid == []:
- global hubert_model
- if hubert_model is not None:
- print("clean_empty_cache")
- del net_g, n_spk, vc, hubert_model, tgt_sr
-            hubert_model = net_g = n_spk = vc = tgt_sr = None
- if torch.cuda.is_available():
- torch.cuda.empty_cache()
- if_f0 = cpt.get("f0", 1)
- version = cpt.get("version", "v1")
- if version == "v1":
- if if_f0 == 1:
- net_g = SynthesizerTrnMs256NSFsid(
- *cpt["config"], is_half=config.is_half
- )
- else:
- net_g = SynthesizerTrnMs256NSFsid_nono(*cpt["config"])
- elif version == "v2":
- if if_f0 == 1:
- net_g = SynthesizerTrnMs768NSFsid(
- *cpt["config"], is_half=config.is_half
- )
- else:
- net_g = SynthesizerTrnMs768NSFsid_nono(*cpt["config"])
- del net_g, cpt
- if torch.cuda.is_available():
- torch.cuda.empty_cache()
- cpt = None
- return (
- {"visible": False, "__type__": "update"},
- {"visible": False, "__type__": "update"},
- {"visible": False, "__type__": "update"},
- )
- person = "%s/%s" % (weight_root, sid)
- print("loading %s" % person)
- cpt = torch.load(person, map_location="cpu")
- tgt_sr = cpt["config"][-1]
- cpt["config"][-3] = cpt["weight"]["emb_g.weight"].shape[0]
- if_f0 = cpt.get("f0", 1)
- if if_f0 == 0:
- to_return_protect0 = to_return_protect1 = {
- "visible": False,
- "value": 0.5,
- "__type__": "update",
- }
- else:
- to_return_protect0 = {
- "visible": True,
- "value": to_return_protect0,
- "__type__": "update",
- }
- to_return_protect1 = {
- "visible": True,
- "value": to_return_protect1,
- "__type__": "update",
- }
- version = cpt.get("version", "v1")
- if version == "v1":
- if if_f0 == 1:
- net_g = SynthesizerTrnMs256NSFsid(*cpt["config"], is_half=config.is_half)
- else:
- net_g = SynthesizerTrnMs256NSFsid_nono(*cpt["config"])
- elif version == "v2":
- if if_f0 == 1:
- net_g = SynthesizerTrnMs768NSFsid(*cpt["config"], is_half=config.is_half)
- else:
- net_g = SynthesizerTrnMs768NSFsid_nono(*cpt["config"])
- del net_g.enc_q
- print(net_g.load_state_dict(cpt["weight"], strict=False))
- net_g.eval().to(config.device)
- if config.is_half:
- net_g = net_g.half()
- else:
- net_g = net_g.float()
- vc = VC(tgt_sr, config)
- n_spk = cpt["config"][-3]
- return (
- {"visible": True, "maximum": n_spk, "__type__": "update"},
- to_return_protect0,
- to_return_protect1,
- )
-
-
-def update_model_choices(select_value):
- model_ids = get_model_list()
- model_ids_list = list(model_ids)
- if select_value == "VR":
- return {"choices": uvr5_names, "__type__": "update"}
- elif select_value == "MDX":
- return {"choices": model_ids_list, "__type__": "update"}
-
-
-def save_drop_model_pth(dropbox):
- file_path = dropbox.name
- file_name = os.path.basename(file_path)
- target_path = os.path.join("logs", "weights", os.path.basename(file_path))
-
- if not file_name.endswith('.pth'):
- print(i18n("The file does not have the .pth extension. Please upload the correct file."))
- return None
-
- shutil.move(file_path, target_path)
- return target_path
-
-def extract_folder_name(file_name):
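-    # For a hypothetical file name like "added_IVF256_Flat_nprobe_1_MyVoice_v2.index",
-    # the regex below yields "1_MyVoice_v2" as the model folder name.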
- match = re.search(r'nprobe_(.*?)\.index', file_name)
-
-    if match:
-        return match.group(1)
-    return None
-
-def save_drop_model_index(dropbox):
- file_path = dropbox.name
- file_name = os.path.basename(file_path)
- folder_name = extract_folder_name(file_name)
-
- if not file_name.endswith('.index'):
- print(i18n("The file does not have the .index extension. Please upload the correct file."))
- return None
-
-    if folder_name is None:
-        print("Could not infer the folder name from the .index file name.")
-        return None
-
-    out_path = os.path.join("logs", folder_name)
-    os.makedirs(out_path, exist_ok=True)
-
- target_path = os.path.join(out_path, os.path.basename(file_path))
-
- shutil.move(file_path, target_path)
- return target_path
-
-
-def download_model():
- gr.Markdown(value="# " + i18n("Download Model"))
- gr.Markdown(value=i18n("It is used to download your inference models."))
- with gr.Row():
- model_url = gr.Textbox(label=i18n("Url:"))
- with gr.Row():
- download_model_status_bar = gr.Textbox(label=i18n("Status:"))
- with gr.Row():
- download_button = gr.Button(i18n("Download"))
- download_button.click(
- fn=load_downloaded_model,
- inputs=[model_url],
- outputs=[download_model_status_bar],
- )
- gr.Markdown(value=i18n("You can also drop your files to load your model."))
- with gr.Row():
- dropbox_pth = gr.File(label=i18n("Drag your .pth file here:"))
- dropbox_index = gr.File(label=i18n("Drag your .index file here:"))
-
- dropbox_pth.upload(
- fn=save_drop_model_pth,
- inputs=[dropbox_pth],
- )
- dropbox_index.upload(
- fn=save_drop_model_index,
- inputs=[dropbox_index],
- )
-
-
-def download_backup():
- gr.Markdown(value="# " + i18n("Download Backup"))
- gr.Markdown(value=i18n("It is used to download your training backups."))
- with gr.Row():
- model_url = gr.Textbox(label=i18n("Url:"))
- with gr.Row():
- download_model_status_bar = gr.Textbox(label=i18n("Status:"))
- with gr.Row():
- download_button = gr.Button(i18n("Download"))
- download_button.click(
- fn=load_downloaded_backup,
- inputs=[model_url],
- outputs=[download_model_status_bar],
- )
-
-
-def update_dataset_list(name):
- new_datasets = []
- file_path = find_folder_parent(now_dir, "assets")
- for foldername in os.listdir("./datasets"):
- if "." not in foldername:
- new_datasets.append(
- os.path.join(
- file_path, "datasets", foldername
- )
- )
- return gr.Dropdown.update(choices=new_datasets)
-
-
-def download_dataset(trainset_dir4):
- gr.Markdown(value="# " + i18n("Download Dataset"))
- gr.Markdown(
- value=i18n(
- "Download the dataset with the audios in a compatible format (.wav/.flac) to train your model."
- )
- )
- with gr.Row():
- dataset_url = gr.Textbox(label=i18n("Url:"))
- with gr.Row():
- load_dataset_status_bar = gr.Textbox(label=i18n("Status:"))
- with gr.Row():
- load_dataset_button = gr.Button(i18n("Download"))
- load_dataset_button.click(
- fn=load_dowloaded_dataset,
- inputs=[dataset_url],
- outputs=[load_dataset_status_bar],
- )
- load_dataset_status_bar.change(update_dataset_list, dataset_url, trainset_dir4)
-
-
-def download_audio():
- gr.Markdown(value="# " + i18n("Download Audio"))
- gr.Markdown(
- value=i18n(
- "Download audios of any format for use in inference (recommended for mobile users)."
- )
- )
- with gr.Row():
- audio_url = gr.Textbox(label=i18n("Url:"))
- with gr.Row():
- download_audio_status_bar = gr.Textbox(label=i18n("Status:"))
- with gr.Row():
- download_button2 = gr.Button(i18n("Download"))
- download_button2.click(
- fn=load_downloaded_audio,
- inputs=[audio_url],
- outputs=[download_audio_status_bar],
- )
-
-
-def youtube_separator():
- gr.Markdown(value="# " + i18n("Separate YouTube tracks"))
- gr.Markdown(
- value=i18n(
- "Download audio from a YouTube video and automatically separate the vocal and instrumental tracks"
- )
- )
- with gr.Row():
- input_url = gr.inputs.Textbox(label=i18n("Enter the YouTube link:"))
- output_path = gr.Textbox(
- label=i18n(
- "Enter the path of the audio folder to be processed (copy it from the address bar of the file manager):"
- ),
- value=os.path.abspath(os.getcwd()).replace("\\", "/") + "/yt_downloads",
- visible=False,
- )
- advanced_settings_checkbox = gr.Checkbox(
- value=False,
- label=i18n("Advanced Settings"),
- interactive=True,
- )
- with gr.Row(
- label=i18n("Advanced Settings"), visible=False, variant="compact"
- ) as advanced_settings:
- with gr.Column():
- model_select = gr.Radio(
- label=i18n("Model Architecture:"),
- choices=["VR", "MDX"],
- value="VR",
- interactive=True,
- )
- model_choose = gr.Dropdown(
- label=i18n(
- "Model: (Be aware that in some models the named vocal will be the instrumental)"
- ),
- choices=uvr5_names,
- value="HP5_only_main_vocal",
- )
- with gr.Row():
- agg = gr.Slider(
- minimum=0,
- maximum=20,
- step=1,
- label=i18n("Vocal Extraction Aggressive"),
- value=10,
- interactive=True,
- )
- with gr.Row():
- opt_vocal_root = gr.Textbox(
- label=i18n("Specify the output folder for vocals:"),
- value=((os.getcwd()).replace("\\", "/") + "/assets/audios"),
- )
- opt_ins_root = gr.Textbox(
- label=i18n("Specify the output folder for accompaniment:"),
- value=((os.getcwd()).replace("\\", "/") + "/assets/audios/audio-others"),
- )
- dir_wav_input = gr.Textbox(
- label=i18n("Enter the path of the audio folder to be processed:"),
- value=((os.getcwd()).replace("\\", "/") + "/yt_downloads"),
- visible=False,
- )
- format0 = gr.Radio(
- label=i18n("Export file format"),
- choices=["wav", "flac", "mp3", "m4a"],
- value="wav",
- visible=False,
- interactive=True,
- )
- wav_inputs = gr.File(
- file_count="multiple",
- label=i18n(
- "You can also input audio files in batches. Choose one of the two options. Priority is given to reading from the folder."
- ),
- visible=False,
- )
- model_select.change(
- fn=update_model_choices,
- inputs=model_select,
- outputs=model_choose,
- )
- with gr.Row():
- vc_output4 = gr.Textbox(label=i18n("Status:"))
- vc_output5 = gr.Audio(label=i18n("Vocal"), type="filepath")
- vc_output6 = gr.Audio(label=i18n("Instrumental"), type="filepath")
- with gr.Row():
- but2 = gr.Button(i18n("Download and Separate"))
- but2.click(
- uvr,
- [
- input_url,
- output_path,
- model_choose,
- dir_wav_input,
- opt_vocal_root,
- wav_inputs,
- opt_ins_root,
- agg,
- format0,
- model_select,
- ],
- [vc_output4, vc_output5, vc_output6],
- )
-
- def toggle_advanced_settings(checkbox):
- return {"visible": checkbox, "__type__": "update"}
-
- advanced_settings_checkbox.change(
- fn=toggle_advanced_settings,
- inputs=[advanced_settings_checkbox],
- outputs=[advanced_settings],
- )
-
-
diff --git "a/spaces/Liu-LAB/GPT-academic/crazy_functions/\350\201\224\347\275\221\347\232\204ChatGPT_bing\347\211\210.py" "b/spaces/Liu-LAB/GPT-academic/crazy_functions/\350\201\224\347\275\221\347\232\204ChatGPT_bing\347\211\210.py"
deleted file mode 100644
index db5adb7992f765db3e5b0e7ecea7e71e44dbe855..0000000000000000000000000000000000000000
--- "a/spaces/Liu-LAB/GPT-academic/crazy_functions/\350\201\224\347\275\221\347\232\204ChatGPT_bing\347\211\210.py"
+++ /dev/null
@@ -1,106 +0,0 @@
-from toolbox import CatchException, update_ui
-from .crazy_utils import request_gpt_model_in_new_thread_with_ui_alive, input_clipping
-import requests
-from bs4 import BeautifulSoup
-from request_llm.bridge_all import model_info
-
-
-def bing_search(query, proxies=None):
-    url = f"https://cn.bing.com/search?q={query}"
- headers = {'User-Agent': 'Mozilla/5.0 (Macintosh; Intel Mac OS X 10_15_7) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/94.0.4606.61 Safari/537.36'}
- response = requests.get(url, headers=headers, proxies=proxies)
- soup = BeautifulSoup(response.content, 'html.parser')
- results = []
- for g in soup.find_all('li', class_='b_algo'):
- anchors = g.find_all('a')
- if anchors:
- link = anchors[0]['href']
- if not link.startswith('http'):
- continue
- title = g.find('h2').text
- item = {'title': title, 'link': link}
- results.append(item)
-
- for r in results:
- print(r['link'])
- return results
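-
-# Each result is a dict of the form {'title': ..., 'link': ...}; the caller in
-# this module only uses the 'link' field when visiting each page.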
-
-
-def scrape_text(url, proxies) -> str:
- """Scrape text from a webpage
-
- Args:
- url (str): The URL to scrape text from
-
- Returns:
- str: The scraped text
- """
- headers = {
- 'User-Agent': 'Mozilla/5.0 (Macintosh; Intel Mac OS X 10_15_7) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/94.0.4606.61 Safari/537.36',
- 'Content-Type': 'text/plain',
- }
- try:
- response = requests.get(url, headers=headers, proxies=proxies, timeout=8)
-        if response.encoding == "ISO-8859-1":
-            # requests often mislabels pages as ISO-8859-1; fall back to the detected encoding
-            response.encoding = response.apparent_encoding
-    except Exception:
- return "无法连接到该网页"
- soup = BeautifulSoup(response.text, "html.parser")
- for script in soup(["script", "style"]):
- script.extract()
- text = soup.get_text()
- lines = (line.strip() for line in text.splitlines())
- chunks = (phrase.strip() for line in lines for phrase in line.split(" "))
- text = "\n".join(chunk for chunk in chunks if chunk)
- return text
-
-@CatchException
-def 连接bing搜索回答问题(txt, llm_kwargs, plugin_kwargs, chatbot, history, system_prompt, web_port):
- """
-    txt             text the user typed into the input box, e.g. a passage to translate, or a path to files awaiting processing
-    llm_kwargs      GPT model parameters such as temperature and top_p; usually passed through unchanged
-    plugin_kwargs   parameters for the plugin model; currently unused
-    chatbot         handle of the chat display box, used to show output to the user
-    history         chat history, i.e. the context so far
-    system_prompt   silent system prompt for GPT
-    web_port        port the application is currently running on
- """
-    history = []  # clear the history to avoid input overflow
- chatbot.append((f"请结合互联网信息回答以下问题:{txt}",
- "[Local Message] 请注意,您正在调用一个[函数插件]的模板,该模板可以实现ChatGPT联网信息综合。该函数面向希望实现更多有趣功能的开发者,它可以作为创建新功能函数的模板。您若希望分享新的功能模组,请不吝PR!"))
-    yield from update_ui(chatbot=chatbot, history=history)  # refresh the UI; requesting GPT takes a while, so update the interface promptly first
-
-    # ------------- < Step 1: scrape the search engine results > -------------
- from toolbox import get_conf
- proxies, = get_conf('proxies')
- urls = bing_search(txt, proxies)
- history = []
- if len(urls) == 0:
- chatbot.append((f"结论:{txt}",
- "[Local Message] 受到bing限制,无法从bing获取信息!"))
-        yield from update_ui(chatbot=chatbot, history=history)  # refresh the UI
- return
-    # ------------- < Step 2: visit the web pages one by one > -------------
-    max_search_result = 8  # maximum number of web pages to include
- for index, url in enumerate(urls[:max_search_result]):
- res = scrape_text(url['link'], proxies)
- history.extend([f"第{index}份搜索结果:", res])
- chatbot.append([f"第{index}份搜索结果:", res[:500]+"......"])
-        yield from update_ui(chatbot=chatbot, history=history)  # refresh the UI
-
-    # ------------- < Step 3: synthesize the answer with ChatGPT > -------------
- i_say = f"从以上搜索结果中抽取信息,然后回答问题:{txt}"
-    i_say, history = input_clipping(  # clip the input, starting from the longest entries, to avoid exceeding the token limit
- inputs=i_say,
- history=history,
- max_token_limit=model_info[llm_kwargs['llm_model']]['max_token']*3//4
- )
- gpt_say = yield from request_gpt_model_in_new_thread_with_ui_alive(
- inputs=i_say, inputs_show_user=i_say,
- llm_kwargs=llm_kwargs, chatbot=chatbot, history=history,
- sys_prompt="请从给定的若干条搜索结果中抽取信息,对最相关的两个搜索结果进行总结,然后回答问题。"
- )
- chatbot[-1] = (i_say, gpt_say)
-    history.append(i_say); history.append(gpt_say)
-    yield from update_ui(chatbot=chatbot, history=history)  # refresh the UI
-
diff --git a/spaces/Loren/Streamlit_OCR_comparator/configs/textrecog/abinet/abinet_academic.py b/spaces/Loren/Streamlit_OCR_comparator/configs/textrecog/abinet/abinet_academic.py
deleted file mode 100644
index 4abb87a6ee576a6c8a299d30baf4fee2ae56a1bf..0000000000000000000000000000000000000000
--- a/spaces/Loren/Streamlit_OCR_comparator/configs/textrecog/abinet/abinet_academic.py
+++ /dev/null
@@ -1,35 +0,0 @@
-_base_ = [
- '../../_base_/default_runtime.py',
- '../../_base_/schedules/schedule_adam_step_20e.py',
- '../../_base_/recog_pipelines/abinet_pipeline.py',
- '../../_base_/recog_models/abinet.py',
- # '../../_base_/recog_datasets/ST_MJ_alphanumeric_train.py',
- '../../_base_/recog_datasets/toy_data.py'
- # '../../_base_/recog_datasets/academic_test.py'
-]
-
-train_list = {{_base_.train_list}}
-test_list = {{_base_.test_list}}
-
-train_pipeline = {{_base_.train_pipeline}}
-test_pipeline = {{_base_.test_pipeline}}
-
-data = dict(
- samples_per_gpu=192,
- workers_per_gpu=8,
- val_dataloader=dict(samples_per_gpu=1),
- test_dataloader=dict(samples_per_gpu=1),
- train=dict(
- type='UniformConcatDataset',
- datasets=train_list,
- pipeline=train_pipeline),
- val=dict(
- type='UniformConcatDataset',
- datasets=test_list,
- pipeline=test_pipeline),
- test=dict(
- type='UniformConcatDataset',
- datasets=test_list,
- pipeline=test_pipeline))
-
-evaluation = dict(interval=1, metric='acc')
diff --git a/spaces/LucasCodeBreak/MusicGen/tests/modules/test_rope.py b/spaces/LucasCodeBreak/MusicGen/tests/modules/test_rope.py
deleted file mode 100644
index 067c6f067acbf27fb0fef5c2b812c22474c4fcd0..0000000000000000000000000000000000000000
--- a/spaces/LucasCodeBreak/MusicGen/tests/modules/test_rope.py
+++ /dev/null
@@ -1,168 +0,0 @@
-# Copyright (c) Meta Platforms, Inc. and affiliates.
-# All rights reserved.
-#
-# This source code is licensed under the license found in the
-# LICENSE file in the root directory of this source tree.
-
-import torch
-
-from audiocraft.modules.rope import RotaryEmbedding
-from audiocraft.modules.transformer import StreamingTransformer, set_efficient_attention_backend
-
-
-def test_rope():
- set_efficient_attention_backend('xformers')
- B, T, H, C = 8, 75, 16, 128
-
- rope = RotaryEmbedding(dim=C)
- xq = torch.rand((B, T, H, C))
- xk = torch.rand((B, T, H, C))
- xq_out, xk_out = rope.rotate_qk(xq, xk, start=7)
-
- assert list(xq_out.shape) == [B, T, H, C]
- assert list(xk_out.shape) == [B, T, H, C]
-
-
-def test_rope_io_dtypes():
- set_efficient_attention_backend('xformers')
- B, T, H, C = 8, 75, 16, 128
-
- rope_32 = RotaryEmbedding(dim=C, dtype=torch.float32)
- rope_64 = RotaryEmbedding(dim=C, dtype=torch.float64)
-
- # Test bfloat16 inputs w/ both 32 and 64 precision rope.
- xq_16 = torch.rand((B, T, H, C)).to(torch.bfloat16)
- xk_16 = torch.rand((B, T, H, C)).to(torch.bfloat16)
- xq_out, xk_out = rope_32.rotate_qk(xq_16, xk_16)
- assert xq_out.dtype == torch.bfloat16
- xq_out, xk_out = rope_64.rotate_qk(xq_16, xk_16)
- assert xq_out.dtype == torch.bfloat16
-
- # Test float32 inputs w/ both 32 and 64 precision rope.
- xq_32 = torch.rand((B, T, H, C)).to(torch.float32)
- xk_32 = torch.rand((B, T, H, C)).to(torch.float32)
- xq_out, xk_out = rope_32.rotate_qk(xq_32, xk_32)
- assert xq_out.dtype == torch.float32
- xq_out, xk_out = rope_64.rotate_qk(xq_32, xk_32)
- assert xq_out.dtype == torch.float32
-
-
-def test_transformer_with_rope():
- set_efficient_attention_backend('xformers')
- torch.manual_seed(1234)
- for pos in ['rope', 'sin_rope']:
- tr = StreamingTransformer(
- 16, 4, 2, custom=True, dropout=0., layer_scale=0.1,
- positional_embedding=pos)
- tr.eval()
- steps = 12
- x = torch.randn(3, steps, 16)
-
- out = tr(x)
- assert list(out.shape) == list(x.shape)
-
-
-@torch.no_grad()
-def test_rope_streaming():
- set_efficient_attention_backend('xformers')
- torch.manual_seed(1234)
- tr = StreamingTransformer(
- 16, 4, 2, causal=True, dropout=0.,
- custom=True, positional_embedding='rope')
- tr.eval()
- steps = 12
- x = torch.randn(3, steps, 16)
-
- ref = tr(x)
-
- with tr.streaming():
- outs = []
- frame_sizes = [1] * steps
-
- for frame_size in frame_sizes:
- frame = x[:, :frame_size]
- x = x[:, frame_size:]
- outs.append(tr(frame))
-
- out = torch.cat(outs, dim=1)
- assert list(out.shape) == [3, steps, 16]
- delta = torch.norm(out - ref) / torch.norm(out)
- assert delta < 1e-6, delta
-
-
-@torch.no_grad()
-def test_rope_streaming_past_context():
- set_efficient_attention_backend('xformers')
- torch.manual_seed(1234)
-
- for context in [None, 10]:
- tr = StreamingTransformer(
- 16, 4, 1 if context else 2,
- causal=True, past_context=context, custom=True,
- dropout=0., positional_embedding='rope')
- tr.eval()
-
- steps = 20
- x = torch.randn(3, steps, 16)
- ref = tr(x)
-
- with tr.streaming():
- outs = []
- frame_sizes = [1] * steps
-
- for frame_size in frame_sizes:
- frame = x[:, :frame_size]
- x = x[:, frame_size:]
- outs.append(tr(frame))
-
- out = torch.cat(outs, dim=1)
- assert list(out.shape) == [3, steps, 16]
- delta = torch.norm(out - ref) / torch.norm(out)
- assert delta < 1e-6, delta
-
-
-def test_rope_memory_efficient():
- set_efficient_attention_backend('xformers')
- torch.manual_seed(1234)
- tr = StreamingTransformer(
- 16, 4, 2, custom=True, dropout=0., layer_scale=0.1,
- positional_embedding='rope')
- tr_mem_efficient = StreamingTransformer(
- 16, 4, 2, dropout=0., memory_efficient=True, layer_scale=0.1,
- positional_embedding='rope')
- tr_mem_efficient.load_state_dict(tr.state_dict())
- tr.eval()
- steps = 12
- x = torch.randn(3, steps, 16)
-
- with torch.no_grad():
- y = tr(x)
- y2 = tr_mem_efficient(x)
- # Check at float precision b/c this is the rope default.
- assert torch.allclose(y, y2, atol=1e-7), (y - y2).norm()
-
-
-def test_rope_with_xpos():
- set_efficient_attention_backend('xformers')
- B, T, H, C = 8, 75, 16, 128
-
- rope = RotaryEmbedding(dim=C, xpos=True)
- xq = torch.rand((B, T, H, C))
- xk = torch.rand((B, T, H, C))
- xq_out, xk_out = rope.rotate_qk(xq, xk, start=7)
-
- assert list(xq_out.shape) == [B, T, H, C]
- assert list(xk_out.shape) == [B, T, H, C]
-
-
-def test_positional_scale():
- set_efficient_attention_backend('xformers')
- B, T, H, C = 8, 75, 16, 128
-
- rope = RotaryEmbedding(dim=C, xpos=True, scale=0.0)
- xq = torch.rand((B, T, H, C))
- xk = torch.rand((B, T, H, C))
- xq_out, xk_out = rope.rotate_qk(xq, xk, start=7)
-
- assert torch.allclose(xq, xq_out)
- assert torch.allclose(xk, xk_out)
diff --git a/spaces/Mahiruoshi/BangDream-Bert-VITS2/text/chinese_bert.py b/spaces/Mahiruoshi/BangDream-Bert-VITS2/text/chinese_bert.py
deleted file mode 100644
index 8159425df4bf7e577008b22f44e84f3147fdce14..0000000000000000000000000000000000000000
--- a/spaces/Mahiruoshi/BangDream-Bert-VITS2/text/chinese_bert.py
+++ /dev/null
@@ -1,100 +0,0 @@
-import torch
-import sys
-from transformers import AutoTokenizer, AutoModelForMaskedLM
-
-tokenizer = AutoTokenizer.from_pretrained("./bert/chinese-roberta-wwm-ext-large")
-
-models = dict()
-
-
-def get_bert_feature(text, word2ph, device=None):
- if (
- sys.platform == "darwin"
- and torch.backends.mps.is_available()
- and device == "cpu"
- ):
- device = "mps"
- if not device:
- device = "cuda"
- if device not in models.keys():
- models[device] = AutoModelForMaskedLM.from_pretrained(
- "./bert/chinese-roberta-wwm-ext-large"
- ).to(device)
- with torch.no_grad():
- inputs = tokenizer(text, return_tensors="pt")
- for i in inputs:
- inputs[i] = inputs[i].to(device)
- res = models[device](**inputs, output_hidden_states=True)
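-        # hidden_states[-3:-2] keeps just the third-to-last layer (a one-element
-        # list), so the cat below effectively selects that single layer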
- res = torch.cat(res["hidden_states"][-3:-2], -1)[0].cpu()
-
- assert len(word2ph) == len(text) + 2
- word2phone = word2ph
- phone_level_feature = []
- for i in range(len(word2phone)):
- repeat_feature = res[i].repeat(word2phone[i], 1)
- phone_level_feature.append(repeat_feature)
-
- phone_level_feature = torch.cat(phone_level_feature, dim=0)
-
- return phone_level_feature.T
-
-
-if __name__ == "__main__":
-    word_level_feature = torch.rand(38, 1024)  # 38 words, each with a 1024-dim feature vector
- word2phone = [
- 1,
- 2,
- 1,
- 2,
- 2,
- 1,
- 2,
- 2,
- 1,
- 2,
- 2,
- 1,
- 2,
- 2,
- 2,
- 2,
- 2,
- 1,
- 1,
- 2,
- 2,
- 1,
- 2,
- 2,
- 2,
- 2,
- 1,
- 2,
- 2,
- 2,
- 2,
- 2,
- 1,
- 2,
- 2,
- 2,
- 2,
- 1,
- ]
-
-    # compute the total number of frames
- total_frames = sum(word2phone)
- print(word_level_feature.shape)
- print(word2phone)
- phone_level_feature = []
- for i in range(len(word2phone)):
- print(word_level_feature[i].shape)
-
-        # repeat each word's feature word2phone[i] times
- repeat_feature = word_level_feature[i].repeat(word2phone[i], 1)
- phone_level_feature.append(repeat_feature)
-
- phone_level_feature = torch.cat(phone_level_feature, dim=0)
-    print(phone_level_feature.shape)  # (total_frames, 1024)
diff --git a/spaces/Mahiruoshi/MyGO_VIts-bert/train_ms.py b/spaces/Mahiruoshi/MyGO_VIts-bert/train_ms.py
deleted file mode 100644
index d17da759ac5f25e865f69458280aa28db3b56e1d..0000000000000000000000000000000000000000
--- a/spaces/Mahiruoshi/MyGO_VIts-bert/train_ms.py
+++ /dev/null
@@ -1,598 +0,0 @@
-# flake8: noqa: E402
-
-import os
-import torch
-from torch.nn import functional as F
-from torch.utils.data import DataLoader
-from torch.utils.tensorboard import SummaryWriter
-import torch.distributed as dist
-from torch.nn.parallel import DistributedDataParallel as DDP
-from torch.cuda.amp import autocast, GradScaler
-from tqdm import tqdm
-import logging
-
-logging.getLogger("numba").setLevel(logging.WARNING)
-import commons
-import utils
-from data_utils import (
- TextAudioSpeakerLoader,
- TextAudioSpeakerCollate,
- DistributedBucketSampler,
-)
-from models import (
- SynthesizerTrn,
- MultiPeriodDiscriminator,
- DurationDiscriminator,
-)
-from losses import generator_loss, discriminator_loss, feature_loss, kl_loss
-from mel_processing import mel_spectrogram_torch, spec_to_mel_torch
-from text.symbols import symbols
-
-torch.backends.cuda.matmul.allow_tf32 = True
-torch.backends.cudnn.allow_tf32 = (
-    True  # If you encounter training problems, try disabling TF32.
-)
-torch.set_float32_matmul_precision("medium")
-torch.backends.cudnn.benchmark = True
-torch.backends.cuda.sdp_kernel("flash")
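-# Note: sdp_kernel is a context manager; the bare call above has no lasting
-# effect -- the enable_*_sdp toggles below are what actually select backends.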
-torch.backends.cuda.enable_flash_sdp(True)
-torch.backends.cuda.enable_mem_efficient_sdp(
- True
-) # Not available if torch version is lower than 2.0
-torch.backends.cuda.enable_math_sdp(True)
-global_step = 0
-
-os.environ["MASTER_ADDR"] = "127.0.0.1"
-os.environ["MASTER_PORT"] = "8880"
-os.environ["WORLD_SIZE"] = "1"
-os.environ["RANK"] = "0"
-
-def run():
-    dist.init_process_group(
-        backend="gloo",
-        init_method="env://",  # gloo is used instead of nccl here due to occasional training problems
-    )  # Use torchrun instead of mp.spawn
- rank = dist.get_rank()
- n_gpus = dist.get_world_size()
- hps = utils.get_hparams()
- torch.manual_seed(hps.train.seed)
- torch.cuda.set_device(rank)
- global global_step
- if rank == 0:
- logger = utils.get_logger(hps.model_dir)
- logger.info(hps)
- utils.check_git_hash(hps.model_dir)
- writer = SummaryWriter(log_dir=hps.model_dir)
- writer_eval = SummaryWriter(log_dir=os.path.join(hps.model_dir, "eval"))
- train_dataset = TextAudioSpeakerLoader(hps.data.training_files, hps.data)
- train_sampler = DistributedBucketSampler(
- train_dataset,
- hps.train.batch_size,
- [32, 300, 400, 500, 600, 700, 800, 900, 1000],
- num_replicas=n_gpus,
- rank=rank,
- shuffle=True,
- )
- collate_fn = TextAudioSpeakerCollate()
- train_loader = DataLoader(
- train_dataset,
- num_workers=16,
- shuffle=False,
- pin_memory=True,
- collate_fn=collate_fn,
- batch_sampler=train_sampler,
- persistent_workers=True,
- prefetch_factor=4,
- ) # DataLoader config could be adjusted.
- if rank == 0:
- eval_dataset = TextAudioSpeakerLoader(hps.data.validation_files, hps.data)
- eval_loader = DataLoader(
- eval_dataset,
- num_workers=0,
- shuffle=False,
- batch_size=1,
- pin_memory=True,
- drop_last=False,
- collate_fn=collate_fn,
- )
- if (
- "use_noise_scaled_mas" in hps.model.keys()
- and hps.model.use_noise_scaled_mas is True
- ):
- print("Using noise scaled MAS for VITS2")
- mas_noise_scale_initial = 0.01
- noise_scale_delta = 2e-6
- else:
- print("Using normal MAS for VITS1")
- mas_noise_scale_initial = 0.0
- noise_scale_delta = 0.0
- if (
- "use_duration_discriminator" in hps.model.keys()
- and hps.model.use_duration_discriminator is True
- ):
- print("Using duration discriminator for VITS2")
- net_dur_disc = DurationDiscriminator(
- hps.model.hidden_channels,
- hps.model.hidden_channels,
- 3,
- 0.1,
- gin_channels=hps.model.gin_channels if hps.data.n_speakers != 0 else 0,
-        ).cuda(rank)
-    else:
-        net_dur_disc = None  # no duration discriminator (VITS1 behavior)
- if (
- "use_spk_conditioned_encoder" in hps.model.keys()
- and hps.model.use_spk_conditioned_encoder is True
- ):
- if hps.data.n_speakers == 0:
- raise ValueError(
- "n_speakers must be > 0 when using spk conditioned encoder to train multi-speaker model"
- )
- else:
- print("Using normal encoder for VITS1")
-
- net_g = SynthesizerTrn(
- len(symbols),
- hps.data.filter_length // 2 + 1,
- hps.train.segment_size // hps.data.hop_length,
- n_speakers=hps.data.n_speakers,
- mas_noise_scale_initial=mas_noise_scale_initial,
- noise_scale_delta=noise_scale_delta,
- **hps.model,
- ).cuda(rank)
-
- net_d = MultiPeriodDiscriminator(hps.model.use_spectral_norm).cuda(rank)
- optim_g = torch.optim.AdamW(
- filter(lambda p: p.requires_grad, net_g.parameters()),
- hps.train.learning_rate,
- betas=hps.train.betas,
- eps=hps.train.eps,
- )
- optim_d = torch.optim.AdamW(
- net_d.parameters(),
- hps.train.learning_rate,
- betas=hps.train.betas,
- eps=hps.train.eps,
- )
- if net_dur_disc is not None:
- optim_dur_disc = torch.optim.AdamW(
- net_dur_disc.parameters(),
- hps.train.learning_rate,
- betas=hps.train.betas,
- eps=hps.train.eps,
- )
- else:
- optim_dur_disc = None
- net_g = DDP(net_g, device_ids=[rank], find_unused_parameters=True)
- net_d = DDP(net_d, device_ids=[rank], find_unused_parameters=True)
- if net_dur_disc is not None:
- net_dur_disc = DDP(net_dur_disc, device_ids=[rank], find_unused_parameters=True)
- try:
- if net_dur_disc is not None:
- _, _, dur_resume_lr, epoch_str = utils.load_checkpoint(
- utils.latest_checkpoint_path(hps.model_dir, "DUR_*.pth"),
- net_dur_disc,
- optim_dur_disc,
- skip_optimizer=hps.train.skip_optimizer
- if "skip_optimizer" in hps.train
- else True,
- )
- _, optim_g, g_resume_lr, epoch_str = utils.load_checkpoint(
- utils.latest_checkpoint_path(hps.model_dir, "G_*.pth"),
- net_g,
- optim_g,
- skip_optimizer=hps.train.skip_optimizer
- if "skip_optimizer" in hps.train
- else True,
- )
- _, optim_d, d_resume_lr, epoch_str = utils.load_checkpoint(
- utils.latest_checkpoint_path(hps.model_dir, "D_*.pth"),
- net_d,
- optim_d,
- skip_optimizer=hps.train.skip_optimizer
- if "skip_optimizer" in hps.train
- else True,
- )
- if not optim_g.param_groups[0].get("initial_lr"):
- optim_g.param_groups[0]["initial_lr"] = g_resume_lr
- if not optim_d.param_groups[0].get("initial_lr"):
- optim_d.param_groups[0]["initial_lr"] = d_resume_lr
-
- epoch_str = max(epoch_str, 1)
- global_step = (epoch_str - 1) * len(train_loader)
- except Exception as e:
- print(e)
- epoch_str = 1
- global_step = 0
-
- scheduler_g = torch.optim.lr_scheduler.ExponentialLR(
- optim_g, gamma=hps.train.lr_decay, last_epoch=epoch_str - 2
- )
- scheduler_d = torch.optim.lr_scheduler.ExponentialLR(
- optim_d, gamma=hps.train.lr_decay, last_epoch=epoch_str - 2
- )
- if net_dur_disc is not None:
- if not optim_dur_disc.param_groups[0].get("initial_lr"):
- optim_dur_disc.param_groups[0]["initial_lr"] = dur_resume_lr
- scheduler_dur_disc = torch.optim.lr_scheduler.ExponentialLR(
- optim_dur_disc, gamma=hps.train.lr_decay, last_epoch=epoch_str - 2
- )
- else:
- scheduler_dur_disc = None
- scaler = GradScaler(enabled=hps.train.fp16_run)
-
- for epoch in range(epoch_str, hps.train.epochs + 1):
- if rank == 0:
- train_and_evaluate(
- rank,
- epoch,
- hps,
- [net_g, net_d, net_dur_disc],
- [optim_g, optim_d, optim_dur_disc],
- [scheduler_g, scheduler_d, scheduler_dur_disc],
- scaler,
- [train_loader, eval_loader],
- logger,
- [writer, writer_eval],
- )
- else:
- train_and_evaluate(
- rank,
- epoch,
- hps,
- [net_g, net_d, net_dur_disc],
- [optim_g, optim_d, optim_dur_disc],
- [scheduler_g, scheduler_d, scheduler_dur_disc],
- scaler,
- [train_loader, None],
- None,
- None,
- )
- scheduler_g.step()
- scheduler_d.step()
- if net_dur_disc is not None:
- scheduler_dur_disc.step()
-
-
-def train_and_evaluate(
- rank, epoch, hps, nets, optims, schedulers, scaler, loaders, logger, writers
-):
- net_g, net_d, net_dur_disc = nets
- optim_g, optim_d, optim_dur_disc = optims
- scheduler_g, scheduler_d, scheduler_dur_disc = schedulers
- train_loader, eval_loader = loaders
- if writers is not None:
- writer, writer_eval = writers
-
- train_loader.batch_sampler.set_epoch(epoch)
- global global_step
-
- net_g.train()
- net_d.train()
- if net_dur_disc is not None:
- net_dur_disc.train()
- for batch_idx, (
- x,
- x_lengths,
- spec,
- spec_lengths,
- y,
- y_lengths,
- speakers,
- tone,
- language,
- bert,
- ja_bert,
- ) in tqdm(enumerate(train_loader)):
- if net_g.module.use_noise_scaled_mas:
- current_mas_noise_scale = (
- net_g.module.mas_noise_scale_initial
- - net_g.module.noise_scale_delta * global_step
- )
- net_g.module.current_mas_noise_scale = max(current_mas_noise_scale, 0.0)
- x, x_lengths = x.cuda(rank, non_blocking=True), x_lengths.cuda(
- rank, non_blocking=True
- )
- spec, spec_lengths = spec.cuda(rank, non_blocking=True), spec_lengths.cuda(
- rank, non_blocking=True
- )
- y, y_lengths = y.cuda(rank, non_blocking=True), y_lengths.cuda(
- rank, non_blocking=True
- )
- speakers = speakers.cuda(rank, non_blocking=True)
- tone = tone.cuda(rank, non_blocking=True)
- language = language.cuda(rank, non_blocking=True)
- bert = bert.cuda(rank, non_blocking=True)
- ja_bert = ja_bert.cuda(rank, non_blocking=True)
-
- with autocast(enabled=hps.train.fp16_run):
- (
- y_hat,
- l_length,
- attn,
- ids_slice,
- x_mask,
- z_mask,
- (z, z_p, m_p, logs_p, m_q, logs_q),
- (hidden_x, logw, logw_),
- ) = net_g(
- x,
- x_lengths,
- spec,
- spec_lengths,
- speakers,
- tone,
- language,
- bert,
- ja_bert,
- )
- mel = spec_to_mel_torch(
- spec,
- hps.data.filter_length,
- hps.data.n_mel_channels,
- hps.data.sampling_rate,
- hps.data.mel_fmin,
- hps.data.mel_fmax,
- )
- y_mel = commons.slice_segments(
- mel, ids_slice, hps.train.segment_size // hps.data.hop_length
- )
- y_hat_mel = mel_spectrogram_torch(
- y_hat.squeeze(1),
- hps.data.filter_length,
- hps.data.n_mel_channels,
- hps.data.sampling_rate,
- hps.data.hop_length,
- hps.data.win_length,
- hps.data.mel_fmin,
- hps.data.mel_fmax,
- )
-
- y = commons.slice_segments(
- y, ids_slice * hps.data.hop_length, hps.train.segment_size
- ) # slice
-
- # Discriminator
- y_d_hat_r, y_d_hat_g, _, _ = net_d(y, y_hat.detach())
- with autocast(enabled=False):
- loss_disc, losses_disc_r, losses_disc_g = discriminator_loss(
- y_d_hat_r, y_d_hat_g
- )
- loss_disc_all = loss_disc
- if net_dur_disc is not None:
- y_dur_hat_r, y_dur_hat_g = net_dur_disc(
- hidden_x.detach(), x_mask.detach(), logw.detach(), logw_.detach()
- )
- with autocast(enabled=False):
-                    # TODO: this should average using the mask; for now, just average everything
- (
- loss_dur_disc,
- losses_dur_disc_r,
- losses_dur_disc_g,
- ) = discriminator_loss(y_dur_hat_r, y_dur_hat_g)
- loss_dur_disc_all = loss_dur_disc
- optim_dur_disc.zero_grad()
- scaler.scale(loss_dur_disc_all).backward()
- scaler.unscale_(optim_dur_disc)
- commons.clip_grad_value_(net_dur_disc.parameters(), None)
- scaler.step(optim_dur_disc)
-
- optim_d.zero_grad()
- scaler.scale(loss_disc_all).backward()
- scaler.unscale_(optim_d)
- grad_norm_d = commons.clip_grad_value_(net_d.parameters(), None)
- scaler.step(optim_d)
-
- with autocast(enabled=hps.train.fp16_run):
- # Generator
- y_d_hat_r, y_d_hat_g, fmap_r, fmap_g = net_d(y, y_hat)
- if net_dur_disc is not None:
- y_dur_hat_r, y_dur_hat_g = net_dur_disc(hidden_x, x_mask, logw, logw_)
- with autocast(enabled=False):
- loss_dur = torch.sum(l_length.float())
- loss_mel = F.l1_loss(y_mel, y_hat_mel) * hps.train.c_mel
- loss_kl = kl_loss(z_p, logs_q, m_p, logs_p, z_mask) * hps.train.c_kl
-
- loss_fm = feature_loss(fmap_r, fmap_g)
- loss_gen, losses_gen = generator_loss(y_d_hat_g)
- loss_gen_all = loss_gen + loss_fm + loss_mel + loss_dur + loss_kl
- if net_dur_disc is not None:
- loss_dur_gen, losses_dur_gen = generator_loss(y_dur_hat_g)
- loss_gen_all += loss_dur_gen
- optim_g.zero_grad()
- scaler.scale(loss_gen_all).backward()
- scaler.unscale_(optim_g)
- grad_norm_g = commons.clip_grad_value_(net_g.parameters(), None)
- scaler.step(optim_g)
- scaler.update()
-
- if rank == 0:
- if global_step % hps.train.log_interval == 0:
- lr = optim_g.param_groups[0]["lr"]
- losses = [loss_disc, loss_gen, loss_fm, loss_mel, loss_dur, loss_kl]
- logger.info(
- "Train Epoch: {} [{:.0f}%]".format(
- epoch, 100.0 * batch_idx / len(train_loader)
- )
- )
- logger.info([x.item() for x in losses] + [global_step, lr])
-
- scalar_dict = {
- "loss/g/total": loss_gen_all,
- "loss/d/total": loss_disc_all,
- "learning_rate": lr,
- "grad_norm_d": grad_norm_d,
- "grad_norm_g": grad_norm_g,
- }
- scalar_dict.update(
- {
- "loss/g/fm": loss_fm,
- "loss/g/mel": loss_mel,
- "loss/g/dur": loss_dur,
- "loss/g/kl": loss_kl,
- }
- )
- scalar_dict.update(
- {"loss/g/{}".format(i): v for i, v in enumerate(losses_gen)}
- )
- scalar_dict.update(
- {"loss/d_r/{}".format(i): v for i, v in enumerate(losses_disc_r)}
- )
- scalar_dict.update(
- {"loss/d_g/{}".format(i): v for i, v in enumerate(losses_disc_g)}
- )
-
- image_dict = {
- "slice/mel_org": utils.plot_spectrogram_to_numpy(
- y_mel[0].data.cpu().numpy()
- ),
- "slice/mel_gen": utils.plot_spectrogram_to_numpy(
- y_hat_mel[0].data.cpu().numpy()
- ),
- "all/mel": utils.plot_spectrogram_to_numpy(
- mel[0].data.cpu().numpy()
- ),
- "all/attn": utils.plot_alignment_to_numpy(
- attn[0, 0].data.cpu().numpy()
- ),
- }
- utils.summarize(
- writer=writer,
- global_step=global_step,
- images=image_dict,
- scalars=scalar_dict,
- )
-
- if global_step % hps.train.eval_interval == 0:
- evaluate(hps, net_g, eval_loader, writer_eval)
- utils.save_checkpoint(
- net_g,
- optim_g,
- hps.train.learning_rate,
- epoch,
- os.path.join(hps.model_dir, "G_{}.pth".format(global_step)),
- )
- utils.save_checkpoint(
- net_d,
- optim_d,
- hps.train.learning_rate,
- epoch,
- os.path.join(hps.model_dir, "D_{}.pth".format(global_step)),
- )
- if net_dur_disc is not None:
- utils.save_checkpoint(
- net_dur_disc,
- optim_dur_disc,
- hps.train.learning_rate,
- epoch,
- os.path.join(hps.model_dir, "DUR_{}.pth".format(global_step)),
- )
- keep_ckpts = getattr(hps.train, "keep_ckpts", 5)
- if keep_ckpts > 0:
- utils.clean_checkpoints(
- path_to_models=hps.model_dir,
- n_ckpts_to_keep=keep_ckpts,
- sort_by_time=True,
- )
-
- global_step += 1
-
- if rank == 0:
- logger.info("====> Epoch: {}".format(epoch))
-
-
-def evaluate(hps, generator, eval_loader, writer_eval):
- generator.eval()
- image_dict = {}
- audio_dict = {}
- print("Evaluating ...")
- with torch.no_grad():
- for batch_idx, (
- x,
- x_lengths,
- spec,
- spec_lengths,
- y,
- y_lengths,
- speakers,
- tone,
- language,
- bert,
- ja_bert,
- ) in enumerate(eval_loader):
- x, x_lengths = x.cuda(), x_lengths.cuda()
- spec, spec_lengths = spec.cuda(), spec_lengths.cuda()
- y, y_lengths = y.cuda(), y_lengths.cuda()
- speakers = speakers.cuda()
- bert = bert.cuda()
- ja_bert = ja_bert.cuda()
- tone = tone.cuda()
- language = language.cuda()
- for use_sdp in [True, False]:
- y_hat, attn, mask, *_ = generator.module.infer(
- x,
- x_lengths,
- speakers,
- tone,
- language,
- bert,
- ja_bert,
- y=spec,
- max_len=1000,
- sdp_ratio=0.0 if not use_sdp else 1.0,
- )
- y_hat_lengths = mask.sum([1, 2]).long() * hps.data.hop_length
-
- mel = spec_to_mel_torch(
- spec,
- hps.data.filter_length,
- hps.data.n_mel_channels,
- hps.data.sampling_rate,
- hps.data.mel_fmin,
- hps.data.mel_fmax,
- )
- y_hat_mel = mel_spectrogram_torch(
- y_hat.squeeze(1).float(),
- hps.data.filter_length,
- hps.data.n_mel_channels,
- hps.data.sampling_rate,
- hps.data.hop_length,
- hps.data.win_length,
- hps.data.mel_fmin,
- hps.data.mel_fmax,
- )
- image_dict.update(
- {
- f"gen/mel_{batch_idx}": utils.plot_spectrogram_to_numpy(
- y_hat_mel[0].cpu().numpy()
- )
- }
- )
- audio_dict.update(
- {
- f"gen/audio_{batch_idx}_{use_sdp}": y_hat[
- 0, :, : y_hat_lengths[0]
- ]
- }
- )
- image_dict.update(
- {
- f"gt/mel_{batch_idx}": utils.plot_spectrogram_to_numpy(
- mel[0].cpu().numpy()
- )
- }
- )
- audio_dict.update({f"gt/audio_{batch_idx}": y[0, :, : y_lengths[0]]})
-
- utils.summarize(
- writer=writer_eval,
- global_step=global_step,
- images=image_dict,
- audios=audio_dict,
- audio_sampling_rate=hps.data.sampling_rate,
- )
- generator.train()
-
-
-if __name__ == "__main__":
- run()
diff --git a/spaces/MatrixYao/how_many_data_points_zh/README.md b/spaces/MatrixYao/how_many_data_points_zh/README.md
deleted file mode 100644
index 0ff4eaa2aacdff74cf7b4367ca8d6d3f99752af0..0000000000000000000000000000000000000000
--- a/spaces/MatrixYao/how_many_data_points_zh/README.md
+++ /dev/null
@@ -1,12 +0,0 @@
----
-title: How Many Data Points
-emoji: 🦀
-colorFrom: red
-colorTo: yellow
-sdk: docker
-pinned: false
-app_port: 5006
-duplicated_from: teven-projects/how_many_data_points
----
-
-Check out the configuration reference at https://huggingface.co/docs/hub/spaces-config-reference
diff --git a/spaces/Matthijs/mms-tts-demo/uroman/bin/uroman.pl b/spaces/Matthijs/mms-tts-demo/uroman/bin/uroman.pl
deleted file mode 100644
index f1182aee6e5c3422882150b5babeec664b689401..0000000000000000000000000000000000000000
--- a/spaces/Matthijs/mms-tts-demo/uroman/bin/uroman.pl
+++ /dev/null
@@ -1,138 +0,0 @@
-#!/usr/bin/perl -w
-
-# uroman Nov. 12, 2015 - Apr. 23, 2021
-$version = "v1.2.8";
-# Author: Ulf Hermjakob
-
-# Usage: uroman.pl {-l [ara|bel|bul|deu|ell|eng|fas|grc|heb|kaz|kir|lav|lit|mkd|mkd2|oss|pnt|rus|srp|srp2|tur|uig|ukr|yid]} {--chart|--offset-mapping} {--no-cache} {--workset} < STDIN
-# Example: cat workset.txt | uroman.pl --offset-mapping --workset
-
-$|=1;
-
-use FindBin;
-use Cwd "abs_path";
-use File::Basename qw(dirname);
-use File::Spec;
-
-my $bin_dir = abs_path(dirname($0));
-my $root_dir = File::Spec->catfile($bin_dir, File::Spec->updir());
-my $data_dir = File::Spec->catfile($root_dir, "data");
-my $lib_dir = File::Spec->catfile($root_dir, "lib");
-
-use lib "$FindBin::Bin/../lib";
-use NLP::Chinese;
-use NLP::Romanizer;
-use NLP::UTF8;
-use NLP::utilities;
-use JSON;
-$chinesePM = NLP::Chinese;
-$romanizer = NLP::Romanizer;
-$util = NLP::utilities;
-%ht = ();
-%pinyin_ht = ();
-$lang_code = "";
-$return_chart_p = 0;
-$return_offset_mappings_p = 0;
-$workset_p = 0;
-$cache_rom_tokens_p = 1;
-
-$script_data_filename = File::Spec->catfile($data_dir, "Scripts.txt");
-$unicode_data_overwrite_filename = File::Spec->catfile($data_dir, "UnicodeDataOverwrite.txt");
-$unicode_data_filename = File::Spec->catfile($data_dir, "UnicodeData.txt");
-$romanization_table_filename = File::Spec->catfile($data_dir, "romanization-table.txt");
-$chinese_tonal_pinyin_filename = File::Spec->catfile($data_dir, "Chinese_to_Pinyin.txt");
-
-while (@ARGV) {
- $arg = shift @ARGV;
- if ($arg =~ /^-+(l|lc|lang-code)$/) {
- $lang_code = lc (shift @ARGV || "")
- } elsif ($arg =~ /^-+chart$/i) {
- $return_chart_p = 1;
- } elsif ($arg =~ /^-+workset$/i) {
- $workset_p = 1;
- } elsif ($arg =~ /^-+offset[-_]*map/i) {
- $return_offset_mappings_p = 1;
- } elsif ($arg =~ /^-+unicode[-_]?data/i) {
- $filename = shift @ARGV;
- if (-r $filename) {
- $unicode_data_filename = $filename;
- } else {
- print STDERR "Ignoring invalid UnicodeData filename $filename\n";
- }
- } elsif ($arg =~ /^-+(no-tok-cach|no-cach)/i) {
- $cache_rom_tokens_p = 0;
- } else {
- print STDERR "Ignoring unrecognized arg $arg\n";
- }
-}
-
-$romanizer->load_script_data(*ht, $script_data_filename);
-$romanizer->load_unicode_data(*ht, $unicode_data_filename);
-$romanizer->load_unicode_overwrite_romanization(*ht, $unicode_data_overwrite_filename);
-$romanizer->load_romanization_table(*ht, $romanization_table_filename);
-$chinese_to_pinyin_not_yet_loaded_p = 1;
-$current_date = $util->datetime("dateTtime");
-$lang_code_clause = ($lang_code) ? " \"lang-code\":\"$lang_code\",\n" : "";
-
-print "{\n \"romanizer\":\"uroman $version (Ulf Hermjakob, USC/ISI)\",\n \"date\":\"$current_date\",\n$lang_code_clause \"romanization\": [\n" if $return_chart_p;
-my $line_number = 0;
-my $chart_result = "";
-while (<>) {
- $line_number++;
- my $line = $_;
- my $snt_id = "";
- if ($workset_p) {
- next if $line =~ /^#/;
- if (($i_value, $s_value) = ($line =~ /^(\S+\.\d+)\s(.*)$/)) {
- $snt_id = $i_value;
- $line = "$s_value\n";
- } else {
- next;
- }
- }
- if ($chinese_to_pinyin_not_yet_loaded_p && $chinesePM->string_contains_utf8_cjk_unified_ideograph_p($line)) {
- $chinesePM->read_chinese_tonal_pinyin_files(*pinyin_ht, $chinese_tonal_pinyin_filename);
- $chinese_to_pinyin_not_yet_loaded_p = 0;
- }
- if ($return_chart_p) {
- print $chart_result;
- *chart_ht = $romanizer->romanize($line, $lang_code, "", *ht, *pinyin_ht, 0, "return chart", $line_number);
- $chart_result = $romanizer->chart_to_json_romanization_elements(0, $chart_ht{N_CHARS}, *chart_ht, $line_number);
- } elsif ($return_offset_mappings_p) {
- ($best_romanization, $offset_mappings) = $romanizer->romanize($line, $lang_code, "", *ht, *pinyin_ht, 0, "return offset mappings", $line_number, 0);
- print "::snt-id $snt_id\n" if $workset_p;
- print "::orig $line";
- print "::rom $best_romanization\n";
- print "::align $offset_mappings\n\n";
- } elsif ($cache_rom_tokens_p) {
- print $romanizer->romanize_by_token_with_caching($line, $lang_code, "", *ht, *pinyin_ht, 0, "", $line_number) . "\n";
- } else {
- print $romanizer->romanize($line, $lang_code, "", *ht, *pinyin_ht, 0, "", $line_number) . "\n";
- }
-}
-$chart_result =~ s/,(\s*)$/$1/;
-print $chart_result;
-print " ]\n}\n" if $return_chart_p;
-
-$dev_test_p = 0;
-if ($dev_test_p) {
- $n_suspicious_code_points = 0;
- $n_instances = 0;
- foreach $char_name (sort { hex($ht{UTF_NAME_TO_UNICODE}->{$a}) <=> hex($ht{UTF_NAME_TO_UNICODE}->{$b}) }
- keys %{$ht{SUSPICIOUS_ROMANIZATION}}) {
- $unicode_value = $ht{UTF_NAME_TO_UNICODE}->{$char_name};
- $utf8_string = $ht{UTF_NAME_TO_CODE}->{$char_name};
- foreach $romanization (sort keys %{$ht{SUSPICIOUS_ROMANIZATION}->{$char_name}}) {
- $count = $ht{SUSPICIOUS_ROMANIZATION}->{$char_name}->{$romanization};
- $s = ($count == 1) ? "" : "s";
- print STDERR "*** Suspiciously lengthy romanization:\n" unless $n_suspicious_code_points;
- print STDERR "::s $utf8_string ::t $romanization ::comment $char_name (U+$unicode_value)\n";
- $n_suspicious_code_points++;
- $n_instances += $count;
- }
- }
- print STDERR " *** Total of $n_suspicious_code_points suspicious code points ($n_instances instance$s)\n" if $n_suspicious_code_points;
-}
-
-exit 0;
-
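For readers of the loop above: in `--offset-map --workset` mode the script emits blank-line-separated records of `::tag value` lines (`::snt-id`, `::orig`, `::rom`, `::align`). Below is a minimal Python sketch of a downstream parser for that output; the parser name and the alignment string in the sample are illustrative, not part of uroman.

```python
# Hypothetical helper (not part of uroman) that parses the ::tag records
# printed by the Perl loop above in "--offset-map --workset" mode.
def parse_uroman_records(text):
    records = []
    for block in text.strip().split("\n\n"):  # records are blank-line separated
        record = {}
        for line in block.splitlines():
            if line.startswith("::"):
                tag, _, value = line.partition(" ")
                record[tag.lstrip(":")] = value
        records.append(record)
    return records

sample = "::snt-id DOC.1\n::orig Привет\n::rom Privet\n::align 0-6:0-6\n"
print(parse_uroman_records(sample))
# [{'snt-id': 'DOC.1', 'orig': 'Привет', 'rom': 'Privet', 'align': '0-6:0-6'}]
```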
diff --git a/spaces/MaxReimann/Whitebox-Style-Transfer-Editing/demo_config.py b/spaces/MaxReimann/Whitebox-Style-Transfer-Editing/demo_config.py
deleted file mode 100644
index f9defdc676c5027ea583ac4a7235acb8abd96351..0000000000000000000000000000000000000000
--- a/spaces/MaxReimann/Whitebox-Style-Transfer-Editing/demo_config.py
+++ /dev/null
@@ -1,2 +0,0 @@
-HUGGING_FACE=True  # True when running on Hugging Face; the Space uses an extra server task for optimization
-WORKER_URL="http://94.130.222.54:8080"
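A hedged sketch of how this two-line config might be consumed: when `HUGGING_FACE` is true, the Space forwards optimization jobs to the worker at `WORKER_URL`. The `/optimize` endpoint and payload shape here are assumptions for illustration, not the actual worker API.

```python
# Illustrative only: the /optimize endpoint and payload shape are assumptions,
# not the documented API of the worker behind WORKER_URL.
import requests

from demo_config import HUGGING_FACE, WORKER_URL

def submit_optimization(payload: dict) -> dict:
    if not HUGGING_FACE:
        raise RuntimeError("Run the optimizer in-process when not on Hugging Face.")
    resp = requests.post(f"{WORKER_URL}/optimize", json=payload, timeout=60)
    resp.raise_for_status()
    return resp.json()
```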
diff --git a/spaces/Mellow-ai/PhotoAI_Mellow/annotator/uniformer/mmcv/fileio/io.py b/spaces/Mellow-ai/PhotoAI_Mellow/annotator/uniformer/mmcv/fileio/io.py
deleted file mode 100644
index aaefde58aa3ea5b58f86249ce7e1c40c186eb8dd..0000000000000000000000000000000000000000
--- a/spaces/Mellow-ai/PhotoAI_Mellow/annotator/uniformer/mmcv/fileio/io.py
+++ /dev/null
@@ -1,151 +0,0 @@
-# Copyright (c) OpenMMLab. All rights reserved.
-from io import BytesIO, StringIO
-from pathlib import Path
-
-from ..utils import is_list_of, is_str
-from .file_client import FileClient
-from .handlers import BaseFileHandler, JsonHandler, PickleHandler, YamlHandler
-
-file_handlers = {
- 'json': JsonHandler(),
- 'yaml': YamlHandler(),
- 'yml': YamlHandler(),
- 'pickle': PickleHandler(),
- 'pkl': PickleHandler()
-}
-
-
-def load(file, file_format=None, file_client_args=None, **kwargs):
- """Load data from json/yaml/pickle files.
-
- This method provides a unified API for loading data from serialized files.
-
- Note:
- In v1.3.16 and later, ``load`` supports loading data from serialized
- files that can be stored in different backends.
-
- Args:
- file (str or :obj:`Path` or file-like object): Filename or a file-like
- object.
- file_format (str, optional): If not specified, the file format will be
- inferred from the file extension, otherwise use the specified one.
- Currently supported formats include "json", "yaml/yml" and
- "pickle/pkl".
- file_client_args (dict, optional): Arguments to instantiate a
- FileClient. See :class:`mmcv.fileio.FileClient` for details.
- Default: None.
-
- Examples:
- >>> load('/path/of/your/file') # file is stored on disk
- >>> load('https://path/of/your/file') # file is stored on the Internet
- >>> load('s3://path/of/your/file') # file is stored in petrel
-
- Returns:
- The content from the file.
- """
- if isinstance(file, Path):
- file = str(file)
- if file_format is None and is_str(file):
- file_format = file.split('.')[-1]
- if file_format not in file_handlers:
- raise TypeError(f'Unsupported format: {file_format}')
-
- handler = file_handlers[file_format]
- if is_str(file):
- file_client = FileClient.infer_client(file_client_args, file)
- if handler.str_like:
- with StringIO(file_client.get_text(file)) as f:
- obj = handler.load_from_fileobj(f, **kwargs)
- else:
- with BytesIO(file_client.get(file)) as f:
- obj = handler.load_from_fileobj(f, **kwargs)
- elif hasattr(file, 'read'):
- obj = handler.load_from_fileobj(file, **kwargs)
- else:
- raise TypeError('"file" must be a filepath str or a file-object')
- return obj
-
-
-def dump(obj, file=None, file_format=None, file_client_args=None, **kwargs):
- """Dump data to json/yaml/pickle strings or files.
-
- This method provides a unified API for dumping data as strings or to files,
- and also supports custom arguments for each file format.
-
- Note:
- In v1.3.16 and later, ``dump`` supports dumping data as strings or to
- files which can be saved to different backends.
-
- Args:
- obj (any): The python object to be dumped.
- file (str or :obj:`Path` or file-like object, optional): If not
- specified, then the object is dumped to a str, otherwise to a file
- specified by the filename or file-like object.
- file_format (str, optional): Same as :func:`load`.
- file_client_args (dict, optional): Arguments to instantiate a
- FileClient. See :class:`mmcv.fileio.FileClient` for details.
- Default: None.
-
- Examples:
- >>> dump('hello world', '/path/of/your/file') # disk
- >>> dump('hello world', 's3://path/of/your/file') # ceph or petrel
-
- Returns:
- The dumped str when ``file`` is None, otherwise ``None``.
- """
- if isinstance(file, Path):
- file = str(file)
- if file_format is None:
- if is_str(file):
- file_format = file.split('.')[-1]
- elif file is None:
- raise ValueError(
- 'file_format must be specified since file is None')
- if file_format not in file_handlers:
- raise TypeError(f'Unsupported format: {file_format}')
-
- handler = file_handlers[file_format]
- if file is None:
- return handler.dump_to_str(obj, **kwargs)
- elif is_str(file):
- file_client = FileClient.infer_client(file_client_args, file)
- if handler.str_like:
- with StringIO() as f:
- handler.dump_to_fileobj(obj, f, **kwargs)
- file_client.put_text(f.getvalue(), file)
- else:
- with BytesIO() as f:
- handler.dump_to_fileobj(obj, f, **kwargs)
- file_client.put(f.getvalue(), file)
- elif hasattr(file, 'write'):
- handler.dump_to_fileobj(obj, file, **kwargs)
- else:
- raise TypeError('"file" must be a filename str or a file-object')
-
-
-def _register_handler(handler, file_formats):
- """Register a handler for some file extensions.
-
- Args:
- handler (:obj:`BaseFileHandler`): Handler to be registered.
- file_formats (str or list[str]): File formats to be handled by this
- handler.
- """
- if not isinstance(handler, BaseFileHandler):
- raise TypeError(
- f'handler must be a child of BaseFileHandler, not {type(handler)}')
- if isinstance(file_formats, str):
- file_formats = [file_formats]
- if not is_list_of(file_formats, str):
- raise TypeError('file_formats must be a str or a list of str')
- for ext in file_formats:
- file_handlers[ext] = handler
-
-
-def register_handler(file_formats, **kwargs):
- """Decorator that registers a handler class for the given file formats."""
-
- def wrap(cls):
- _register_handler(cls(**kwargs), file_formats)
- return cls
-
- return wrap
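To illustrate the registry above, here is a minimal sketch of adding a plain-text handler via the `@register_handler` decorator. `TxtHandler` is a made-up example class; `BaseFileHandler`, `register_handler`, `load`, and `dump` come from the module itself (import paths follow this Space's file layout).

```python
# Sketch: register a ".txt" handler so load()/dump() above can route text files.
from annotator.uniformer.mmcv.fileio.handlers import BaseFileHandler
from annotator.uniformer.mmcv.fileio.io import dump, load, register_handler

@register_handler('txt')
class TxtHandler(BaseFileHandler):  # hypothetical example handler

    def load_from_fileobj(self, file, **kwargs):
        return file.read()

    def dump_to_fileobj(self, obj, file, **kwargs):
        file.write(str(obj))

    def dump_to_str(self, obj, **kwargs):
        return str(obj)

dump('hello world', '/tmp/demo.txt')  # dispatched to TxtHandler via the 'txt' key
print(load('/tmp/demo.txt'))          # -> 'hello world'
```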
diff --git a/spaces/Mellow-ai/PhotoAI_Mellow/annotator/uniformer/mmseg/models/utils/weight_init.py b/spaces/Mellow-ai/PhotoAI_Mellow/annotator/uniformer/mmseg/models/utils/weight_init.py
deleted file mode 100644
index 38141ba3d61f64ddfc0a31574b4648cbad96d7dd..0000000000000000000000000000000000000000
--- a/spaces/Mellow-ai/PhotoAI_Mellow/annotator/uniformer/mmseg/models/utils/weight_init.py
+++ /dev/null
@@ -1,62 +0,0 @@
-"""Modified from https://github.com/rwightman/pytorch-image-
-models/blob/master/timm/models/layers/drop.py."""
-
-import math
-import warnings
-
-import torch
-
-
-def _no_grad_trunc_normal_(tensor, mean, std, a, b):
- """Reference: https://people.sc.fsu.edu/~jburkardt/presentations
- /truncated_normal.pdf"""
-
- def norm_cdf(x):
- # Computes standard normal cumulative distribution function
- return (1. + math.erf(x / math.sqrt(2.))) / 2.
-
- if (mean < a - 2 * std) or (mean > b + 2 * std):
- warnings.warn(
- 'mean is more than 2 std from [a, b] in nn.init.trunc_normal_. '
- 'The distribution of values may be incorrect.',
- stacklevel=2)
-
- with torch.no_grad():
- # Values are generated by using a truncated uniform distribution and
- # then using the inverse CDF for the normal distribution.
- # Get upper and lower cdf values
- lower_bound = norm_cdf((a - mean) / std)
- upper_bound = norm_cdf((b - mean) / std)
-
- # Uniformly fill tensor with values from [l, u], then translate to
- # [2l-1, 2u-1].
- tensor.uniform_(2 * lower_bound - 1, 2 * upper_bound - 1)
-
- # Use inverse cdf transform for normal distribution to get truncated
- # standard normal
- tensor.erfinv_()
-
- # Transform to proper mean, std
- tensor.mul_(std * math.sqrt(2.))
- tensor.add_(mean)
-
- # Clamp to ensure it's in the proper range
- tensor.clamp_(min=a, max=b)
- return tensor
-
-
-def trunc_normal_(tensor, mean=0., std=1., a=-2., b=2.):
- r"""Fills the input Tensor with values drawn from a truncated
- normal distribution. The values are effectively drawn from the
- normal distribution :math:`\mathcal{N}(\text{mean}, \text{std}^2)`
- with values outside :math:`[a, b]` redrawn until they are within
- the bounds. The method used for generating the random values works
- best when :math:`a \leq \text{mean} \leq b`.
- Args:
- tensor (``torch.Tensor``): an n-dimensional `torch.Tensor`
- mean (float): the mean of the normal distribution
- std (float): the standard deviation of the normal distribution
- a (float): the minimum cutoff value
- b (float): the maximum cutoff value
- """
- return _no_grad_trunc_normal_(tensor, mean, std, a, b)
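A short usage sketch, assuming `trunc_normal_` above is importable from this module: ViT-style initialization draws weights from N(0, 0.02²) truncated to [-2, 2].

```python
# Sketch: initialize a projection layer with the truncated normal above.
import torch.nn as nn

from annotator.uniformer.mmseg.models.utils.weight_init import trunc_normal_

proj = nn.Linear(768, 768)
trunc_normal_(proj.weight, mean=0., std=0.02, a=-2., b=2.)
nn.init.zeros_(proj.bias)
# Every sampled weight is clamped into [a, b]:
assert proj.weight.min().item() >= -2. and proj.weight.max().item() <= 2.
```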
diff --git a/spaces/MercuryLeafer/img-to-music/share_btn.py b/spaces/MercuryLeafer/img-to-music/share_btn.py
deleted file mode 100644
index 351a8f6252414dc48fd9972867f875a002731c19..0000000000000000000000000000000000000000
--- a/spaces/MercuryLeafer/img-to-music/share_btn.py
+++ /dev/null
@@ -1,104 +0,0 @@
-community_icon_html = """"""
-
-loading_icon_html = """"""
-
-share_js = """async () => {
- async function uploadFile(file){
- const UPLOAD_URL = 'https://huggingface.co/uploads';
- const response = await fetch(UPLOAD_URL, {
- method: 'POST',
- headers: {
- 'Content-Type': file.type,
- 'X-Requested-With': 'XMLHttpRequest',
- },
- body: file, /// <- File inherits from Blob
- });
- const url = await response.text();
- return url;
- }
- async function getInputImgFile(imgEl){
- const res = await fetch(imgEl.src);
- const blob = await res.blob();
- const imgId = Date.now() % 200;
- const isPng = imgEl.src.startsWith(`data:image/png`);
- if(isPng){
- const fileName = `sd-perception-${imgId}.png`;
- return new File([blob], fileName, { type: 'image/png' });
- }else{
- const fileName = `sd-perception-${imgId}.jpg`;
- return new File([blob], fileName, { type: 'image/jpeg' });
- }
- }
- async function getOutputMusicFile(audioEL){
- const res = await fetch(audioEL.src);
- const blob = await res.blob();
- const audioId = Date.now() % 200;
- const fileName = `img-to-music-${audioId}.wav`;
- const musicBlob = new File([blob], fileName, { type: 'audio/wav' });
- console.log(musicBlob);
- return musicBlob;
- }
-
- async function audioToBase64(audioFile) {
- return new Promise((resolve, reject) => {
- let reader = new FileReader();
- reader.readAsDataURL(audioFile);
- reader.onload = () => resolve(reader.result);
- reader.onerror = error => reject(error);
-
- });
- }
- const gradioEl = document.querySelector('body > gradio-app');
- // const gradioEl = document.querySelector("gradio-app").shadowRoot;
- const inputImgEl = gradioEl.querySelector('#input-img img');
- const prompts = gradioEl.querySelector('#prompts_out textarea').value;
- const outputMusic = gradioEl.querySelector('#music-output audio');
- const outputMusic_src = gradioEl.querySelector('#music-output audio').src;
- const outputMusic_name = outputMusic_src.split('/').pop();
- let titleTxt = outputMusic_name;
- //if(titleTxt.length > 100){
- // titleTxt = titleTxt.slice(0, 100) + ' ...';
- //}
- const shareBtnEl = gradioEl.querySelector('#share-btn');
- const shareIconEl = gradioEl.querySelector('#share-btn-share-icon');
- const loadingIconEl = gradioEl.querySelector('#share-btn-loading-icon');
- if(!outputMusic){
- return;
- };
- shareBtnEl.style.pointerEvents = 'none';
- shareIconEl.style.display = 'none';
- loadingIconEl.style.removeProperty('display');
- const inputFile = await getInputImgFile(inputImgEl);
- const urlInputImg = await uploadFile(inputFile);
- const musicFile = await getOutputMusicFile(outputMusic);
- const dataOutputMusic = await uploadFile(musicFile);
-
- const descriptionMd = `#### Input img:
-
-
-#### Prompts out:
-${prompts}
-
-#### Music:
-
-
-`;
- const params = new URLSearchParams({
- title: titleTxt,
- description: descriptionMd,
- });
- const paramsStr = params.toString();
- window.open(`https://huggingface.co/spaces/fffiloni/img-to-music/discussions/new?${paramsStr}`, '_blank');
- shareBtnEl.style.removeProperty('pointer-events');
- shareIconEl.style.removeProperty('display');
- loadingIconEl.style.display = 'none';
-}"""
\ No newline at end of file
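For comparison, a hedged Python sketch of the upload step the JS above performs: it POSTs raw file bytes to `https://huggingface.co/uploads` with the same headers and reads the hosted URL from the response body. The response-is-a-URL contract is inferred from that JS, not from documented API guarantees.

```python
# Mirrors uploadFile() from the JS above; response handling is an assumption.
import requests

def upload_file(path: str, content_type: str) -> str:
    with open(path, 'rb') as f:
        resp = requests.post(
            'https://huggingface.co/uploads',
            headers={
                'Content-Type': content_type,
                'X-Requested-With': 'XMLHttpRequest',
            },
            data=f,  # stream the raw bytes, like `body: file` in the JS
        )
    resp.raise_for_status()
    return resp.text  # the endpoint answers with the public file URL

# url = upload_file('img-to-music-42.wav', 'audio/wav')
```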
diff --git a/spaces/Miuzarte/SUI-svc-4.0/README.md b/spaces/Miuzarte/SUI-svc-4.0/README.md
deleted file mode 100644
index 3f28cf165ca4552bfe2d787e14b47d3bc52673f1..0000000000000000000000000000000000000000
--- a/spaces/Miuzarte/SUI-svc-4.0/README.md
+++ /dev/null
@@ -1,13 +0,0 @@
----
-title: AI SUI (Singing Voice Changer), 2nd Generation
-emoji: 🕊
-colorFrom: red
-colorTo: pink
-sdk: gradio
-sdk_version: 3.16.2
-app_file: app.py
-pinned: false
-license: mit
----
-
-Check out the configuration reference at https://huggingface.co/docs/hub/spaces-config-reference
diff --git a/spaces/Mountchicken/MAERec-Gradio/mmocr/models/common/layers/transformer_layers.py b/spaces/Mountchicken/MAERec-Gradio/mmocr/models/common/layers/transformer_layers.py
deleted file mode 100644
index 8be138d5c5af89b96f27f3646b14a60302659105..0000000000000000000000000000000000000000
--- a/spaces/Mountchicken/MAERec-Gradio/mmocr/models/common/layers/transformer_layers.py
+++ /dev/null
@@ -1,167 +0,0 @@
-# Copyright (c) OpenMMLab. All rights reserved.
-import torch.nn as nn
-from mmengine.model import BaseModule
-
-from mmocr.models.common.modules import (MultiHeadAttention,
- PositionwiseFeedForward)
-
-
-class TFEncoderLayer(BaseModule):
- """Transformer Encoder Layer.
-
- Args:
- d_model (int): The number of expected features
- in the encoder inputs (default=512).
- d_inner (int): The dimension of the feedforward
- network model (default=256).
- n_head (int): The number of heads in the
- multiheadattention models (default=8).
- d_k (int): Total number of features in key.
- d_v (int): Total number of features in value.
- dropout (float): Dropout layer on attn_output_weights.
- qkv_bias (bool): Add bias in projection layer. Default: False.
- act_cfg (dict): Activation cfg for feedforward module.
- operation_order (tuple[str]): The execution order of operation
- in transformer. Such as ('self_attn', 'norm', 'ffn', 'norm')
- or ('norm', 'self_attn', 'norm', 'ffn').
- Default: None.
- """
-
- def __init__(self,
- d_model=512,
- d_inner=256,
- n_head=8,
- d_k=64,
- d_v=64,
- dropout=0.1,
- qkv_bias=False,
- act_cfg=dict(type='mmengine.GELU'),
- operation_order=None):
- super().__init__()
- self.attn = MultiHeadAttention(
- n_head, d_model, d_k, d_v, qkv_bias=qkv_bias, dropout=dropout)
- self.norm1 = nn.LayerNorm(d_model)
- self.mlp = PositionwiseFeedForward(
- d_model, d_inner, dropout=dropout, act_cfg=act_cfg)
- self.norm2 = nn.LayerNorm(d_model)
-
- self.operation_order = operation_order
- if self.operation_order is None:
- self.operation_order = ('norm', 'self_attn', 'norm', 'ffn')
-
- assert self.operation_order in [('norm', 'self_attn', 'norm', 'ffn'),
- ('self_attn', 'norm', 'ffn', 'norm')]
-
- def forward(self, x, mask=None):
- if self.operation_order == ('self_attn', 'norm', 'ffn', 'norm'):
- residual = x
- x = residual + self.attn(x, x, x, mask)
- x = self.norm1(x)
-
- residual = x
- x = residual + self.mlp(x)
- x = self.norm2(x)
- elif self.operation_order == ('norm', 'self_attn', 'norm', 'ffn'):
- residual = x
- x = self.norm1(x)
- x = residual + self.attn(x, x, x, mask)
-
- residual = x
- x = self.norm2(x)
- x = residual + self.mlp(x)
-
- return x
-
-
-class TFDecoderLayer(nn.Module):
- """Transformer Decoder Layer.
-
- Args:
- d_model (int): The number of expected features
- in the decoder inputs (default=512).
- d_inner (int): The dimension of the feedforward
- network model (default=256).
- n_head (int): The number of heads in the
- multiheadattention models (default=8).
- d_k (int): Total number of features in key.
- d_v (int): Total number of features in value.
- dropout (float): Dropout layer on attn_output_weights.
- qkv_bias (bool): Add bias in projection layer. Default: False.
- act_cfg (dict): Activation cfg for feedforward module.
- operation_order (tuple[str]): The execution order of operation
- in transformer. Such as ('self_attn', 'norm', 'enc_dec_attn',
- 'norm', 'ffn', 'norm') or ('norm', 'self_attn', 'norm',
- 'enc_dec_attn', 'norm', 'ffn').
- Default: None.
- """
-
- def __init__(self,
- d_model=512,
- d_inner=256,
- n_head=8,
- d_k=64,
- d_v=64,
- dropout=0.1,
- qkv_bias=False,
- act_cfg=dict(type='mmengine.GELU'),
- operation_order=None):
- super().__init__()
-
- self.norm1 = nn.LayerNorm(d_model)
- self.norm2 = nn.LayerNorm(d_model)
- self.norm3 = nn.LayerNorm(d_model)
-
- self.self_attn = MultiHeadAttention(
- n_head, d_model, d_k, d_v, dropout=dropout, qkv_bias=qkv_bias)
-
- self.enc_attn = MultiHeadAttention(
- n_head, d_model, d_k, d_v, dropout=dropout, qkv_bias=qkv_bias)
-
- self.mlp = PositionwiseFeedForward(
- d_model, d_inner, dropout=dropout, act_cfg=act_cfg)
-
- self.operation_order = operation_order
- if self.operation_order is None:
- self.operation_order = ('norm', 'self_attn', 'norm',
- 'enc_dec_attn', 'norm', 'ffn')
- assert self.operation_order in [
- ('norm', 'self_attn', 'norm', 'enc_dec_attn', 'norm', 'ffn'),
- ('self_attn', 'norm', 'enc_dec_attn', 'norm', 'ffn', 'norm')
- ]
-
- def forward(self,
- dec_input,
- enc_output,
- self_attn_mask=None,
- dec_enc_attn_mask=None):
- if self.operation_order == ('self_attn', 'norm', 'enc_dec_attn',
- 'norm', 'ffn', 'norm'):
- dec_attn_out = self.self_attn(dec_input, dec_input, dec_input,
- self_attn_mask)
- dec_attn_out += dec_input
- dec_attn_out = self.norm1(dec_attn_out)
-
- enc_dec_attn_out = self.enc_attn(dec_attn_out, enc_output,
- enc_output, dec_enc_attn_mask)
- enc_dec_attn_out += dec_attn_out
- enc_dec_attn_out = self.norm2(enc_dec_attn_out)
-
- mlp_out = self.mlp(enc_dec_attn_out)
- mlp_out += enc_dec_attn_out
- mlp_out = self.norm3(mlp_out)
- elif self.operation_order == ('norm', 'self_attn', 'norm',
- 'enc_dec_attn', 'norm', 'ffn'):
- dec_input_norm = self.norm1(dec_input)
- dec_attn_out = self.self_attn(dec_input_norm, dec_input_norm,
- dec_input_norm, self_attn_mask)
- dec_attn_out += dec_input
-
- enc_dec_attn_in = self.norm2(dec_attn_out)
- enc_dec_attn_out = self.enc_attn(enc_dec_attn_in, enc_output,
- enc_output, dec_enc_attn_mask)
- enc_dec_attn_out += dec_attn_out
-
- mlp_out = self.mlp(self.norm3(enc_dec_attn_out))
- mlp_out += enc_dec_attn_out
-
- return mlp_out
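A minimal forward-pass sketch for the two layers above, assuming the vendored `mmocr` package is importable: both default to the pre-norm `('norm', 'self_attn', ...)` ordering and preserve the (batch, length, d_model) shape.

```python
# Sketch: run one encoder layer and one decoder layer with default settings.
import torch

from mmocr.models.common.layers.transformer_layers import (TFDecoderLayer,
                                                           TFEncoderLayer)

enc_layer = TFEncoderLayer(d_model=512, d_inner=256, n_head=8)
dec_layer = TFDecoderLayer(d_model=512, d_inner=256, n_head=8)

memory = enc_layer(torch.randn(2, 25, 512))      # encoder output, same shape
out = dec_layer(torch.randn(2, 7, 512), memory)  # decode 7 steps against memory
print(out.shape)  # torch.Size([2, 7, 512])
```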
diff --git a/spaces/NATSpeech/PortaSpeech/data_gen/tts/binarizer_zh.py b/spaces/NATSpeech/PortaSpeech/data_gen/tts/binarizer_zh.py
deleted file mode 100644
index 7e47ae4b56ce0235bd06c02b88f1ddd942122772..0000000000000000000000000000000000000000
--- a/spaces/NATSpeech/PortaSpeech/data_gen/tts/binarizer_zh.py
+++ /dev/null
@@ -1,25 +0,0 @@
-import numpy as np
-from data_gen.tts.base_binarizer import BaseBinarizer
-
-
-class ZhBinarizer(BaseBinarizer):
- @staticmethod
- def process_align(tg_fn, item):
- BaseBinarizer.process_align(tg_fn, item)
- # char-level pitch
- if 'f0' in item:
- ph_list = item['ph'].split(" ")
- item['f0_ph'] = np.array([0 for _ in item['f0']], dtype=float)
- char_start_idx = 0
- f0s_char = []
- for idx, (f0_, ph_idx) in enumerate(zip(item['f0'], item['mel2ph'])):
- is_pinyin = ph_list[ph_idx - 1][0].isalpha()
- if not is_pinyin or ph_idx - item['mel2ph'][idx - 1] > 1:
- if len(f0s_char) > 0:
- item['f0_ph'][char_start_idx:idx] = sum(f0s_char) / len(f0s_char)
- f0s_char = []
- char_start_idx = idx
- if not is_pinyin:
- char_start_idx += 1
- if f0_ > 0:
- f0s_char.append(f0_)
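To make the char-level pitch loop above concrete, here is a simplified numpy sketch with toy numbers: frames are grouped by the phoneme they belong to (via `mel2ph`), and each group is filled with the mean of its voiced (f0 > 0) frames. Note the real code additionally merges consecutive pinyin phonemes of the same character before averaging.

```python
# Toy illustration (per-phoneme grouping; data values are made up).
import numpy as np

f0 = np.array([100., 110., 0., 200., 210.])  # frame-level pitch, 0 = unvoiced
mel2ph = np.array([1, 1, 1, 2, 2])           # frame -> 1-based phoneme index

f0_ph = np.zeros_like(f0)
for ph_idx in np.unique(mel2ph):
    frames = mel2ph == ph_idx
    voiced = f0[frames][f0[frames] > 0]
    if len(voiced):
        f0_ph[frames] = voiced.mean()

print(f0_ph)  # [105. 105. 105. 205. 205.]
```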
diff --git a/spaces/NCTCMumbai/NCTC/models/official/nlp/albert/__init__.py b/spaces/NCTCMumbai/NCTC/models/official/nlp/albert/__init__.py
deleted file mode 100644
index e69de29bb2d1d6434b8b29ae775ad8c2e48c5391..0000000000000000000000000000000000000000
diff --git a/spaces/NCTCMumbai/NCTC/models/official/nlp/bert/model_training_utils.py b/spaces/NCTCMumbai/NCTC/models/official/nlp/bert/model_training_utils.py
deleted file mode 100644
index f0fe67615726906a6b1d3ef38a5ca9acfe8502de..0000000000000000000000000000000000000000
--- a/spaces/NCTCMumbai/NCTC/models/official/nlp/bert/model_training_utils.py
+++ /dev/null
@@ -1,572 +0,0 @@
-# Copyright 2019 The TensorFlow Authors. All Rights Reserved.
-#
-# Licensed under the Apache License, Version 2.0 (the "License");
-# you may not use this file except in compliance with the License.
-# You may obtain a copy of the License at
-#
-# http://www.apache.org/licenses/LICENSE-2.0
-#
-# Unless required by applicable law or agreed to in writing, software
-# distributed under the License is distributed on an "AS IS" BASIS,
-# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
-# See the License for the specific language governing permissions and
-# limitations under the License.
-# ==============================================================================
-"""A light weight utilities to train NLP models."""
-
-from __future__ import absolute_import
-from __future__ import division
-from __future__ import print_function
-
-import json
-import os
-import tempfile
-
-from absl import logging
-import tensorflow as tf
-from tensorflow.python.util import deprecation
-from official.staging.training import grad_utils
-from official.utils.misc import distribution_utils
-
-_SUMMARY_TXT = 'training_summary.txt'
-_MIN_SUMMARY_STEPS = 10
-
-
-def _should_export_checkpoint(strategy):
- return (not strategy) or strategy.extended.should_checkpoint
-
-
-def _should_export_summary(strategy):
- return (not strategy) or strategy.extended.should_save_summary
-
-
-def _save_checkpoint(strategy, checkpoint, model_dir, checkpoint_prefix):
- """Saves model to with provided checkpoint prefix."""
-
- if _should_export_checkpoint(strategy):
- checkpoint_path = os.path.join(model_dir, checkpoint_prefix)
- saved_path = checkpoint.save(checkpoint_path)
- logging.info('Saving model as TF checkpoint: %s', saved_path)
- else:
- # In multi-worker training we need every worker to save a checkpoint, because
- # variables can trigger synchronization on read, and synchronization needs
- # all workers to participate. To avoid workers overwriting each other, we save
- # to a temporary directory on non-chief workers.
- tmp_dir = tempfile.mkdtemp()
- checkpoint.save(os.path.join(tmp_dir, 'ckpt'))
- tf.io.gfile.rmtree(tmp_dir)
- return
-
-
-def _get_input_iterator(input_fn, strategy):
- """Returns distributed dataset iterator."""
- # When training with TPU pods, datasets need to be cloned across
- # workers. Since Dataset instance cannot be cloned in eager mode, we instead
- # pass callable that returns a dataset.
- if not callable(input_fn):
- raise ValueError('`input_fn` should be a closure that returns a dataset.')
- iterator = iter(
- strategy.experimental_distribute_datasets_from_function(input_fn))
- return iterator
-
-
-def _float_metric_value(metric):
- """Gets the value of a float-value keras metric."""
- return metric.result().numpy().astype(float)
-
-
-def steps_to_run(current_step, steps_per_epoch, steps_per_loop):
- """Calculates steps to run on device."""
- if steps_per_loop <= 0:
- raise ValueError('steps_per_loop should be a positive integer.')
- if steps_per_loop == 1:
- return steps_per_loop
- remainder_in_epoch = current_step % steps_per_epoch
- if remainder_in_epoch != 0:
- return min(steps_per_epoch - remainder_in_epoch, steps_per_loop)
- else:
- return steps_per_loop
-
-
-def write_txt_summary(training_summary, summary_dir):
- """Writes a summary text file to record stats."""
- if not tf.io.gfile.exists(summary_dir):
- tf.io.gfile.mkdir(summary_dir)
- summary_path = os.path.join(summary_dir, _SUMMARY_TXT)
- with tf.io.gfile.GFile(summary_path, 'wb') as f:
- logging.info('Training Summary: \n%s', str(training_summary))
- f.write(json.dumps(training_summary, indent=4))
-
-
-@deprecation.deprecated(
- None, 'This function is deprecated. Please use Keras compile/fit instead.')
-def run_customized_training_loop(
- # pylint: disable=invalid-name
- _sentinel=None,
- # pylint: enable=invalid-name
- strategy=None,
- model_fn=None,
- loss_fn=None,
- scale_loss=True,
- model_dir=None,
- train_input_fn=None,
- steps_per_epoch=None,
- num_eval_per_epoch=1,
- steps_per_loop=None,
- epochs=1,
- eval_input_fn=None,
- eval_steps=None,
- metric_fn=None,
- init_checkpoint=None,
- custom_callbacks=None,
- run_eagerly=False,
- sub_model_export_name=None,
- explicit_allreduce=False,
- pre_allreduce_callbacks=None,
- post_allreduce_callbacks=None,
- train_summary_interval=0):
- """Run BERT pretrain model training using low-level API.
-
- Arguments:
- _sentinel: Used to prevent positional parameters. Internal, do not use.
- strategy: Distribution strategy on which to run low level training loop.
- model_fn: Function that returns a tuple (model, sub_model). Caller of this
- function should add optimizer to the `model` via calling
- `model.compile()` API or manually setting `model.optimizer` attribute.
- Second element of the returned tuple(sub_model) is an optional sub model
- to be used for initial checkpoint -- if provided.
- loss_fn: Function with signature func(labels, logits) and returns a loss
- tensor.
- scale_loss: Whether to divide the raw loss by number of replicas before
- gradients calculation.
- model_dir: Model directory used during training for restoring/saving model
- weights.
- train_input_fn: Function that returns a tf.data.Dataset used for training.
- steps_per_epoch: Number of steps to run per epoch. At the end of each
- epoch, model checkpoint will be saved and evaluation will be conducted
- if evaluation dataset is provided.
- num_eval_per_epoch: Number of evaluations per epoch.
- steps_per_loop: Number of steps per graph-mode loop. In order to reduce
- communication in eager context, training logs are printed every
- steps_per_loop.
- epochs: Number of epochs to train.
- eval_input_fn: Function that returns evaluation dataset. If none,
- evaluation is skipped.
- eval_steps: Number of steps to run evaluation. Required if `eval_input_fn`
- is not none.
- metric_fn: A metrics function that returns a Keras Metric object to record
- evaluation result using evaluation dataset or with training dataset
- after every epoch.
- init_checkpoint: Optional checkpoint to load to `sub_model` returned by
- `model_fn`.
- custom_callbacks: A list of Keras Callbacks objects to run during
- training. More specifically, `on_train_begin(), on_train_end(),
- on_batch_begin()`, `on_batch_end()`, `on_epoch_begin()`,
- `on_epoch_end()` methods are invoked during training.
- Note that some metrics may be missing from `logs`.
- run_eagerly: Whether to run model training in pure eager execution. This
- should be disabled for TPUStrategy.
- sub_model_export_name: If not None, will export `sub_model` returned by
- `model_fn` into checkpoint files. The name of intermediate checkpoint
- file is {sub_model_export_name}_step_{step}.ckpt and the last
- checkpoint's name is {sub_model_export_name}.ckpt; if None, `sub_model`
- will not be exported as checkpoint.
- explicit_allreduce: Whether to explicitly perform gradient allreduce,
- instead of relying on implicit allreduce in optimizer.apply_gradients().
- default is False. For now, if training using FP16 mixed precision,
- explicit allreduce will aggregate gradients in FP16 format. For TPU and
- GPU training using FP32, explicit allreduce will aggregate gradients in
- FP32 format.
- pre_allreduce_callbacks: A list of callback functions that takes gradients
- and model variables pairs as input, manipulate them, and returns a new
- gradients and model variables pairs. The callback functions will be
- invoked in the list order and before gradients are allreduced. With
- mixed precision training, the pre_allreduce_callbacks will be applied on
- scaled_gradients. Default is no callbacks. Only used when
- explicit_allreduce=True.
- post_allreduce_callbacks: A list of callback functions that takes
- gradients and model variables pairs as input, manipulate them, and
- returns a new gradients and model variables pairs. The callback
- functions will be invoked in the list order and right before gradients
- are applied to variables for updates. Default is no callbacks. Only used
- when explicit_allreduce=True.
- train_summary_interval: Step interval for training summaries. If the value
- is a negative number, then training summaries are not enabled.
-
- Returns:
- Trained model.
-
- Raises:
- ValueError: (1) When model returned by `model_fn` does not have optimizer
- attribute or when required parameters are set to none. (2) eval args are
- not specified correctly. (3) metric_fn must be a callable if specified.
- (4) sub_model_export_name is specified, but `sub_model` returned
- by `model_fn` is None.
- """
-
- if _sentinel is not None:
- raise ValueError('only call `run_customized_training_loop()` '
- 'with named arguments.')
-
- required_arguments = [
- strategy, model_fn, loss_fn, model_dir, steps_per_epoch, train_input_fn
- ]
-
- if [arg for arg in required_arguments if arg is None]:
- raise ValueError('`strategy`, `model_fn`, `loss_fn`, `model_dir`, '
- '`steps_per_epoch` and `train_input_fn` are required '
- 'parameters.')
- steps_between_evals = int(steps_per_epoch / num_eval_per_epoch)
- if not steps_per_loop:
- if tf.config.list_logical_devices('TPU'):
- # One can't fully utilize a TPU with steps_per_loop=1, so in this case
- # default users to a more useful value.
- steps_per_loop = min(1000, steps_between_evals)
- else:
- steps_per_loop = 1
- logging.info('steps_per_loop not specified. Using steps_per_loop=%d',
- steps_per_loop)
- if steps_per_loop > steps_between_evals:
- logging.warning(
- 'steps_per_loop: %d is specified to be greater than '
- ' steps_between_evals: %d, we will use steps_between_evals as'
- ' steps_per_loop.', steps_per_loop, steps_between_evals)
- steps_per_loop = steps_between_evals
- assert tf.executing_eagerly()
-
- if run_eagerly:
- if isinstance(strategy, tf.distribute.experimental.TPUStrategy):
- raise ValueError(
- 'TPUStrategy should not run eagerly as it heavily relies on graph'
- ' optimization for the distributed system.')
-
- if eval_input_fn and eval_steps is None:
- raise ValueError(
- '`eval_steps` is required when `eval_input_fn` is not None.')
- if metric_fn and not callable(metric_fn):
- raise ValueError(
- 'if `metric_fn` is specified, metric_fn must be a callable.')
-
- total_training_steps = steps_per_epoch * epochs
- train_iterator = _get_input_iterator(train_input_fn, strategy)
- eval_loss_metric = tf.keras.metrics.Mean('training_loss', dtype=tf.float32)
-
- with distribution_utils.get_strategy_scope(strategy):
- # To correctly place the model weights on accelerators,
- # model and optimizer should be created in scope.
- model, sub_model = model_fn()
- if not hasattr(model, 'optimizer'):
- raise ValueError('User should set optimizer attribute to model '
- 'inside `model_fn`.')
- if sub_model_export_name and sub_model is None:
- raise ValueError('sub_model_export_name is specified as %s, but '
- 'sub_model is None.' % sub_model_export_name)
-
- callback_list = tf.keras.callbacks.CallbackList(
- callbacks=custom_callbacks, model=model)
-
- optimizer = model.optimizer
-
- if init_checkpoint:
- logging.info(
- 'Checkpoint file %s found and restoring from '
- 'initial checkpoint for core model.', init_checkpoint)
- checkpoint = tf.train.Checkpoint(model=sub_model)
- checkpoint.restore(init_checkpoint).assert_existing_objects_matched()
- logging.info('Loading from checkpoint file completed')
-
- train_loss_metric = tf.keras.metrics.Mean('training_loss', dtype=tf.float32)
- eval_metrics = [metric_fn()] if metric_fn else []
- # If evaluation is required, make a copy of metric as it will be used by
- # both train and evaluation.
- train_metrics = [
- metric.__class__.from_config(metric.get_config())
- for metric in eval_metrics
- ]
-
- # Create summary writers
- if _should_export_summary(strategy):
- summary_dir = os.path.join(model_dir, 'summaries')
- else:
- # In multi-worker training we need every worker to write summaries, because
- # variables can trigger synchronization on read and synchronization needs
- # all workers to participate.
- summary_dir = tempfile.mkdtemp()
- eval_summary_writer = tf.summary.create_file_writer(
- os.path.join(summary_dir, 'eval'))
- last_summary_step = 0
- if steps_per_loop >= _MIN_SUMMARY_STEPS and train_summary_interval >= 0:
- # Only writes summary when the stats are collected sufficiently over
- # enough steps.
- train_summary_writer = tf.summary.create_file_writer(
- os.path.join(summary_dir, 'train'))
- else:
- train_summary_writer = tf.summary.create_noop_writer()
-
- # Collects training variables.
- training_vars = model.trainable_variables
-
- def _replicated_step(inputs):
- """Replicated training step."""
-
- inputs, labels = inputs
- with tf.GradientTape() as tape:
- model_outputs = model(inputs, training=True)
- loss = loss_fn(labels, model_outputs)
- # Raw loss is used for reporting in metrics/logs.
- raw_loss = loss
- if scale_loss:
- # Scales down the loss for gradients to be invariant from replicas.
- loss = loss / strategy.num_replicas_in_sync
-
- if explicit_allreduce:
- grad_utils.minimize_using_explicit_allreduce(tape, optimizer, loss,
- training_vars,
- pre_allreduce_callbacks,
- post_allreduce_callbacks)
- else:
- if isinstance(optimizer,
- tf.keras.mixed_precision.experimental.LossScaleOptimizer):
- with tape:
- scaled_loss = optimizer.get_scaled_loss(loss)
- scaled_grads = tape.gradient(scaled_loss, training_vars)
- grads = optimizer.get_unscaled_gradients(scaled_grads)
- else:
- grads = tape.gradient(loss, training_vars)
- optimizer.apply_gradients(zip(grads, training_vars))
- # For reporting, the metric takes the mean of losses.
- train_loss_metric.update_state(raw_loss)
- for metric in train_metrics:
- metric.update_state(labels, model_outputs)
-
- @tf.function
- def train_steps(iterator, steps):
- """Performs distributed training steps in a loop.
-
- Args:
- iterator: the distributed iterator of training datasets.
- steps: a tf.int32 integer tensor specifying the number of steps to run
- inside the host training loop.
-
- Raises:
- ValueError: Any of the arguments or tensor shapes are invalid.
- """
- if not isinstance(steps, tf.Tensor):
- raise ValueError('steps should be a Tensor. Python objects may cause '
- 'retracing.')
-
- for _ in tf.range(steps):
- strategy.run(_replicated_step, args=(next(iterator),))
-
- def train_single_step(iterator):
- """Performs a distributed training step.
-
- Args:
- iterator: the distributed iterator of training datasets.
-
- Raises:
- ValueError: Any of the arguments or tensor shapes are invalid.
- """
- strategy.run(_replicated_step, args=(next(iterator),))
-
- def test_step(iterator):
- """Calculates evaluation metrics on distributed devices."""
-
- def _test_step_fn(inputs):
- """Replicated accuracy calculation."""
-
- inputs, labels = inputs
- model_outputs = model(inputs, training=False)
- for metric in eval_metrics:
- metric.update_state(labels, model_outputs)
- return model_outputs, labels
-
- outputs, labels = strategy.run(_test_step_fn, args=(next(iterator),))
- outputs = tf.nest.map_structure(strategy.experimental_local_results,
- outputs)
- labels = tf.nest.map_structure(strategy.experimental_local_results,
- labels)
- return outputs, labels
-
- if not run_eagerly:
- train_single_step = tf.function(train_single_step)
- test_step = tf.function(test_step)
-
- def _run_evaluation(current_training_step, test_iterator):
- """Runs validation steps and aggregate metrics.
-
- Args:
- current_training_step: tf.int32 tensor containing the current step.
- test_iterator: distributed iterator of test datasets.
-
- Returns:
- A dict of metric names and values.
- """
- # The last batch of the evaluation is often smaller than previous ones.
- # Moreover, on some replicas it might even be empty. Therefore, unlike
- # the way training_loss is calculated, we need to gather all the logits
- # and labels here and compute the evaluation loss outside the step fn.
- loss_list, loss_weights = list(), list()
- for _ in range(eval_steps):
- outputs, labels = test_step(test_iterator)
- for cur_logits, cur_labels in zip(outputs, labels):
- # This is to handle cases when cur_labels is not a single tensor,
- # but a dict of tensors.
- cur_weight = tf.shape(tf.nest.flatten(cur_labels)[0])[0]
- if cur_weight != 0:
- loss_list.append(loss_fn(cur_labels, cur_logits).numpy())
- loss_weights.append(cur_weight)
- # The sample_weights are the actual number of examples in each batch,
- # a summation of numbers of examples in each replica if using
- # distributed training.
- eval_loss_metric.update_state(loss_list, sample_weight=loss_weights)
-
- logs = {}
- with eval_summary_writer.as_default():
- for metric in [eval_loss_metric] + eval_metrics + model.metrics:
- metric_value = _float_metric_value(metric)
- logs[metric.name] = metric_value
- logging.info('Step: [%d] Validation %s = %f', current_training_step,
- metric.name, metric_value)
- tf.summary.scalar(
- metric.name, metric_value, step=current_training_step)
- eval_summary_writer.flush()
-
- return logs
-
- # Training loop starts here.
- checkpoint = tf.train.Checkpoint(
- model=model, optimizer=optimizer, global_step=optimizer.iterations)
- sub_model_checkpoint = tf.train.Checkpoint(
- model=sub_model,
- global_step=optimizer.iterations) if sub_model_export_name else None
-
- latest_checkpoint_file = tf.train.latest_checkpoint(model_dir)
- if latest_checkpoint_file:
- logging.info('Checkpoint file %s found and restoring from '
- 'checkpoint', latest_checkpoint_file)
- checkpoint.restore(latest_checkpoint_file)
- logging.info('Loading from checkpoint file completed')
-
- current_step = optimizer.iterations.numpy()
- checkpoint_name = 'ctl_step_{step}.ckpt'
-
- logs = {}
- callback_list.on_train_begin()
- while current_step < total_training_steps and not model.stop_training:
- if current_step % steps_per_epoch == 0:
- callback_list.on_epoch_begin(
- int(current_step / steps_per_epoch) + 1)
-
- # Training loss/metrics take the average over steps inside the micro
- # training loop. We reset their values before each round.
- train_loss_metric.reset_states()
- for metric in train_metrics + model.metrics:
- metric.reset_states()
-
- callback_list.on_batch_begin(current_step)
- # Runs several steps in the host while loop.
- steps = steps_to_run(current_step, steps_between_evals, steps_per_loop)
-
- if tf.config.list_physical_devices('GPU'):
- # TODO(zongweiz): merge with train_steps once tf.while_loop
- # GPU performance bugs are fixed.
- for _ in range(steps):
- train_single_step(train_iterator)
- else:
- # Converts steps to a Tensor to avoid tf.function retracing.
- train_steps(train_iterator, tf.convert_to_tensor(steps, dtype=tf.int32))
- train_loss = _float_metric_value(train_loss_metric)
- current_step += steps
-
- # Updates training logging.
- training_status = 'Train Step: %d/%d / loss = %s' % (
- current_step, total_training_steps, train_loss)
-
- if current_step >= last_summary_step + train_summary_interval:
- summary_writer = train_summary_writer
- last_summary_step = current_step
- else:
- summary_writer = tf.summary.create_noop_writer()
-
- with summary_writer.as_default():
- if callable(optimizer.learning_rate):
- tf.summary.scalar(
- 'learning_rate',
- optimizer.learning_rate(current_step),
- step=current_step)
- tf.summary.scalar(train_loss_metric.name, train_loss, step=current_step)
- for metric in train_metrics + model.metrics:
- metric_value = _float_metric_value(metric)
- training_status += ' %s = %f' % (metric.name, metric_value)
- tf.summary.scalar(metric.name, metric_value, step=current_step)
- summary_writer.flush()
- logging.info(training_status)
-
- # If no need for evaluation, we only call on_batch_end with train_loss,
- # this is to ensure we get granular global_step/sec on Tensorboard.
- if current_step % steps_between_evals:
- callback_list.on_batch_end(current_step - 1, {'loss': train_loss})
- else:
- # Save a submodel with the step in the file name after each epoch.
- if sub_model_export_name:
- _save_checkpoint(
- strategy, sub_model_checkpoint, model_dir,
- '%s_step_%d.ckpt' % (sub_model_export_name, current_step))
-
- # Save model checkpoints and run validation steps after each epoch
- # (with the exception of the final epoch which is handled after the
- # training loop).
- if current_step < total_training_steps:
- _save_checkpoint(strategy, checkpoint, model_dir,
- checkpoint_name.format(step=current_step))
- if eval_input_fn:
- logging.info('Running evaluation after step: %s.', current_step)
- logs = _run_evaluation(current_step,
- _get_input_iterator(eval_input_fn, strategy))
- # Re-initialize evaluation metric.
- eval_loss_metric.reset_states()
- for metric in eval_metrics + model.metrics:
- metric.reset_states()
- # We add train_loss here rather than call on_batch_end twice to make
- # sure that no duplicated values are generated.
- logs['loss'] = train_loss
- callback_list.on_batch_end(current_step - 1, logs)
-
- # Calls on_epoch_end after each real epoch ends to prevent mis-calculation
- # of training steps.
- if current_step % steps_per_epoch == 0:
- callback_list.on_epoch_end(int(current_step / steps_per_epoch), logs)
-
- if sub_model_export_name:
- _save_checkpoint(strategy, sub_model_checkpoint, model_dir,
- '%s.ckpt' % sub_model_export_name)
-
- _save_checkpoint(strategy, checkpoint, model_dir,
- checkpoint_name.format(step=current_step))
- if eval_input_fn:
- logging.info('Running final evaluation after training is complete.')
- logs = _run_evaluation(current_step,
- _get_input_iterator(eval_input_fn, strategy))
- callback_list.on_epoch_end(int(current_step / steps_per_epoch), logs)
- training_summary = {
- 'total_training_steps': total_training_steps,
- 'train_loss': _float_metric_value(train_loss_metric),
- }
- for metric in model.metrics:
- training_summary[metric.name] = _float_metric_value(metric)
- if eval_metrics:
- # TODO(hongkuny): Cleans up summary reporting in text.
- training_summary['last_train_metrics'] = _float_metric_value(
- train_metrics[0])
- training_summary['eval_metrics'] = _float_metric_value(eval_metrics[0])
-
- write_txt_summary(training_summary, summary_dir)
-
- if not _should_export_summary(strategy):
- tf.io.gfile.rmtree(summary_dir)
-
- callback_list.on_train_end()
-
- return model
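As a small illustration of the host loop above: `steps_to_run` never lets a micro loop cross a checkpoint/eval boundary, so with 100 steps between evals and `steps_per_loop=30` the schedule is 30, 30, 30, 10, repeated. A sketch, assuming `steps_to_run` from this module is importable:

```python
# Sketch: trace the micro-loop schedule produced by steps_to_run above.
from official.nlp.bert.model_training_utils import steps_to_run

current_step, steps_between_evals, steps_per_loop = 0, 100, 30
schedule = []
while current_step < 2 * steps_between_evals:
    steps = steps_to_run(current_step, steps_between_evals, steps_per_loop)
    schedule.append(steps)
    current_step += steps

print(schedule)  # [30, 30, 30, 10, 30, 30, 30, 10]
```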
diff --git a/spaces/NCTCMumbai/NCTC/models/research/adversarial_text/layers.py b/spaces/NCTCMumbai/NCTC/models/research/adversarial_text/layers.py
deleted file mode 100644
index be4c7a47e0871182d82310e07e5739c2fc9f8744..0000000000000000000000000000000000000000
--- a/spaces/NCTCMumbai/NCTC/models/research/adversarial_text/layers.py
+++ /dev/null
@@ -1,397 +0,0 @@
-# Copyright 2017 Google Inc. All Rights Reserved.
-#
-# Licensed under the Apache License, Version 2.0 (the "License");
-# you may not use this file except in compliance with the License.
-# You may obtain a copy of the License at
-#
-# http://www.apache.org/licenses/LICENSE-2.0
-#
-# Unless required by applicable law or agreed to in writing, software
-# distributed under the License is distributed on an "AS IS" BASIS,
-# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
-# See the License for the specific language governing permissions and
-# limitations under the License.
-# ==============================================================================
-"""Layers for VatxtModel."""
-from __future__ import absolute_import
-from __future__ import division
-from __future__ import print_function
-
-# Dependency imports
-
-from six.moves import xrange
-import tensorflow as tf
-K = tf.keras
-
-
-def cl_logits_subgraph(layer_sizes, input_size, num_classes, keep_prob=1.):
- """Construct multiple ReLU layers with dropout and a linear layer."""
- subgraph = K.models.Sequential(name='cl_logits')
- for i, layer_size in enumerate(layer_sizes):
- if i == 0:
- subgraph.add(
- K.layers.Dense(layer_size, activation='relu', input_dim=input_size))
- else:
- subgraph.add(K.layers.Dense(layer_size, activation='relu'))
-
- if keep_prob < 1.:
- subgraph.add(K.layers.Dropout(1. - keep_prob))
- subgraph.add(K.layers.Dense(1 if num_classes == 2 else num_classes))
- return subgraph
-
-
-class Embedding(K.layers.Layer):
- """Embedding layer with frequency-based normalization and dropout."""
-
- def __init__(self,
- vocab_size,
- embedding_dim,
- normalize=False,
- vocab_freqs=None,
- keep_prob=1.,
- **kwargs):
- self.vocab_size = vocab_size
- self.embedding_dim = embedding_dim
- self.normalized = normalize
- self.keep_prob = keep_prob
-
- if normalize:
- assert vocab_freqs is not None
- self.vocab_freqs = tf.constant(
- vocab_freqs, dtype=tf.float32, shape=(vocab_size, 1))
-
- super(Embedding, self).__init__(**kwargs)
-
- def build(self, input_shape):
- with tf.device('/cpu:0'):
- self.var = self.add_weight(
- shape=(self.vocab_size, self.embedding_dim),
- initializer=tf.random_uniform_initializer(-1., 1.),
- name='embedding',
- dtype=tf.float32)
-
- if self.normalized:
- self.var = self._normalize(self.var)
-
- super(Embedding, self).build(input_shape)
-
- def call(self, x):
- embedded = tf.nn.embedding_lookup(self.var, x)
- if self.keep_prob < 1.:
- shape = embedded.get_shape().as_list()
-
- # Use same dropout masks at each timestep with specifying noise_shape.
- # This slightly improves performance.
- # Please see https://arxiv.org/abs/1512.05287 for the theoretical
- # explanation.
- embedded = tf.nn.dropout(
- embedded, self.keep_prob, noise_shape=(shape[0], 1, shape[2]))
- return embedded
-
- def _normalize(self, emb):
- weights = self.vocab_freqs / tf.reduce_sum(self.vocab_freqs)
- mean = tf.reduce_sum(weights * emb, 0, keep_dims=True)
- var = tf.reduce_sum(weights * tf.pow(emb - mean, 2.), 0, keep_dims=True)
- stddev = tf.sqrt(1e-6 + var)
- return (emb - mean) / stddev
-
-
-class LSTM(object):
- """LSTM layer using dynamic_rnn.
-
- Exposes variables in `trainable_weights` property.
- """
-
- def __init__(self, cell_size, num_layers=1, keep_prob=1., name='LSTM'):
- self.cell_size = cell_size
- self.num_layers = num_layers
- self.keep_prob = keep_prob
- self.reuse = None
- self.trainable_weights = None
- self.name = name
-
- def __call__(self, x, initial_state, seq_length):
- with tf.variable_scope(self.name, reuse=self.reuse) as vs:
- cell = tf.contrib.rnn.MultiRNNCell([
- tf.contrib.rnn.BasicLSTMCell(
- self.cell_size,
- forget_bias=0.0,
- reuse=tf.get_variable_scope().reuse)
- for _ in xrange(self.num_layers)
- ])
-
- # shape(x) = (batch_size, num_timesteps, embedding_dim)
-
- lstm_out, next_state = tf.nn.dynamic_rnn(
- cell, x, initial_state=initial_state, sequence_length=seq_length)
-
- # shape(lstm_out) = (batch_size, timesteps, cell_size)
-
- if self.keep_prob < 1.:
- lstm_out = tf.nn.dropout(lstm_out, self.keep_prob)
-
- if self.reuse is None:
- self.trainable_weights = vs.global_variables()
-
- self.reuse = True
-
- return lstm_out, next_state
-
-
-class SoftmaxLoss(K.layers.Layer):
- """Softmax xentropy loss with candidate sampling."""
-
- def __init__(self,
- vocab_size,
- num_candidate_samples=-1,
- vocab_freqs=None,
- **kwargs):
- self.vocab_size = vocab_size
- self.num_candidate_samples = num_candidate_samples
- self.vocab_freqs = vocab_freqs
- super(SoftmaxLoss, self).__init__(**kwargs)
- self.multiclass_dense_layer = K.layers.Dense(self.vocab_size)
-
- def build(self, input_shape):
- input_shape = input_shape[0].as_list()
- with tf.device('/cpu:0'):
- self.lin_w = self.add_weight(
- shape=(input_shape[-1], self.vocab_size),
- name='lm_lin_w',
- initializer=K.initializers.glorot_uniform())
- self.lin_b = self.add_weight(
- shape=(self.vocab_size,),
- name='lm_lin_b',
- initializer=K.initializers.glorot_uniform())
- self.multiclass_dense_layer.build(input_shape)
-
- super(SoftmaxLoss, self).build(input_shape)
-
- def call(self, inputs):
- x, labels, weights = inputs
- if self.num_candidate_samples > -1:
- assert self.vocab_freqs is not None
- labels_reshaped = tf.reshape(labels, [-1])
- labels_reshaped = tf.expand_dims(labels_reshaped, -1)
- sampled = tf.nn.fixed_unigram_candidate_sampler(
- true_classes=labels_reshaped,
- num_true=1,
- num_sampled=self.num_candidate_samples,
- unique=True,
- range_max=self.vocab_size,
- unigrams=self.vocab_freqs)
- inputs_reshaped = tf.reshape(x, [-1, int(x.get_shape()[2])])
-
- lm_loss = tf.nn.sampled_softmax_loss(
- weights=tf.transpose(self.lin_w),
- biases=self.lin_b,
- labels=labels_reshaped,
- inputs=inputs_reshaped,
- num_sampled=self.num_candidate_samples,
- num_classes=self.vocab_size,
- sampled_values=sampled)
- lm_loss = tf.reshape(
- lm_loss,
- [int(x.get_shape()[0]), int(x.get_shape()[1])])
- else:
- logits = self.multiclass_dense_layer(x)
- lm_loss = tf.nn.sparse_softmax_cross_entropy_with_logits(
- logits=logits, labels=labels)
-
- lm_loss = tf.identity(
- tf.reduce_sum(lm_loss * weights) / _num_labels(weights),
- name='lm_xentropy_loss')
- return lm_loss
-
-
-def classification_loss(logits, labels, weights):
- """Computes cross entropy loss between logits and labels.
-
- Args:
- logits: 2-D [timesteps*batch_size, m] float tensor, where m=1 if
- num_classes=2, otherwise m=num_classes.
- labels: 1-D [timesteps*batch_size] integer tensor.
- weights: 1-D [timesteps*batch_size] float tensor.
-
- Returns:
- Loss scalar of type float.
- """
- inner_dim = logits.get_shape().as_list()[-1]
- with tf.name_scope('classifier_loss'):
- # Logistic loss
- if inner_dim == 1:
- loss = tf.nn.sigmoid_cross_entropy_with_logits(
- logits=tf.squeeze(logits, -1), labels=tf.cast(labels, tf.float32))
- # Softmax loss
- else:
- loss = tf.nn.sparse_softmax_cross_entropy_with_logits(
- logits=logits, labels=labels)
-
- num_lab = _num_labels(weights)
- tf.summary.scalar('num_labels', num_lab)
- return tf.identity(
- tf.reduce_sum(weights * loss) / num_lab, name='classification_xentropy')
-
-
-def accuracy(logits, targets, weights):
- """Computes prediction accuracy.
-
- Args:
- logits: 2-D classifier logits [timesteps*batch_size, num_classes]
- targets: 1-D [timesteps*batch_size] integer tensor.
- weights: 1-D [timesteps*batch_size] float tensor.
-
- Returns:
- Accuracy: float scalar.
- """
- with tf.name_scope('accuracy'):
- eq = tf.cast(tf.equal(predictions(logits), targets), tf.float32)
- return tf.identity(
- tf.reduce_sum(weights * eq) / _num_labels(weights), name='accuracy')
-
-
-def predictions(logits):
- """Class prediction from logits."""
- inner_dim = logits.get_shape().as_list()[-1]
- with tf.name_scope('predictions'):
- # For binary classification
- if inner_dim == 1:
- pred = tf.cast(tf.greater(tf.squeeze(logits, -1), 0.), tf.int64)
- # For multi-class classification
- else:
- pred = tf.argmax(logits, 2)
- return pred
-
-
-def _num_labels(weights):
- """Number of 1's in weights. Returns 1. if 0."""
- num_labels = tf.reduce_sum(weights)
- num_labels = tf.where(tf.equal(num_labels, 0.), 1., num_labels)
- return num_labels
-
-
-def optimize(loss,
- global_step,
- max_grad_norm,
- lr,
- lr_decay,
- sync_replicas=False,
- replicas_to_aggregate=1,
- task_id=0):
- """Builds optimization graph.
-
- * Creates an optimizer, and optionally wraps with SyncReplicasOptimizer
- * Computes, clips, and applies gradients
- * Maintains moving averages for all trainable variables
- * Summarizes variables and gradients
-
- Args:
- loss: scalar loss to minimize.
- global_step: integer scalar Variable.
- max_grad_norm: float scalar. Grads will be clipped to this value.
- lr: float scalar, learning rate.
- lr_decay: float scalar, learning rate decay rate.
- sync_replicas: bool, whether to use SyncReplicasOptimizer.
- replicas_to_aggregate: int, number of replicas to aggregate when using
- SyncReplicasOptimizer.
- task_id: int, id of the current task; used to ensure proper initialization
- of SyncReplicasOptimizer.
-
- Returns:
- train_op
- """
- with tf.name_scope('optimization'):
- # Compute gradients.
- tvars = tf.trainable_variables()
- grads = tf.gradients(
- loss,
- tvars,
- aggregation_method=tf.AggregationMethod.EXPERIMENTAL_ACCUMULATE_N)
-
- # Clip non-embedding grads
- non_embedding_grads_and_vars = [(g, v) for (g, v) in zip(grads, tvars)
- if 'embedding' not in v.op.name]
- embedding_grads_and_vars = [(g, v) for (g, v) in zip(grads, tvars)
- if 'embedding' in v.op.name]
-
- ne_grads, ne_vars = zip(*non_embedding_grads_and_vars)
- ne_grads, _ = tf.clip_by_global_norm(ne_grads, max_grad_norm)
- non_embedding_grads_and_vars = zip(ne_grads, ne_vars)
-
- grads_and_vars = embedding_grads_and_vars + list(non_embedding_grads_and_vars)
-
- # Summarize
- _summarize_vars_and_grads(grads_and_vars)
-
- # Decaying learning rate
- lr = tf.train.exponential_decay(
- lr, global_step, 1, lr_decay, staircase=True)
- tf.summary.scalar('learning_rate', lr)
- opt = tf.train.AdamOptimizer(lr)
-
- # Track the moving averages of all trainable variables.
- variable_averages = tf.train.ExponentialMovingAverage(0.999, global_step)
-
- # Apply gradients
- if sync_replicas:
- opt = tf.train.SyncReplicasOptimizer(
- opt,
- replicas_to_aggregate,
- variable_averages=variable_averages,
- variables_to_average=tvars,
- total_num_replicas=replicas_to_aggregate)
- apply_gradient_op = opt.apply_gradients(
- grads_and_vars, global_step=global_step)
- with tf.control_dependencies([apply_gradient_op]):
- train_op = tf.no_op(name='train_op')
-
- # Initialization ops
- tf.add_to_collection(tf.GraphKeys.QUEUE_RUNNERS,
- opt.get_chief_queue_runner())
- if task_id == 0: # Chief task
- local_init_op = opt.chief_init_op
- tf.add_to_collection('chief_init_op', opt.get_init_tokens_op())
- else:
- local_init_op = opt.local_step_init_op
- tf.add_to_collection('local_init_op', local_init_op)
- tf.add_to_collection('ready_for_local_init_op',
- opt.ready_for_local_init_op)
- else:
- # Non-sync optimizer
- apply_gradient_op = opt.apply_gradients(grads_and_vars, global_step)
- with tf.control_dependencies([apply_gradient_op]):
- train_op = variable_averages.apply(tvars)
-
- return train_op
-
-
-def _summarize_vars_and_grads(grads_and_vars):
- tf.logging.info('Trainable variables:')
- tf.logging.info('-' * 60)
- for grad, var in grads_and_vars:
- tf.logging.info(var)
-
- def tag(name, v=var):
- return v.op.name + '_' + name
-
- # Variable summary
- mean = tf.reduce_mean(var)
- tf.summary.scalar(tag('mean'), mean)
- with tf.name_scope(tag('stddev')):
- stddev = tf.sqrt(tf.reduce_mean(tf.square(var - mean)))
- tf.summary.scalar(tag('stddev'), stddev)
- tf.summary.scalar(tag('max'), tf.reduce_max(var))
- tf.summary.scalar(tag('min'), tf.reduce_min(var))
- tf.summary.histogram(tag('histogram'), var)
-
- # Gradient summary
- if grad is not None:
- if isinstance(grad, tf.IndexedSlices):
- grad_values = grad.values
- else:
- grad_values = grad
-
- tf.summary.histogram(tag('gradient'), grad_values)
- tf.summary.scalar(tag('gradient_norm'), tf.global_norm([grad_values]))
- else:
- tf.logging.info('Var %s has no gradient', var.op.name)
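A quick TF 1.x usage sketch of the classifier head above: `cl_logits_subgraph` stacks ReLU layers with dropout and ends in a linear layer with a single unit for binary tasks. Shapes below are illustrative, and the import assumes this module is on the path as `layers`.

```python
# Sketch (TF 1.x, matching the module): build and apply the classifier head.
import tensorflow as tf

from layers import cl_logits_subgraph  # this module

logits_fn = cl_logits_subgraph(layer_sizes=[128, 64], input_size=256,
                               num_classes=2, keep_prob=0.9)
features = tf.random_normal([32, 256])  # e.g. final LSTM states for a batch
logits = logits_fn(features)            # shape (32, 1) since num_classes == 2
print(logits.get_shape().as_list())     # [32, 1]
```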
diff --git a/spaces/NCTCMumbai/NCTC/models/research/attention_ocr/python/metrics.py b/spaces/NCTCMumbai/NCTC/models/research/attention_ocr/python/metrics.py
deleted file mode 100644
index 9e2a6a7579812583dc60546f97976f05befe07ff..0000000000000000000000000000000000000000
--- a/spaces/NCTCMumbai/NCTC/models/research/attention_ocr/python/metrics.py
+++ /dev/null
@@ -1,90 +0,0 @@
-# Copyright 2017 The TensorFlow Authors All Rights Reserved.
-#
-# Licensed under the Apache License, Version 2.0 (the "License");
-# you may not use this file except in compliance with the License.
-# You may obtain a copy of the License at
-#
-# http://www.apache.org/licenses/LICENSE-2.0
-#
-# Unless required by applicable law or agreed to in writing, software
-# distributed under the License is distributed on an "AS IS" BASIS,
-# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
-# See the License for the specific language governing permissions and
-# limitations under the License.
-# ==============================================================================
-
-"""Quality metrics for the model."""
-
-import tensorflow as tf
-
-
-def char_accuracy(predictions, targets, rej_char, streaming=False):
- """Computes character level accuracy.
-
- Both predictions and targets should have the same shape
- [batch_size x seq_length].
-
- Args:
- predictions: predicted characters ids.
- targets: ground truth character ids.
- rej_char: the character id used to mark an empty element (end of sequence).
- streaming: if True, uses the streaming mean from the slim.metric module.
-
- Returns:
- an update_op for execution and a value tensor whose value on evaluation
- returns the total character accuracy.
- """
- with tf.variable_scope('CharAccuracy'):
- predictions.get_shape().assert_is_compatible_with(targets.get_shape())
-
- targets = tf.to_int32(targets)
- const_rej_char = tf.constant(rej_char, shape=targets.get_shape())
- weights = tf.to_float(tf.not_equal(targets, const_rej_char))
- correct_chars = tf.to_float(tf.equal(predictions, targets))
- accuracy_per_example = tf.div(
- tf.reduce_sum(tf.multiply(correct_chars, weights), 1),
- tf.reduce_sum(weights, 1))
- if streaming:
- return tf.contrib.metrics.streaming_mean(accuracy_per_example)
- else:
- return tf.reduce_mean(accuracy_per_example)
-
-
-def sequence_accuracy(predictions, targets, rej_char, streaming=False):
- """Computes sequence level accuracy.
-
- Both input tensors should have the same shape: [batch_size x seq_length].
-
- Args:
- predictions: predicted character classes.
- targets: ground truth character classes.
- rej_char: the character id used to mark empty element (end of sequence).
- streaming: if True, uses the streaming mean from the slim.metric module.
-
- Returns:
-    if streaming is True, a (value, update_op) tuple from the streaming mean;
-    otherwise a scalar tensor whose value is the mean sequence accuracy.
- """
-
- with tf.variable_scope('SequenceAccuracy'):
- predictions.get_shape().assert_is_compatible_with(targets.get_shape())
-
- targets = tf.to_int32(targets)
- const_rej_char = tf.constant(
- rej_char, shape=targets.get_shape(), dtype=tf.int32)
- include_mask = tf.not_equal(targets, const_rej_char)
- include_predictions = tf.to_int32(
- tf.where(include_mask, predictions,
- tf.zeros_like(predictions) + rej_char))
- correct_chars = tf.to_float(tf.equal(include_predictions, targets))
- correct_chars_counts = tf.cast(
- tf.reduce_sum(correct_chars, reduction_indices=[1]), dtype=tf.int32)
- target_length = targets.get_shape().dims[1].value
- target_chars_counts = tf.constant(
- target_length, shape=correct_chars_counts.get_shape())
- accuracy_per_example = tf.to_float(
- tf.equal(correct_chars_counts, target_chars_counts))
- if streaming:
- return tf.contrib.metrics.streaming_mean(accuracy_per_example)
- else:
- return tf.reduce_mean(accuracy_per_example)
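A small usage sketch for the two metrics above (a TF1 runtime with tf.Session is assumed; the toy tensors are illustrative). With rej_char=0 marking padding, the first row matches fully and the second matches one of its two real characters:

import tensorflow as tf

predictions = tf.constant([[3, 5, 0], [7, 7, 0]], dtype=tf.int32)
targets = tf.constant([[3, 5, 0], [7, 2, 0]], dtype=tf.int32)

char_acc = char_accuracy(predictions, targets, rej_char=0)  # (1.0 + 0.5) / 2
seq_acc = sequence_accuracy(predictions, targets, rej_char=0)  # 1 of 2 exact

with tf.Session() as sess:
    print(sess.run([char_acc, seq_acc]))  # [0.75, 0.5]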
diff --git a/spaces/OFA-Sys/OFA-Generic_Interface/fairseq/tests/test_multi_corpus_dataset.py b/spaces/OFA-Sys/OFA-Generic_Interface/fairseq/tests/test_multi_corpus_dataset.py
deleted file mode 100644
index 5a79f4b680e5bc2c7374ec6dd8ea525c47b40985..0000000000000000000000000000000000000000
--- a/spaces/OFA-Sys/OFA-Generic_Interface/fairseq/tests/test_multi_corpus_dataset.py
+++ /dev/null
@@ -1,79 +0,0 @@
-# Copyright (c) Facebook, Inc. and its affiliates.
-#
-# This source code is licensed under the MIT license found in the
-# LICENSE file in the root directory of this source tree.
-
-import unittest
-from collections import OrderedDict
-
-import torch
-from fairseq.data import LanguagePairDataset, TokenBlockDataset
-from fairseq.data.multi_corpus_dataset import MultiCorpusDataset
-from tests.test_train import mock_dict
-
-
-class TestMultiCorpusDataset(unittest.TestCase):
- def setUp(self):
- d = mock_dict()
- tokens_1 = torch.LongTensor([i for i in range(1, 5000, 2)]).view(1, -1)
- tokens_ds1 = TokenBlockDataset(
- tokens_1,
- sizes=[tokens_1.size(-1)],
- block_size=1,
- pad=0,
- eos=1,
- include_targets=False,
- )
- self.dataset_1 = LanguagePairDataset(
- tokens_ds1, tokens_ds1.sizes, d, shuffle=False
- )
- tokens_2 = torch.LongTensor([i for i in range(0, 5000, 2)]).view(1, -1)
- tokens_ds2 = TokenBlockDataset(
- tokens_2,
- sizes=[tokens_2.size(-1)],
- block_size=1,
- pad=0,
- eos=1,
- include_targets=False,
- )
- self.dataset_2 = LanguagePairDataset(
- tokens_ds2, tokens_ds2.sizes, d, shuffle=False
- )
-
- def _test_sample_helper(
- self,
- distribution,
- ):
- m = MultiCorpusDataset(
- OrderedDict({0: self.dataset_1, 1: self.dataset_2}),
- distribution=distribution,
- seed=0,
- sort_indices=True,
- )
- m.set_epoch(1)
- indices = m.ordered_indices()
- count_sample_from_first_dataset = 0
- items = set()
- for i in indices:
- item = m[i]["source"].item()
- if item % 2 == 1:
- count_sample_from_first_dataset += 1
-
- items.add(item)
- sample_from_first_ds_percentage = (
- 1.0 * count_sample_from_first_dataset / len(indices)
- )
- self.assertLess(
- abs(sample_from_first_ds_percentage - distribution[0]),
- 0.01,
- )
- self.assertEqual(
- len(items),
-            int(min(len(self.dataset_1), len(indices) * distribution[0])
-                + min(len(self.dataset_2), len(indices) * distribution[1]))
- )
- print(distribution)
-
- def test_multi_corpus_dataset(self):
- for distribution in [[0.5, 0.5], [0.1, 0.9], [0.9, 0.1]]:
- self._test_sample_helper(distribution=distribution)
diff --git a/spaces/OFA-Sys/OFA-vqa/utils/eval_utils.py b/spaces/OFA-Sys/OFA-vqa/utils/eval_utils.py
deleted file mode 100644
index f84008d24aedf755f0c3b8c0888dcc8ca1dabbf4..0000000000000000000000000000000000000000
--- a/spaces/OFA-Sys/OFA-vqa/utils/eval_utils.py
+++ /dev/null
@@ -1,136 +0,0 @@
-import string
-import math
-
-import torch
-
-from data import data_utils
-
-
-def get_symbols_to_strip_from_output(generator):
- if hasattr(generator, "symbols_to_strip_from_output"):
- return generator.symbols_to_strip_from_output
- else:
- return {generator.bos, generator.eos}
-
-
-def decode_fn(x, tgt_dict, bpe, generator, tokenizer=None):
- x = tgt_dict.string(x.int().cpu(), extra_symbols_to_ignore=get_symbols_to_strip_from_output(generator))
- if bpe is not None:
- x = bpe.decode(x)
- if tokenizer is not None:
- x = tokenizer.decode(x)
- return x
-
-
-def eval_caption(task, generator, models, sample):
- transtab = str.maketrans({key: None for key in string.punctuation})
- hypos = task.inference_step(generator, models, sample)
- results = []
- for i, sample_id in enumerate(sample["id"].tolist()):
- detok_hypo_str = decode_fn(hypos[i][0]["tokens"], task.tgt_dict, task.bpe, generator)
- results.append({"image_id": str(sample_id), "caption": detok_hypo_str.translate(transtab).strip()})
- return results, None
-
-
-def eval_vqa_gen(task, generator, models, sample):
- encoder_out = models[0].encoder(
- sample["net_input"]["src_tokens"],
- src_lengths=sample["net_input"]["src_lengths"],
- patch_images=sample["net_input"]["patch_images"],
- patch_masks=sample["net_input"]["patch_masks"]
- )
- device = sample["net_input"]["src_tokens"].device
- eos_item = torch.tensor([task.src_dict.eos()])
- pad = task.src_dict.pad()
- valid_result = []
-    # Score each candidate answer with the decoder under its constraint mask,
-    # then pick the highest-scoring answer (closed-vocabulary evaluation).
-    for valid_answers, valid_constraint_masks in zip(task.valid_answers_list, task.valid_constraint_masks_list):
- valid_size = len(valid_answers)
- valid_tgt_items = [
- torch.cat([torch.tensor(decoder_prompt[1:]), valid_answer, eos_item])
- for decoder_prompt in sample["decoder_prompts"] for valid_answer in valid_answers
- ]
- valid_prev_items = [
- torch.cat([torch.tensor(decoder_prompt), valid_answer])
- for decoder_prompt in sample["decoder_prompts"] for valid_answer in valid_answers
- ]
- valid_constraint_mask_items = [
- torch.cat(
- [torch.zeros(len(decoder_prompt) - 1, valid_constraint_mask.size(1)).bool(), valid_constraint_mask],
- dim=0
- )
- for decoder_prompt in sample["decoder_prompts"] for valid_constraint_mask in valid_constraint_masks
- ]
- valid_tgt = data_utils.collate_tokens(valid_tgt_items, pad_idx=pad).to(device)
- valid_prev_output = data_utils.collate_tokens(valid_prev_items, pad_idx=pad).to(device)
- valid_constraint_masks = data_utils.collate_tokens(valid_constraint_mask_items, pad_idx=pad).to(device)
-
- new_encoder_out = {}
- new_encoder_out["encoder_out"] = [
- encoder_out["encoder_out"][0].repeat_interleave(valid_size, dim=1)
- ]
- new_encoder_out["encoder_padding_mask"] = [
- encoder_out["encoder_padding_mask"][0].repeat_interleave(valid_size, dim=0)
- ]
- new_encoder_out["position_embeddings"] = [
- encoder_out["position_embeddings"][0].repeat_interleave(valid_size, dim=0)
- ]
-
- decoder_out = models[0].decoder(valid_prev_output, encoder_out=new_encoder_out)
- decoder_out[0].masked_fill_(~valid_constraint_masks, -math.inf)
- lprobs = models[0].get_normalized_probs(decoder_out, log_probs=True)
- scores = lprobs.gather(dim=-1, index=valid_tgt.unsqueeze(-1)).squeeze(-1)
- scores = scores.masked_fill(valid_tgt.eq(task.tgt_dict.pad()), 0)
- scores = scores.masked_fill((~valid_constraint_masks).all(2), 0)
- scores = scores.sum(1)
- scores = scores.view(-1, valid_size)
- valid_result.append(scores)
- valid_result = torch.cat(valid_result, dim=-1)
- predicts = valid_result.argmax(1).tolist()
- hyps = [task.index2ans[predict_index] for predict_index in predicts]
- results = [{"question_id": int(id), "answer": hyp} for id, hyp in zip(sample["id"].tolist(), hyps)]
- scores = [ref_dict.get(hyp, 0) for ref_dict, hyp in zip(sample['ref_dict'], hyps)]
- return results, scores
-
-
-def eval_refcoco(task, generator, models, sample):
- def _calculate_ap_score(hyps, refs, thresh=0.5):
-        # Intersection per pair: max of the top-left corners, min of the
-        # bottom-right corners; boxes are [x0, y0, x1, y1].
-        interacts = torch.cat(
- [torch.where(hyps[:, :2] < refs[:, :2], refs[:, :2], hyps[:, :2]),
- torch.where(hyps[:, 2:] < refs[:, 2:], hyps[:, 2:], refs[:, 2:])],
- dim=1
- )
- area_predictions = (hyps[:, 2] - hyps[:, 0]) * (hyps[:, 3] - hyps[:, 1])
- area_targets = (refs[:, 2] - refs[:, 0]) * (refs[:, 3] - refs[:, 1])
- interacts_w = interacts[:, 2] - interacts[:, 0]
- interacts_h = interacts[:, 3] - interacts[:, 1]
- area_interacts = interacts_w * interacts_h
- ious = area_interacts / (area_predictions + area_targets - area_interacts + 1e-6)
- return ((ious >= thresh) & (interacts_w > 0) & (interacts_h > 0)).float()
-
- gen_out = task.inference_step(generator, models, sample)
- hyps = []
- for i in range(len(gen_out)):
- hyps.append(gen_out[i][0]["tokens"][:-1] - len(task.src_dict) + task.cfg.num_bins)
- hyps = torch.stack(hyps, dim=0)
- hyps = hyps / (task.cfg.num_bins - 1) * task.cfg.max_image_size
- hyps[:, ::2] /= sample['w_resize_ratios'].unsqueeze(1)
- hyps[:, 1::2] /= sample['h_resize_ratios'].unsqueeze(1)
-
- results = [
- {"uniq_id": sample_id,
- "box": [hyps[i][0].item(), hyps[i][1].item(), hyps[i][2].item(), hyps[i][3].item()]}
- for i, sample_id in enumerate(sample["id"].tolist())
- ]
- scores = _calculate_ap_score(hyps, sample['region_coords'].float())
- return results, scores
-
-
-def eval_step(task, generator, models, sample):
- if task.cfg._name == 'caption':
- return eval_caption(task, generator, models, sample)
- elif task.cfg._name == 'vqa_gen':
- return eval_vqa_gen(task, generator, models, sample)
- elif task.cfg._name == 'refcoco':
- return eval_refcoco(task, generator, models, sample)
- else:
- raise NotImplementedError
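As a sanity check on the IoU arithmetic inside _calculate_ap_score, here is a standalone restatement with hypothetical toy boxes (a sketch, not OFA code); boxes are [x0, y0, x1, y1]:

import torch

hyps = torch.tensor([[0., 0., 10., 10.],
                     [0., 0., 10., 10.]])
refs = torch.tensor([[0., 0., 10., 10.],     # identical box -> IoU 1.0
                     [20., 20., 30., 30.]])  # disjoint box  -> IoU 0.0

inter_tl = torch.max(hyps[:, :2], refs[:, :2])  # top-left of intersection
inter_br = torch.min(hyps[:, 2:], refs[:, 2:])  # bottom-right of intersection
wh = (inter_br - inter_tl).clamp(min=0)
inter = wh[:, 0] * wh[:, 1]
area_h = (hyps[:, 2] - hyps[:, 0]) * (hyps[:, 3] - hyps[:, 1])
area_r = (refs[:, 2] - refs[:, 0]) * (refs[:, 3] - refs[:, 1])
iou = inter / (area_h + area_r - inter + 1e-6)
print((iou >= 0.5).float())  # tensor([1., 0.])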
diff --git a/spaces/OpenDILabCommunity/LLMRiddlesChatGPTCN/llmriddles/questions/executor.py b/spaces/OpenDILabCommunity/LLMRiddlesChatGPTCN/llmriddles/questions/executor.py
deleted file mode 100644
index 61dafa769808626ef0f179fed4f6bf45979e8252..0000000000000000000000000000000000000000
--- a/spaces/OpenDILabCommunity/LLMRiddlesChatGPTCN/llmriddles/questions/executor.py
+++ /dev/null
@@ -1,35 +0,0 @@
-from typing import Tuple
-
-from .question import Question
-from ..llms import get_llm_fn
-
-
-class QuestionExecutor:
- def __init__(self, question: Question, lang: str = 'cn', llm: str = 'chatgpt', llm_cfgs=None):
- self.question = question
- self.lang = lang
- self.llm = llm
- self.llm_cfgs = dict(llm_cfgs or {})
-
- @property
- def question_text(self):
- return self.question.texts[self.lang]
-
- @property
- def question_name(self):
- return self.question.names[self.lang]
-
- def check(self, qs_text: str) -> Tuple[str, bool, str]:
- answer_text = get_llm_fn(self.llm)(qs_text, **self.llm_cfgs)
- correct, explanation = self.check_answer(qs_text, answer_text)
- return answer_text, correct, explanation
-
- def check_answer(self, user_text: str, answer_text: str) -> Tuple[bool, str]:
- correct, explanation = self.question.checker(self.question_text, user_text, answer_text, self.lang)
- if explanation is None:
- if correct:
- explanation = 'LLM的回答满足要求' if self.lang == 'cn' else 'Correct Answer From LLM'
- else:
- explanation = 'LLM的回答不满足要求' if self.lang == 'cn' else 'Wrong Answer From LLM'
-
- return correct, explanation
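A hedged walkthrough of check_answer, assuming QuestionExecutor is importable; the stub below only mimics the attributes QuestionExecutor touches and is not the real Question class:

from types import SimpleNamespace

def _checker(question_text, user_text, answer_text, lang):
    # Toy rule (assumed): the LLM answer must contain the word "hello".
    return "hello" in answer_text.lower(), None

stub = SimpleNamespace(
    texts={"en": "Make the model say hello."},
    names={"en": "Hello Riddle"},
    checker=_checker,
)
executor = QuestionExecutor(stub, lang="en")
print(executor.check_answer("say hi", "Hello there!"))
# -> (True, 'Correct Answer From LLM')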
diff --git a/spaces/OpenGVLab/InternGPT/iGPT/models/grit_src/third_party/CenterNet2/detectron2/modeling/roi_heads/box_head.py b/spaces/OpenGVLab/InternGPT/iGPT/models/grit_src/third_party/CenterNet2/detectron2/modeling/roi_heads/box_head.py
deleted file mode 100644
index 5d0370b0400d9268f13c905e4096a84ce42e9bfd..0000000000000000000000000000000000000000
--- a/spaces/OpenGVLab/InternGPT/iGPT/models/grit_src/third_party/CenterNet2/detectron2/modeling/roi_heads/box_head.py
+++ /dev/null
@@ -1,118 +0,0 @@
-# Copyright (c) Facebook, Inc. and its affiliates.
-import numpy as np
-from typing import List
-import fvcore.nn.weight_init as weight_init
-import torch
-from torch import nn
-
-from detectron2.config import configurable
-from detectron2.layers import Conv2d, ShapeSpec, get_norm
-from detectron2.utils.registry import Registry
-
-__all__ = ["FastRCNNConvFCHead", "build_box_head", "ROI_BOX_HEAD_REGISTRY"]
-
-ROI_BOX_HEAD_REGISTRY = Registry("ROI_BOX_HEAD")
-ROI_BOX_HEAD_REGISTRY.__doc__ = """
-Registry for box heads, which make box predictions from per-region features.
-
-The registered object will be called with `obj(cfg, input_shape)`.
-"""
-
-
-# To get torchscript support, we make the head a subclass of `nn.Sequential`.
-# Therefore, to add new layers in this head class, please make sure they are
-# added in the order they will be used in forward().
-@ROI_BOX_HEAD_REGISTRY.register()
-class FastRCNNConvFCHead(nn.Sequential):
- """
- A head with several 3x3 conv layers (each followed by norm & relu) and then
- several fc layers (each followed by relu).
- """
-
- @configurable
- def __init__(
- self, input_shape: ShapeSpec, *, conv_dims: List[int], fc_dims: List[int], conv_norm=""
- ):
- """
- NOTE: this interface is experimental.
-
- Args:
- input_shape (ShapeSpec): shape of the input feature.
- conv_dims (list[int]): the output dimensions of the conv layers
- fc_dims (list[int]): the output dimensions of the fc layers
- conv_norm (str or callable): normalization for the conv layers.
- See :func:`detectron2.layers.get_norm` for supported types.
- """
- super().__init__()
- assert len(conv_dims) + len(fc_dims) > 0
-
- self._output_size = (input_shape.channels, input_shape.height, input_shape.width)
-
- self.conv_norm_relus = []
- for k, conv_dim in enumerate(conv_dims):
- conv = Conv2d(
- self._output_size[0],
- conv_dim,
- kernel_size=3,
- padding=1,
- bias=not conv_norm,
- norm=get_norm(conv_norm, conv_dim),
- activation=nn.ReLU(),
- )
- self.add_module("conv{}".format(k + 1), conv)
- self.conv_norm_relus.append(conv)
- self._output_size = (conv_dim, self._output_size[1], self._output_size[2])
-
- self.fcs = []
- for k, fc_dim in enumerate(fc_dims):
- if k == 0:
- self.add_module("flatten", nn.Flatten())
- fc = nn.Linear(int(np.prod(self._output_size)), fc_dim)
- self.add_module("fc{}".format(k + 1), fc)
- self.add_module("fc_relu{}".format(k + 1), nn.ReLU())
- self.fcs.append(fc)
- self._output_size = fc_dim
-
- for layer in self.conv_norm_relus:
- weight_init.c2_msra_fill(layer)
- for layer in self.fcs:
- weight_init.c2_xavier_fill(layer)
-
- @classmethod
- def from_config(cls, cfg, input_shape):
- num_conv = cfg.MODEL.ROI_BOX_HEAD.NUM_CONV
- conv_dim = cfg.MODEL.ROI_BOX_HEAD.CONV_DIM
- num_fc = cfg.MODEL.ROI_BOX_HEAD.NUM_FC
- fc_dim = cfg.MODEL.ROI_BOX_HEAD.FC_DIM
- return {
- "input_shape": input_shape,
- "conv_dims": [conv_dim] * num_conv,
- "fc_dims": [fc_dim] * num_fc,
- "conv_norm": cfg.MODEL.ROI_BOX_HEAD.NORM,
- }
-
- def forward(self, x):
- for layer in self:
- x = layer(x)
- return x
-
- @property
- @torch.jit.unused
- def output_shape(self):
- """
- Returns:
- ShapeSpec: the output feature shape
- """
- o = self._output_size
- if isinstance(o, int):
- return ShapeSpec(channels=o)
- else:
- return ShapeSpec(channels=o[0], height=o[1], width=o[2])
-
-
-def build_box_head(cfg, input_shape):
- """
- Build a box head defined by `cfg.MODEL.ROI_BOX_HEAD.NAME`.
- """
- name = cfg.MODEL.ROI_BOX_HEAD.NAME
- return ROI_BOX_HEAD_REGISTRY.get(name)(cfg, input_shape)
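Since the head is @configurable, it can also be built directly without a cfg; a sketch with illustrative shapes (two pooled 256x7x7 RoI features):

import torch
from detectron2.layers import ShapeSpec

head = FastRCNNConvFCHead(
    ShapeSpec(channels=256, height=7, width=7),
    conv_dims=[256, 256],  # two 3x3 convs
    fc_dims=[1024],        # one fc layer
)
x = torch.randn(2, 256, 7, 7)
print(head(x).shape)      # torch.Size([2, 1024])
print(head.output_shape)  # ShapeSpec(channels=1024, ...)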
diff --git a/spaces/OpenGVLab/InternGPT/iGPT/models/grit_src/third_party/CenterNet2/tests/config/test_instantiate_config.py b/spaces/OpenGVLab/InternGPT/iGPT/models/grit_src/third_party/CenterNet2/tests/config/test_instantiate_config.py
deleted file mode 100644
index b76f71b9a206cb59006765803c96713cb990d22c..0000000000000000000000000000000000000000
--- a/spaces/OpenGVLab/InternGPT/iGPT/models/grit_src/third_party/CenterNet2/tests/config/test_instantiate_config.py
+++ /dev/null
@@ -1,100 +0,0 @@
-# Copyright (c) Facebook, Inc. and its affiliates.
-
-import os
-import tempfile
-import unittest
-import yaml
-from omegaconf import OmegaConf
-from omegaconf import __version__ as oc_version
-from dataclasses import dataclass
-
-from detectron2.config import instantiate, LazyCall as L
-from detectron2.layers import ShapeSpec
-
-OC_VERSION = tuple(int(x) for x in oc_version.split(".")[:2])
-
-
-class TestClass:
- def __init__(self, int_arg, list_arg=None, dict_arg=None, extra_arg=None):
- self.int_arg = int_arg
- self.list_arg = list_arg
- self.dict_arg = dict_arg
- self.extra_arg = extra_arg
-
- def __call__(self, call_arg):
- return call_arg + self.int_arg
-
-
-@dataclass
-class TestDataClass:
- x: int
- y: str
-
-
-@unittest.skipIf(OC_VERSION < (2, 1), "omegaconf version too old")
-class TestConstruction(unittest.TestCase):
- def test_basic_construct(self):
- objconf = L(TestClass)(
- int_arg=3,
- list_arg=[10],
- dict_arg={},
- extra_arg=L(TestClass)(int_arg=4, list_arg="${..list_arg}"),
- )
-
- obj = instantiate(objconf)
- self.assertIsInstance(obj, TestClass)
- self.assertEqual(obj.int_arg, 3)
- self.assertEqual(obj.extra_arg.int_arg, 4)
- self.assertEqual(obj.extra_arg.list_arg, obj.list_arg)
-
- objconf.extra_arg.list_arg = [5]
- obj = instantiate(objconf)
- self.assertIsInstance(obj, TestClass)
- self.assertEqual(obj.extra_arg.list_arg, [5])
-
- def test_instantiate_other_obj(self):
- # do nothing for other obj
- self.assertEqual(instantiate(5), 5)
- x = [3, 4, 5]
- self.assertEqual(instantiate(x), x)
- x = TestClass(1)
- self.assertIs(instantiate(x), x)
- x = {"xx": "yy"}
- self.assertIs(instantiate(x), x)
-
- def test_instantiate_lazy_target(self):
- # _target_ is result of instantiate
- objconf = L(L(len)(int_arg=3))(call_arg=4)
- objconf._target_._target_ = TestClass
- self.assertEqual(instantiate(objconf), 7)
-
- def test_instantiate_lst(self):
- lst = [1, 2, L(TestClass)(int_arg=1)]
- x = L(TestClass)(int_arg=lst) # list as an argument should be recursively instantiated
- x = instantiate(x).int_arg
- self.assertEqual(x[:2], [1, 2])
- self.assertIsInstance(x[2], TestClass)
- self.assertEqual(x[2].int_arg, 1)
-
- def test_instantiate_namedtuple(self):
- x = L(TestClass)(int_arg=ShapeSpec(channels=1, width=3))
- # test serialization
- with tempfile.TemporaryDirectory() as d:
- fname = os.path.join(d, "d2_test.yaml")
- OmegaConf.save(x, fname)
- with open(fname) as f:
- x = yaml.unsafe_load(f)
-
- x = instantiate(x)
- self.assertIsInstance(x.int_arg, ShapeSpec)
- self.assertEqual(x.int_arg.channels, 1)
-
- def test_bad_lazycall(self):
- with self.assertRaises(Exception):
- L(3)
-
- def test_instantiate_dataclass(self):
- a = L(TestDataClass)(x=1, y="s")
- a = instantiate(a)
- self.assertEqual(a.x, 1)
- self.assertEqual(a.y, "s")
diff --git a/spaces/OpenGVLab/InternGPT/third-party/lama/bin/paper_runfiles/generate_test_ffhq.sh b/spaces/OpenGVLab/InternGPT/third-party/lama/bin/paper_runfiles/generate_test_ffhq.sh
deleted file mode 100644
index a1b79cb0f3f710eed21a978c3a1489ca830bb7f8..0000000000000000000000000000000000000000
--- a/spaces/OpenGVLab/InternGPT/third-party/lama/bin/paper_runfiles/generate_test_ffhq.sh
+++ /dev/null
@@ -1,17 +0,0 @@
-#!/usr/bin/env bash
-
-# paths to data are valid for mml-ws01
-OUT_DIR="/media/inpainting/paper_data/FFHQ_val"
-
-source "$(dirname $0)/env.sh"
-
-for datadir in test
-do
- for conf in random_thin_256 random_medium_256 random_thick_256 random_thin_512 random_medium_512 random_thick_512
- do
- "$BINDIR/gen_mask_dataset_hydra.py" -cn $conf datadir=$datadir location=mml-ws01-ffhq \
- location.out_dir=$OUT_DIR cropping.out_square_crop=False
-
- "$BINDIR/calc_dataset_stats.py" --samples-n 20 "$OUT_DIR/$datadir/$conf" "$OUT_DIR/$datadir/${conf}_stats"
- done
-done
diff --git a/spaces/OpenGVLab/InternGPT/third-party/lama/models/ade20k/segm_lib/utils/data/dataloader.py b/spaces/OpenGVLab/InternGPT/third-party/lama/models/ade20k/segm_lib/utils/data/dataloader.py
deleted file mode 100644
index 039b9ec3645b2a4626ff47c221e372f32a6ad339..0000000000000000000000000000000000000000
--- a/spaces/OpenGVLab/InternGPT/third-party/lama/models/ade20k/segm_lib/utils/data/dataloader.py
+++ /dev/null
@@ -1,425 +0,0 @@
-import torch
-import torch.multiprocessing as multiprocessing
-from torch._C import _set_worker_signal_handlers, \
- _remove_worker_pids, _error_if_any_worker_fails
-try:
- from torch._C import _set_worker_pids
-except ImportError:
- from torch._C import _update_worker_pids as _set_worker_pids
-from .sampler import SequentialSampler, RandomSampler, BatchSampler
-import signal
-import collections
-import re
-import sys
-import threading
-import traceback
-from torch._six import string_classes, int_classes
-import numpy as np
-
-if sys.version_info[0] == 2:
- import Queue as queue
-else:
- import queue
-
-
-class ExceptionWrapper(object):
- r"Wraps an exception plus traceback to communicate across threads"
-
- def __init__(self, exc_info):
- self.exc_type = exc_info[0]
- self.exc_msg = "".join(traceback.format_exception(*exc_info))
-
-
-_use_shared_memory = False
-"""Whether to use shared memory in default_collate"""
-
-
-def _worker_loop(dataset, index_queue, data_queue, collate_fn, seed, init_fn, worker_id):
- global _use_shared_memory
- _use_shared_memory = True
-
-    # Initialize C-side signal handlers for SIGBUS and SIGSEGV. Python signal
- # module's handlers are executed after Python returns from C low-level
- # handlers, likely when the same fatal signal happened again already.
- # https://docs.python.org/3/library/signal.html Sec. 18.8.1.1
- _set_worker_signal_handlers()
-
- torch.set_num_threads(1)
- torch.manual_seed(seed)
- np.random.seed(seed)
-
- if init_fn is not None:
- init_fn(worker_id)
-
- while True:
- r = index_queue.get()
- if r is None:
- break
- idx, batch_indices = r
- try:
- samples = collate_fn([dataset[i] for i in batch_indices])
- except Exception:
- data_queue.put((idx, ExceptionWrapper(sys.exc_info())))
- else:
- data_queue.put((idx, samples))
-
-
-def _worker_manager_loop(in_queue, out_queue, done_event, pin_memory, device_id):
- if pin_memory:
- torch.cuda.set_device(device_id)
-
- while True:
- try:
- r = in_queue.get()
- except Exception:
- if done_event.is_set():
- return
- raise
- if r is None:
- break
- if isinstance(r[1], ExceptionWrapper):
- out_queue.put(r)
- continue
- idx, batch = r
- try:
- if pin_memory:
- batch = pin_memory_batch(batch)
- except Exception:
- out_queue.put((idx, ExceptionWrapper(sys.exc_info())))
- else:
- out_queue.put((idx, batch))
-
-numpy_type_map = {
- 'float64': torch.DoubleTensor,
- 'float32': torch.FloatTensor,
- 'float16': torch.HalfTensor,
- 'int64': torch.LongTensor,
- 'int32': torch.IntTensor,
- 'int16': torch.ShortTensor,
- 'int8': torch.CharTensor,
- 'uint8': torch.ByteTensor,
-}
-
-
-def default_collate(batch):
- "Puts each data field into a tensor with outer dimension batch size"
-
- error_msg = "batch must contain tensors, numbers, dicts or lists; found {}"
- elem_type = type(batch[0])
- if torch.is_tensor(batch[0]):
- out = None
- if _use_shared_memory:
- # If we're in a background process, concatenate directly into a
- # shared memory tensor to avoid an extra copy
- numel = sum([x.numel() for x in batch])
- storage = batch[0].storage()._new_shared(numel)
- out = batch[0].new(storage)
- return torch.stack(batch, 0, out=out)
- elif elem_type.__module__ == 'numpy' and elem_type.__name__ != 'str_' \
- and elem_type.__name__ != 'string_':
- elem = batch[0]
- if elem_type.__name__ == 'ndarray':
- # array of string classes and object
- if re.search('[SaUO]', elem.dtype.str) is not None:
- raise TypeError(error_msg.format(elem.dtype))
-
- return torch.stack([torch.from_numpy(b) for b in batch], 0)
- if elem.shape == (): # scalars
- py_type = float if elem.dtype.name.startswith('float') else int
- return numpy_type_map[elem.dtype.name](list(map(py_type, batch)))
- elif isinstance(batch[0], int_classes):
- return torch.LongTensor(batch)
- elif isinstance(batch[0], float):
- return torch.DoubleTensor(batch)
- elif isinstance(batch[0], string_classes):
- return batch
- elif isinstance(batch[0], collections.Mapping):
- return {key: default_collate([d[key] for d in batch]) for key in batch[0]}
- elif isinstance(batch[0], collections.Sequence):
- transposed = zip(*batch)
- return [default_collate(samples) for samples in transposed]
-
- raise TypeError((error_msg.format(type(batch[0]))))
-
-
-def pin_memory_batch(batch):
- if torch.is_tensor(batch):
- return batch.pin_memory()
- elif isinstance(batch, string_classes):
- return batch
- elif isinstance(batch, collections.Mapping):
- return {k: pin_memory_batch(sample) for k, sample in batch.items()}
- elif isinstance(batch, collections.Sequence):
- return [pin_memory_batch(sample) for sample in batch]
- else:
- return batch
-
-
-_SIGCHLD_handler_set = False
-"""Whether SIGCHLD handler is set for DataLoader worker failures. Only one
-handler needs to be set for all DataLoaders in a process."""
-
-
-def _set_SIGCHLD_handler():
- # Windows doesn't support SIGCHLD handler
- if sys.platform == 'win32':
- return
- # can't set signal in child threads
- if not isinstance(threading.current_thread(), threading._MainThread):
- return
- global _SIGCHLD_handler_set
- if _SIGCHLD_handler_set:
- return
- previous_handler = signal.getsignal(signal.SIGCHLD)
- if not callable(previous_handler):
- previous_handler = None
-
- def handler(signum, frame):
- # This following call uses `waitid` with WNOHANG from C side. Therefore,
- # Python can still get and update the process status successfully.
- _error_if_any_worker_fails()
- if previous_handler is not None:
- previous_handler(signum, frame)
-
- signal.signal(signal.SIGCHLD, handler)
- _SIGCHLD_handler_set = True
-
-
-class DataLoaderIter(object):
- "Iterates once over the DataLoader's dataset, as specified by the sampler"
-
- def __init__(self, loader):
- self.dataset = loader.dataset
- self.collate_fn = loader.collate_fn
- self.batch_sampler = loader.batch_sampler
- self.num_workers = loader.num_workers
- self.pin_memory = loader.pin_memory and torch.cuda.is_available()
- self.timeout = loader.timeout
- self.done_event = threading.Event()
-
- self.sample_iter = iter(self.batch_sampler)
-
- if self.num_workers > 0:
- self.worker_init_fn = loader.worker_init_fn
- self.index_queue = multiprocessing.SimpleQueue()
- self.worker_result_queue = multiprocessing.SimpleQueue()
- self.batches_outstanding = 0
- self.worker_pids_set = False
- self.shutdown = False
- self.send_idx = 0
- self.rcvd_idx = 0
- self.reorder_dict = {}
-
- base_seed = torch.LongTensor(1).random_(0, 2**31-1)[0]
- self.workers = [
- multiprocessing.Process(
- target=_worker_loop,
- args=(self.dataset, self.index_queue, self.worker_result_queue, self.collate_fn,
- base_seed + i, self.worker_init_fn, i))
- for i in range(self.num_workers)]
-
- if self.pin_memory or self.timeout > 0:
- self.data_queue = queue.Queue()
- if self.pin_memory:
- maybe_device_id = torch.cuda.current_device()
- else:
- # do not initialize cuda context if not necessary
- maybe_device_id = None
- self.worker_manager_thread = threading.Thread(
- target=_worker_manager_loop,
- args=(self.worker_result_queue, self.data_queue, self.done_event, self.pin_memory,
- maybe_device_id))
- self.worker_manager_thread.daemon = True
- self.worker_manager_thread.start()
- else:
- self.data_queue = self.worker_result_queue
-
- for w in self.workers:
- w.daemon = True # ensure that the worker exits on process exit
- w.start()
-
- _set_worker_pids(id(self), tuple(w.pid for w in self.workers))
- _set_SIGCHLD_handler()
- self.worker_pids_set = True
-
- # prime the prefetch loop
- for _ in range(2 * self.num_workers):
- self._put_indices()
-
- def __len__(self):
- return len(self.batch_sampler)
-
- def _get_batch(self):
- if self.timeout > 0:
- try:
- return self.data_queue.get(timeout=self.timeout)
- except queue.Empty:
- raise RuntimeError('DataLoader timed out after {} seconds'.format(self.timeout))
- else:
- return self.data_queue.get()
-
- def __next__(self):
- if self.num_workers == 0: # same-process loading
- indices = next(self.sample_iter) # may raise StopIteration
- batch = self.collate_fn([self.dataset[i] for i in indices])
- if self.pin_memory:
- batch = pin_memory_batch(batch)
- return batch
-
- # check if the next sample has already been generated
- if self.rcvd_idx in self.reorder_dict:
- batch = self.reorder_dict.pop(self.rcvd_idx)
- return self._process_next_batch(batch)
-
- if self.batches_outstanding == 0:
- self._shutdown_workers()
- raise StopIteration
-
- while True:
- assert (not self.shutdown and self.batches_outstanding > 0)
- idx, batch = self._get_batch()
- self.batches_outstanding -= 1
- if idx != self.rcvd_idx:
- # store out-of-order samples
- self.reorder_dict[idx] = batch
- continue
- return self._process_next_batch(batch)
-
- next = __next__ # Python 2 compatibility
-
- def __iter__(self):
- return self
-
- def _put_indices(self):
- assert self.batches_outstanding < 2 * self.num_workers
- indices = next(self.sample_iter, None)
- if indices is None:
- return
- self.index_queue.put((self.send_idx, indices))
- self.batches_outstanding += 1
- self.send_idx += 1
-
- def _process_next_batch(self, batch):
- self.rcvd_idx += 1
- self._put_indices()
- if isinstance(batch, ExceptionWrapper):
- raise batch.exc_type(batch.exc_msg)
- return batch
-
- def __getstate__(self):
- # TODO: add limited pickling support for sharing an iterator
- # across multiple threads for HOGWILD.
- # Probably the best way to do this is by moving the sample pushing
- # to a separate thread and then just sharing the data queue
- # but signalling the end is tricky without a non-blocking API
- raise NotImplementedError("DataLoaderIterator cannot be pickled")
-
- def _shutdown_workers(self):
- try:
- if not self.shutdown:
- self.shutdown = True
- self.done_event.set()
- # if worker_manager_thread is waiting to put
- while not self.data_queue.empty():
- self.data_queue.get()
- for _ in self.workers:
- self.index_queue.put(None)
- # done_event should be sufficient to exit worker_manager_thread,
- # but be safe here and put another None
- self.worker_result_queue.put(None)
- finally:
- # removes pids no matter what
- if self.worker_pids_set:
- _remove_worker_pids(id(self))
- self.worker_pids_set = False
-
- def __del__(self):
- if self.num_workers > 0:
- self._shutdown_workers()
-
-
-class DataLoader(object):
- """
- Data loader. Combines a dataset and a sampler, and provides
- single- or multi-process iterators over the dataset.
-
- Arguments:
- dataset (Dataset): dataset from which to load the data.
- batch_size (int, optional): how many samples per batch to load
- (default: 1).
- shuffle (bool, optional): set to ``True`` to have the data reshuffled
- at every epoch (default: False).
- sampler (Sampler, optional): defines the strategy to draw samples from
- the dataset. If specified, ``shuffle`` must be False.
- batch_sampler (Sampler, optional): like sampler, but returns a batch of
- indices at a time. Mutually exclusive with batch_size, shuffle,
- sampler, and drop_last.
- num_workers (int, optional): how many subprocesses to use for data
- loading. 0 means that the data will be loaded in the main process.
- (default: 0)
- collate_fn (callable, optional): merges a list of samples to form a mini-batch.
- pin_memory (bool, optional): If ``True``, the data loader will copy tensors
- into CUDA pinned memory before returning them.
- drop_last (bool, optional): set to ``True`` to drop the last incomplete batch,
- if the dataset size is not divisible by the batch size. If ``False`` and
- the size of dataset is not divisible by the batch size, then the last batch
- will be smaller. (default: False)
- timeout (numeric, optional): if positive, the timeout value for collecting a batch
- from workers. Should always be non-negative. (default: 0)
- worker_init_fn (callable, optional): If not None, this will be called on each
- worker subprocess with the worker id (an int in ``[0, num_workers - 1]``) as
- input, after seeding and before data loading. (default: None)
-
- .. note:: By default, each worker will have its PyTorch seed set to
- ``base_seed + worker_id``, where ``base_seed`` is a long generated
- by main process using its RNG. You may use ``torch.initial_seed()`` to access
- this value in :attr:`worker_init_fn`, which can be used to set other seeds
- (e.g. NumPy) before data loading.
-
-    .. warning:: If ``spawn`` start method is used, :attr:`worker_init_fn` cannot be an
- unpicklable object, e.g., a lambda function.
- """
-
- def __init__(self, dataset, batch_size=1, shuffle=False, sampler=None, batch_sampler=None,
- num_workers=0, collate_fn=default_collate, pin_memory=False, drop_last=False,
- timeout=0, worker_init_fn=None):
- self.dataset = dataset
- self.batch_size = batch_size
- self.num_workers = num_workers
- self.collate_fn = collate_fn
- self.pin_memory = pin_memory
- self.drop_last = drop_last
- self.timeout = timeout
- self.worker_init_fn = worker_init_fn
-
- if timeout < 0:
- raise ValueError('timeout option should be non-negative')
-
- if batch_sampler is not None:
- if batch_size > 1 or shuffle or sampler is not None or drop_last:
- raise ValueError('batch_sampler is mutually exclusive with '
- 'batch_size, shuffle, sampler, and drop_last')
-
- if sampler is not None and shuffle:
- raise ValueError('sampler is mutually exclusive with shuffle')
-
- if self.num_workers < 0:
- raise ValueError('num_workers cannot be negative; '
- 'use num_workers=0 to disable multiprocessing.')
-
- if batch_sampler is None:
- if sampler is None:
- if shuffle:
- sampler = RandomSampler(dataset)
- else:
- sampler = SequentialSampler(dataset)
- batch_sampler = BatchSampler(sampler, batch_size, drop_last)
-
- self.sampler = sampler
- self.batch_sampler = batch_sampler
-
- def __iter__(self):
- return DataLoaderIter(self)
-
- def __len__(self):
- return len(self.batch_sampler)
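A minimal single-process usage sketch of this legacy loader (the toy dataset is an illustrative assumption; num_workers=0 keeps loading in the main process):

import torch

class SquaresDataset(object):
    def __len__(self):
        return 8
    def __getitem__(self, i):
        return torch.LongTensor([i * i])

loader = DataLoader(SquaresDataset(), batch_size=4, num_workers=0)
for batch in loader:
    print(batch.size())  # torch.Size([4, 1]), stacked by default_collate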
diff --git a/spaces/OptimalScale/Robin-33b/lmflow/pipeline/base_aligner.py b/spaces/OptimalScale/Robin-33b/lmflow/pipeline/base_aligner.py
deleted file mode 100644
index c2a640a5d7d68b4b7b917d485dde1395e23dc8a3..0000000000000000000000000000000000000000
--- a/spaces/OptimalScale/Robin-33b/lmflow/pipeline/base_aligner.py
+++ /dev/null
@@ -1,21 +0,0 @@
-#!/usr/bin/env python
-# coding=utf-8
-""" BaseTuner: a subclass of BasePipeline.
-"""
-
-from lmflow.pipeline.base_pipeline import BasePipeline
-
-
-class BaseAligner(BasePipeline):
- """ A subclass of BasePipeline which is alignable.
- """
- def __init__(self, *args, **kwargs):
- pass
-
- def _check_if_alignable(self, model, dataset, reward_model):
- # TODO: check if the model is alignable and dataset is compatible
- # TODO: add reward_model
- pass
-
- def align(self, model, dataset, reward_model):
- raise NotImplementedError(".align is not implemented")
diff --git a/spaces/PAIR/Text2Video-Zero/annotator/uniformer/configs/_base_/models/dmnet_r50-d8.py b/spaces/PAIR/Text2Video-Zero/annotator/uniformer/configs/_base_/models/dmnet_r50-d8.py
deleted file mode 100644
index d22ba52640bebd805b3b8d07025e276dfb023759..0000000000000000000000000000000000000000
--- a/spaces/PAIR/Text2Video-Zero/annotator/uniformer/configs/_base_/models/dmnet_r50-d8.py
+++ /dev/null
@@ -1,44 +0,0 @@
-# model settings
-norm_cfg = dict(type='SyncBN', requires_grad=True)
-model = dict(
- type='EncoderDecoder',
- pretrained='open-mmlab://resnet50_v1c',
- backbone=dict(
- type='ResNetV1c',
- depth=50,
- num_stages=4,
- out_indices=(0, 1, 2, 3),
- dilations=(1, 1, 2, 4),
- strides=(1, 2, 1, 1),
- norm_cfg=norm_cfg,
- norm_eval=False,
- style='pytorch',
- contract_dilation=True),
- decode_head=dict(
- type='DMHead',
- in_channels=2048,
- in_index=3,
- channels=512,
- filter_sizes=(1, 3, 5, 7),
- dropout_ratio=0.1,
- num_classes=19,
-        norm_cfg=norm_cfg,
- align_corners=False,
- loss_decode=dict(
- type='CrossEntropyLoss', use_sigmoid=False, loss_weight=1.0)),
- auxiliary_head=dict(
- type='FCNHead',
- in_channels=1024,
- in_index=2,
- channels=256,
- num_convs=1,
- concat_input=False,
- dropout_ratio=0.1,
- num_classes=19,
- norm_cfg=norm_cfg,
- align_corners=False,
- loss_decode=dict(
- type='CrossEntropyLoss', use_sigmoid=False, loss_weight=0.4)),
- # model training and testing settings
- train_cfg=dict(),
- test_cfg=dict(mode='whole'))
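For reference, a sketch of how such a _base_ model config is typically consumed (assuming stock mmcv/mmseg rather than this vendored copy; the file path is illustrative):

from mmcv import Config
from mmseg.models import build_segmentor

cfg = Config.fromfile('configs/_base_/models/dmnet_r50-d8.py')
model = build_segmentor(cfg.model)  # EncoderDecoder: DMHead + auxiliary FCNHead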
diff --git a/spaces/Pattr/DrumClassification/lilypond-2.24.2/lib/guile/2.2/ccache/scripts/read-rfc822.go b/spaces/Pattr/DrumClassification/lilypond-2.24.2/lib/guile/2.2/ccache/scripts/read-rfc822.go
deleted file mode 100644
index b46cfa8837d96d21960b6b9c67a2e7568adc9dcb..0000000000000000000000000000000000000000
Binary files a/spaces/Pattr/DrumClassification/lilypond-2.24.2/lib/guile/2.2/ccache/scripts/read-rfc822.go and /dev/null differ
diff --git a/spaces/Pattr/DrumClassification/lilypond-2.24.2/lib/lilypond/2.24.2/ccache/lily/backend-library.go b/spaces/Pattr/DrumClassification/lilypond-2.24.2/lib/lilypond/2.24.2/ccache/lily/backend-library.go
deleted file mode 100644
index fa1ef53a9f5b335a85ec88d11598371ea95ca313..0000000000000000000000000000000000000000
Binary files a/spaces/Pattr/DrumClassification/lilypond-2.24.2/lib/lilypond/2.24.2/ccache/lily/backend-library.go and /dev/null differ
diff --git a/spaces/Raspberry-ai/main/.env/lib/python3.11/site-packages/pip/_internal/utils/subprocess.py b/spaces/Raspberry-ai/main/.env/lib/python3.11/site-packages/pip/_internal/utils/subprocess.py
deleted file mode 100644
index cf5bf6be1f6ad8d2be99e55f80cbbd110a8b3d7a..0000000000000000000000000000000000000000
--- a/spaces/Raspberry-ai/main/.env/lib/python3.11/site-packages/pip/_internal/utils/subprocess.py
+++ /dev/null
@@ -1,260 +0,0 @@
-import logging
-import os
-import shlex
-import subprocess
-from typing import (
- TYPE_CHECKING,
- Any,
- Callable,
- Iterable,
- List,
- Mapping,
- Optional,
- Union,
-)
-
-from pip._vendor.rich.markup import escape
-
-from pip._internal.cli.spinners import SpinnerInterface, open_spinner
-from pip._internal.exceptions import InstallationSubprocessError
-from pip._internal.utils.logging import VERBOSE, subprocess_logger
-from pip._internal.utils.misc import HiddenText
-
-if TYPE_CHECKING:
- # Literal was introduced in Python 3.8.
- #
- # TODO: Remove `if TYPE_CHECKING` when dropping support for Python 3.7.
- from typing import Literal
-
-CommandArgs = List[Union[str, HiddenText]]
-
-
-def make_command(*args: Union[str, HiddenText, CommandArgs]) -> CommandArgs:
- """
- Create a CommandArgs object.
- """
- command_args: CommandArgs = []
- for arg in args:
- # Check for list instead of CommandArgs since CommandArgs is
- # only known during type-checking.
- if isinstance(arg, list):
- command_args.extend(arg)
- else:
- # Otherwise, arg is str or HiddenText.
- command_args.append(arg)
-
- return command_args
-
-
-def format_command_args(args: Union[List[str], CommandArgs]) -> str:
- """
- Format command arguments for display.
- """
- # For HiddenText arguments, display the redacted form by calling str().
- # Also, we don't apply str() to arguments that aren't HiddenText since
- # this can trigger a UnicodeDecodeError in Python 2 if the argument
- # has type unicode and includes a non-ascii character. (The type
- # checker doesn't ensure the annotations are correct in all cases.)
- return " ".join(
- shlex.quote(str(arg)) if isinstance(arg, HiddenText) else shlex.quote(arg)
- for arg in args
- )
-
-
-def reveal_command_args(args: Union[List[str], CommandArgs]) -> List[str]:
- """
- Return the arguments in their raw, unredacted form.
- """
- return [arg.secret if isinstance(arg, HiddenText) else arg for arg in args]
-
-
-def call_subprocess(
- cmd: Union[List[str], CommandArgs],
- show_stdout: bool = False,
- cwd: Optional[str] = None,
- on_returncode: 'Literal["raise", "warn", "ignore"]' = "raise",
- extra_ok_returncodes: Optional[Iterable[int]] = None,
- extra_environ: Optional[Mapping[str, Any]] = None,
- unset_environ: Optional[Iterable[str]] = None,
- spinner: Optional[SpinnerInterface] = None,
- log_failed_cmd: Optional[bool] = True,
- stdout_only: Optional[bool] = False,
- *,
- command_desc: str,
-) -> str:
- """
- Args:
- show_stdout: if true, use INFO to log the subprocess's stderr and
- stdout streams. Otherwise, use DEBUG. Defaults to False.
- extra_ok_returncodes: an iterable of integer return codes that are
- acceptable, in addition to 0. Defaults to None, which means [].
- unset_environ: an iterable of environment variable names to unset
- prior to calling subprocess.Popen().
- log_failed_cmd: if false, failed commands are not logged, only raised.
- stdout_only: if true, return only stdout, else return both. When true,
- logging of both stdout and stderr occurs when the subprocess has
- terminated, else logging occurs as subprocess output is produced.
- """
- if extra_ok_returncodes is None:
- extra_ok_returncodes = []
- if unset_environ is None:
- unset_environ = []
- # Most places in pip use show_stdout=False. What this means is--
- #
- # - We connect the child's output (combined stderr and stdout) to a
- # single pipe, which we read.
- # - We log this output to stderr at DEBUG level as it is received.
- # - If DEBUG logging isn't enabled (e.g. if --verbose logging wasn't
- # requested), then we show a spinner so the user can still see the
- # subprocess is in progress.
- # - If the subprocess exits with an error, we log the output to stderr
- # at ERROR level if it hasn't already been displayed to the console
- # (e.g. if --verbose logging wasn't enabled). This way we don't log
- # the output to the console twice.
- #
- # If show_stdout=True, then the above is still done, but with DEBUG
- # replaced by INFO.
- if show_stdout:
- # Then log the subprocess output at INFO level.
- log_subprocess: Callable[..., None] = subprocess_logger.info
- used_level = logging.INFO
- else:
- # Then log the subprocess output using VERBOSE. This also ensures
- # it will be logged to the log file (aka user_log), if enabled.
- log_subprocess = subprocess_logger.verbose
- used_level = VERBOSE
-
- # Whether the subprocess will be visible in the console.
- showing_subprocess = subprocess_logger.getEffectiveLevel() <= used_level
-
- # Only use the spinner if we're not showing the subprocess output
- # and we have a spinner.
- use_spinner = not showing_subprocess and spinner is not None
-
- log_subprocess("Running command %s", command_desc)
- env = os.environ.copy()
- if extra_environ:
- env.update(extra_environ)
- for name in unset_environ:
- env.pop(name, None)
- try:
- proc = subprocess.Popen(
- # Convert HiddenText objects to the underlying str.
- reveal_command_args(cmd),
- stdin=subprocess.PIPE,
- stdout=subprocess.PIPE,
- stderr=subprocess.STDOUT if not stdout_only else subprocess.PIPE,
- cwd=cwd,
- env=env,
- errors="backslashreplace",
- )
- except Exception as exc:
- if log_failed_cmd:
- subprocess_logger.critical(
- "Error %s while executing command %s",
- exc,
- command_desc,
- )
- raise
- all_output = []
- if not stdout_only:
- assert proc.stdout
- assert proc.stdin
- proc.stdin.close()
- # In this mode, stdout and stderr are in the same pipe.
- while True:
- line: str = proc.stdout.readline()
- if not line:
- break
- line = line.rstrip()
- all_output.append(line + "\n")
-
- # Show the line immediately.
- log_subprocess(line)
- # Update the spinner.
- if use_spinner:
- assert spinner
- spinner.spin()
- try:
- proc.wait()
- finally:
- if proc.stdout:
- proc.stdout.close()
- output = "".join(all_output)
- else:
- # In this mode, stdout and stderr are in different pipes.
- # We must use communicate() which is the only safe way to read both.
- out, err = proc.communicate()
- # log line by line to preserve pip log indenting
- for out_line in out.splitlines():
- log_subprocess(out_line)
- all_output.append(out)
- for err_line in err.splitlines():
- log_subprocess(err_line)
- all_output.append(err)
- output = out
-
- proc_had_error = proc.returncode and proc.returncode not in extra_ok_returncodes
- if use_spinner:
- assert spinner
- if proc_had_error:
- spinner.finish("error")
- else:
- spinner.finish("done")
- if proc_had_error:
- if on_returncode == "raise":
- error = InstallationSubprocessError(
- command_description=command_desc,
- exit_code=proc.returncode,
- output_lines=all_output if not showing_subprocess else None,
- )
- if log_failed_cmd:
- subprocess_logger.error("[present-rich] %s", error)
- subprocess_logger.verbose(
- "[bold magenta]full command[/]: [blue]%s[/]",
- escape(format_command_args(cmd)),
- extra={"markup": True},
- )
- subprocess_logger.verbose(
- "[bold magenta]cwd[/]: %s",
- escape(cwd or "[inherit]"),
- extra={"markup": True},
- )
-
- raise error
- elif on_returncode == "warn":
- subprocess_logger.warning(
- 'Command "%s" had error code %s in %s',
- command_desc,
- proc.returncode,
- cwd,
- )
- elif on_returncode == "ignore":
- pass
- else:
- raise ValueError(f"Invalid value: on_returncode={on_returncode!r}")
- return output
-
-
-def runner_with_spinner_message(message: str) -> Callable[..., None]:
- """Provide a subprocess_runner that shows a spinner message.
-
-    Intended for use with pep517's Pep517HookCaller. Thus, the runner has
- an API that matches what's expected by Pep517HookCaller.subprocess_runner.
- """
-
- def runner(
- cmd: List[str],
- cwd: Optional[str] = None,
- extra_environ: Optional[Mapping[str, Any]] = None,
- ) -> None:
- with open_spinner(message) as spinner:
- call_subprocess(
- cmd,
- command_desc=message,
- cwd=cwd,
- extra_environ=extra_environ,
- spinner=spinner,
- )
-
- return runner
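An illustrative call of the helper above; note this is a pip-internal API whose signature can change between releases, and "git --version" is just a stand-in command:

from pip._internal.utils.subprocess import call_subprocess

output = call_subprocess(
    ["git", "--version"],
    command_desc="git --version",  # keyword-only description used in logs
    show_stdout=False,             # log at VERBOSE rather than INFO
)
print(output.strip())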
diff --git a/spaces/Raspberry-ai/main/.env/lib/python3.11/site-packages/pip/_vendor/pygments/lexers/python.py b/spaces/Raspberry-ai/main/.env/lib/python3.11/site-packages/pip/_vendor/pygments/lexers/python.py
deleted file mode 100644
index c24e3c86ef2a991227fd87fa447eb433c51c1e0e..0000000000000000000000000000000000000000
--- a/spaces/Raspberry-ai/main/.env/lib/python3.11/site-packages/pip/_vendor/pygments/lexers/python.py
+++ /dev/null
@@ -1,1204 +0,0 @@
-"""
- pygments.lexers.python
- ~~~~~~~~~~~~~~~~~~~~~~
-
- Lexers for Python and related languages.
-
- :copyright: Copyright 2006-2022 by the Pygments team, see AUTHORS.
- :license: BSD, see LICENSE for details.
-"""
-
-import re
-import keyword
-
-from pip._vendor.pygments.lexer import Lexer, RegexLexer, include, bygroups, using, \
- default, words, combined, do_insertions, this
-from pip._vendor.pygments.util import get_bool_opt, shebang_matches
-from pip._vendor.pygments.token import Text, Comment, Operator, Keyword, Name, String, \
- Number, Punctuation, Generic, Other, Error
-from pip._vendor.pygments import unistring as uni
-
-__all__ = ['PythonLexer', 'PythonConsoleLexer', 'PythonTracebackLexer',
- 'Python2Lexer', 'Python2TracebackLexer',
- 'CythonLexer', 'DgLexer', 'NumPyLexer']
-
-line_re = re.compile('.*?\n')
-
-
-class PythonLexer(RegexLexer):
- """
- For Python source code (version 3.x).
-
- .. versionadded:: 0.10
-
- .. versionchanged:: 2.5
- This is now the default ``PythonLexer``. It is still available as the
- alias ``Python3Lexer``.
- """
-
- name = 'Python'
- url = 'http://www.python.org'
- aliases = ['python', 'py', 'sage', 'python3', 'py3']
- filenames = [
- '*.py',
- '*.pyw',
- # Jython
- '*.jy',
- # Sage
- '*.sage',
- # SCons
- '*.sc',
- 'SConstruct',
- 'SConscript',
- # Skylark/Starlark (used by Bazel, Buck, and Pants)
- '*.bzl',
- 'BUCK',
- 'BUILD',
- 'BUILD.bazel',
- 'WORKSPACE',
- # Twisted Application infrastructure
- '*.tac',
- ]
- mimetypes = ['text/x-python', 'application/x-python',
- 'text/x-python3', 'application/x-python3']
-
- uni_name = "[%s][%s]*" % (uni.xid_start, uni.xid_continue)
-
- def innerstring_rules(ttype):
- return [
- # the old style '%s' % (...) string formatting (still valid in Py3)
- (r'%(\(\w+\))?[-#0 +]*([0-9]+|[*])?(\.([0-9]+|[*]))?'
- '[hlL]?[E-GXc-giorsaux%]', String.Interpol),
- # the new style '{}'.format(...) string formatting
- (r'\{'
- r'((\w+)((\.\w+)|(\[[^\]]+\]))*)?' # field name
- r'(\![sra])?' # conversion
- r'(\:(.?[<>=\^])?[-+ ]?#?0?(\d+)?,?(\.\d+)?[E-GXb-gnosx%]?)?'
- r'\}', String.Interpol),
-
- # backslashes, quotes and formatting signs must be parsed one at a time
- (r'[^\\\'"%{\n]+', ttype),
- (r'[\'"\\]', ttype),
- # unhandled string formatting sign
- (r'%|(\{{1,2})', ttype)
- # newlines are an error (use "nl" state)
- ]
-
- def fstring_rules(ttype):
- return [
- # Assuming that a '}' is the closing brace after format specifier.
- # Sadly, this means that we won't detect syntax error. But it's
- # more important to parse correct syntax correctly, than to
- # highlight invalid syntax.
- (r'\}', String.Interpol),
- (r'\{', String.Interpol, 'expr-inside-fstring'),
- # backslashes, quotes and formatting signs must be parsed one at a time
- (r'[^\\\'"{}\n]+', ttype),
- (r'[\'"\\]', ttype),
- # newlines are an error (use "nl" state)
- ]
-
- tokens = {
- 'root': [
- (r'\n', Text),
- (r'^(\s*)([rRuUbB]{,2})("""(?:.|\n)*?""")',
- bygroups(Text, String.Affix, String.Doc)),
- (r"^(\s*)([rRuUbB]{,2})('''(?:.|\n)*?''')",
- bygroups(Text, String.Affix, String.Doc)),
- (r'\A#!.+$', Comment.Hashbang),
- (r'#.*$', Comment.Single),
- (r'\\\n', Text),
- (r'\\', Text),
- include('keywords'),
- include('soft-keywords'),
- (r'(def)((?:\s|\\\s)+)', bygroups(Keyword, Text), 'funcname'),
- (r'(class)((?:\s|\\\s)+)', bygroups(Keyword, Text), 'classname'),
- (r'(from)((?:\s|\\\s)+)', bygroups(Keyword.Namespace, Text),
- 'fromimport'),
- (r'(import)((?:\s|\\\s)+)', bygroups(Keyword.Namespace, Text),
- 'import'),
- include('expr'),
- ],
- 'expr': [
- # raw f-strings
- ('(?i)(rf|fr)(""")',
- bygroups(String.Affix, String.Double),
- combined('rfstringescape', 'tdqf')),
- ("(?i)(rf|fr)(''')",
- bygroups(String.Affix, String.Single),
- combined('rfstringescape', 'tsqf')),
- ('(?i)(rf|fr)(")',
- bygroups(String.Affix, String.Double),
- combined('rfstringescape', 'dqf')),
- ("(?i)(rf|fr)(')",
- bygroups(String.Affix, String.Single),
- combined('rfstringescape', 'sqf')),
- # non-raw f-strings
- ('([fF])(""")', bygroups(String.Affix, String.Double),
- combined('fstringescape', 'tdqf')),
- ("([fF])(''')", bygroups(String.Affix, String.Single),
- combined('fstringescape', 'tsqf')),
- ('([fF])(")', bygroups(String.Affix, String.Double),
- combined('fstringescape', 'dqf')),
- ("([fF])(')", bygroups(String.Affix, String.Single),
- combined('fstringescape', 'sqf')),
- # raw bytes and strings
- ('(?i)(rb|br|r)(""")',
- bygroups(String.Affix, String.Double), 'tdqs'),
- ("(?i)(rb|br|r)(''')",
- bygroups(String.Affix, String.Single), 'tsqs'),
- ('(?i)(rb|br|r)(")',
- bygroups(String.Affix, String.Double), 'dqs'),
- ("(?i)(rb|br|r)(')",
- bygroups(String.Affix, String.Single), 'sqs'),
- # non-raw strings
- ('([uU]?)(""")', bygroups(String.Affix, String.Double),
- combined('stringescape', 'tdqs')),
- ("([uU]?)(''')", bygroups(String.Affix, String.Single),
- combined('stringescape', 'tsqs')),
- ('([uU]?)(")', bygroups(String.Affix, String.Double),
- combined('stringescape', 'dqs')),
- ("([uU]?)(')", bygroups(String.Affix, String.Single),
- combined('stringescape', 'sqs')),
- # non-raw bytes
- ('([bB])(""")', bygroups(String.Affix, String.Double),
- combined('bytesescape', 'tdqs')),
- ("([bB])(''')", bygroups(String.Affix, String.Single),
- combined('bytesescape', 'tsqs')),
- ('([bB])(")', bygroups(String.Affix, String.Double),
- combined('bytesescape', 'dqs')),
- ("([bB])(')", bygroups(String.Affix, String.Single),
- combined('bytesescape', 'sqs')),
-
- (r'[^\S\n]+', Text),
- include('numbers'),
- (r'!=|==|<<|>>|:=|[-~+/*%=<>&^|.]', Operator),
- (r'[]{}:(),;[]', Punctuation),
- (r'(in|is|and|or|not)\b', Operator.Word),
- include('expr-keywords'),
- include('builtins'),
- include('magicfuncs'),
- include('magicvars'),
- include('name'),
- ],
- 'expr-inside-fstring': [
- (r'[{([]', Punctuation, 'expr-inside-fstring-inner'),
- # without format specifier
- (r'(=\s*)?' # debug (https://bugs.python.org/issue36817)
- r'(\![sraf])?' # conversion
- r'\}', String.Interpol, '#pop'),
- # with format specifier
- # we'll catch the remaining '}' in the outer scope
- (r'(=\s*)?' # debug (https://bugs.python.org/issue36817)
- r'(\![sraf])?' # conversion
- r':', String.Interpol, '#pop'),
- (r'\s+', Text), # allow new lines
- include('expr'),
- ],
- 'expr-inside-fstring-inner': [
- (r'[{([]', Punctuation, 'expr-inside-fstring-inner'),
- (r'[])}]', Punctuation, '#pop'),
- (r'\s+', Text), # allow new lines
- include('expr'),
- ],
- 'expr-keywords': [
- # Based on https://docs.python.org/3/reference/expressions.html
- (words((
- 'async for', 'await', 'else', 'for', 'if', 'lambda',
- 'yield', 'yield from'), suffix=r'\b'),
- Keyword),
- (words(('True', 'False', 'None'), suffix=r'\b'), Keyword.Constant),
- ],
- 'keywords': [
- (words((
- 'assert', 'async', 'await', 'break', 'continue', 'del', 'elif',
- 'else', 'except', 'finally', 'for', 'global', 'if', 'lambda',
- 'pass', 'raise', 'nonlocal', 'return', 'try', 'while', 'yield',
- 'yield from', 'as', 'with'), suffix=r'\b'),
- Keyword),
- (words(('True', 'False', 'None'), suffix=r'\b'), Keyword.Constant),
- ],
- 'soft-keywords': [
- # `match`, `case` and `_` soft keywords
- (r'(^[ \t]*)' # at beginning of line + possible indentation
- r'(match|case)\b' # a possible keyword
- r'(?![ \t]*(?:' # not followed by...
- r'[:,;=^&|@~)\]}]|(?:' + # characters and keywords that mean this isn't
- r'|'.join(keyword.kwlist) + r')\b))', # pattern matching
- bygroups(Text, Keyword), 'soft-keywords-inner'),
- ],
- 'soft-keywords-inner': [
- # optional `_` keyword
- (r'(\s+)([^\n_]*)(_\b)', bygroups(Text, using(this), Keyword)),
- default('#pop')
- ],
- 'builtins': [
- (words((
- '__import__', 'abs', 'all', 'any', 'bin', 'bool', 'bytearray',
- 'breakpoint', 'bytes', 'chr', 'classmethod', 'compile', 'complex',
- 'delattr', 'dict', 'dir', 'divmod', 'enumerate', 'eval', 'filter',
- 'float', 'format', 'frozenset', 'getattr', 'globals', 'hasattr',
- 'hash', 'hex', 'id', 'input', 'int', 'isinstance', 'issubclass',
- 'iter', 'len', 'list', 'locals', 'map', 'max', 'memoryview',
- 'min', 'next', 'object', 'oct', 'open', 'ord', 'pow', 'print',
- 'property', 'range', 'repr', 'reversed', 'round', 'set', 'setattr',
- 'slice', 'sorted', 'staticmethod', 'str', 'sum', 'super', 'tuple',
-                'type', 'vars', 'zip'), prefix=r'(?<!\.)', suffix=r'\b'),
-             Name.Builtin),
-            (r'!=|==|<<|>>|[-~+/*%=<>&^|.]', Operator),
- include('keywords'),
- (r'(def)((?:\s|\\\s)+)', bygroups(Keyword, Text), 'funcname'),
- (r'(class)((?:\s|\\\s)+)', bygroups(Keyword, Text), 'classname'),
- (r'(from)((?:\s|\\\s)+)', bygroups(Keyword.Namespace, Text),
- 'fromimport'),
- (r'(import)((?:\s|\\\s)+)', bygroups(Keyword.Namespace, Text),
- 'import'),
- include('builtins'),
- include('magicfuncs'),
- include('magicvars'),
- include('backtick'),
- ('([rR]|[uUbB][rR]|[rR][uUbB])(""")',
- bygroups(String.Affix, String.Double), 'tdqs'),
- ("([rR]|[uUbB][rR]|[rR][uUbB])(''')",
- bygroups(String.Affix, String.Single), 'tsqs'),
- ('([rR]|[uUbB][rR]|[rR][uUbB])(")',
- bygroups(String.Affix, String.Double), 'dqs'),
- ("([rR]|[uUbB][rR]|[rR][uUbB])(')",
- bygroups(String.Affix, String.Single), 'sqs'),
- ('([uUbB]?)(""")', bygroups(String.Affix, String.Double),
- combined('stringescape', 'tdqs')),
- ("([uUbB]?)(''')", bygroups(String.Affix, String.Single),
- combined('stringescape', 'tsqs')),
- ('([uUbB]?)(")', bygroups(String.Affix, String.Double),
- combined('stringescape', 'dqs')),
- ("([uUbB]?)(')", bygroups(String.Affix, String.Single),
- combined('stringescape', 'sqs')),
- include('name'),
- include('numbers'),
- ],
- 'keywords': [
- (words((
- 'assert', 'break', 'continue', 'del', 'elif', 'else', 'except',
- 'exec', 'finally', 'for', 'global', 'if', 'lambda', 'pass',
- 'print', 'raise', 'return', 'try', 'while', 'yield',
- 'yield from', 'as', 'with'), suffix=r'\b'),
- Keyword),
- ],
- 'builtins': [
- (words((
- '__import__', 'abs', 'all', 'any', 'apply', 'basestring', 'bin',
- 'bool', 'buffer', 'bytearray', 'bytes', 'callable', 'chr', 'classmethod',
- 'cmp', 'coerce', 'compile', 'complex', 'delattr', 'dict', 'dir', 'divmod',
- 'enumerate', 'eval', 'execfile', 'exit', 'file', 'filter', 'float',
- 'frozenset', 'getattr', 'globals', 'hasattr', 'hash', 'hex', 'id',
- 'input', 'int', 'intern', 'isinstance', 'issubclass', 'iter', 'len',
- 'list', 'locals', 'long', 'map', 'max', 'min', 'next', 'object',
- 'oct', 'open', 'ord', 'pow', 'property', 'range', 'raw_input', 'reduce',
- 'reload', 'repr', 'reversed', 'round', 'set', 'setattr', 'slice',
- 'sorted', 'staticmethod', 'str', 'sum', 'super', 'tuple', 'type',
- 'unichr', 'unicode', 'vars', 'xrange', 'zip'),
- prefix=r'(?>> a = 'foo'
- >>> print a
- foo
- >>> 1 / 0
- Traceback (most recent call last):
-          File "<stdin>", line 1, in <module>
- ZeroDivisionError: integer division or modulo by zero
-
- Additional options:
-
- `python3`
- Use Python 3 lexer for code. Default is ``True``.
-
- .. versionadded:: 1.0
- .. versionchanged:: 2.5
- Now defaults to ``True``.
- """
- name = 'Python console session'
- aliases = ['pycon']
- mimetypes = ['text/x-python-doctest']
-
- def __init__(self, **options):
- self.python3 = get_bool_opt(options, 'python3', True)
- Lexer.__init__(self, **options)
-
- def get_tokens_unprocessed(self, text):
- if self.python3:
- pylexer = PythonLexer(**self.options)
- tblexer = PythonTracebackLexer(**self.options)
- else:
- pylexer = Python2Lexer(**self.options)
- tblexer = Python2TracebackLexer(**self.options)
-
- curcode = ''
- insertions = []
- curtb = ''
- tbindex = 0
- tb = 0
- for match in line_re.finditer(text):
- line = match.group()
- if line.startswith('>>> ') or line.startswith('... '):
- tb = 0
- insertions.append((len(curcode),
- [(0, Generic.Prompt, line[:4])]))
- curcode += line[4:]
- elif line.rstrip() == '...' and not tb:
- # only a new >>> prompt can end an exception block
- # otherwise an ellipsis in place of the traceback frames
- # will be mishandled
- insertions.append((len(curcode),
- [(0, Generic.Prompt, '...')]))
- curcode += line[3:]
- else:
- if curcode:
- yield from do_insertions(
- insertions, pylexer.get_tokens_unprocessed(curcode))
- curcode = ''
- insertions = []
- if (line.startswith('Traceback (most recent call last):') or
- re.match(' File "[^"]+", line \\d+\\n$', line)):
- tb = 1
- curtb = line
- tbindex = match.start()
- elif line == 'KeyboardInterrupt\n':
- yield match.start(), Name.Class, line
- elif tb:
- curtb += line
- if not (line.startswith(' ') or line.strip() == '...'):
- tb = 0
- for i, t, v in tblexer.get_tokens_unprocessed(curtb):
- yield tbindex+i, t, v
- curtb = ''
- else:
- yield match.start(), Generic.Output, line
- if curcode:
- yield from do_insertions(insertions,
- pylexer.get_tokens_unprocessed(curcode))
- if curtb:
- for i, t, v in tblexer.get_tokens_unprocessed(curtb):
- yield tbindex+i, t, v
-
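For orientation, here is a minimal sketch of how this console-session lexer is typically driven through Pygments' public entry points (`highlight`, `TerminalFormatter`, and the `pycon` lexer exported as `pygments.lexers.PythonConsoleLexer`); the session text is illustrative:

```python
from pygments import highlight
from pygments.formatters import TerminalFormatter
from pygments.lexers import PythonConsoleLexer

# Prompts become Generic.Prompt; the code after them is delegated to
# PythonLexer, and the traceback block to PythonTracebackLexer.
session = '''>>> 1 / 0
Traceback (most recent call last):
  File "<stdin>", line 1, in <module>
ZeroDivisionError: division by zero
'''
print(highlight(session, PythonConsoleLexer(), TerminalFormatter()))
```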
-
-class PythonTracebackLexer(RegexLexer):
- """
- For Python 3.x tracebacks, with support for chained exceptions.
-
- .. versionadded:: 1.0
-
- .. versionchanged:: 2.5
- This is now the default ``PythonTracebackLexer``. It is still available
- as the alias ``Python3TracebackLexer``.
- """
-
- name = 'Python Traceback'
- aliases = ['pytb', 'py3tb']
- filenames = ['*.pytb', '*.py3tb']
- mimetypes = ['text/x-python-traceback', 'text/x-python3-traceback']
-
- tokens = {
- 'root': [
- (r'\n', Text),
- (r'^Traceback \(most recent call last\):\n', Generic.Traceback, 'intb'),
- (r'^During handling of the above exception, another '
- r'exception occurred:\n\n', Generic.Traceback),
- (r'^The above exception was the direct cause of the '
- r'following exception:\n\n', Generic.Traceback),
- (r'^(?= File "[^"]+", line \d+)', Generic.Traceback, 'intb'),
- (r'^.*\n', Other),
- ],
- 'intb': [
- (r'^( File )("[^"]+")(, line )(\d+)(, in )(.+)(\n)',
- bygroups(Text, Name.Builtin, Text, Number, Text, Name, Text)),
- (r'^( File )("[^"]+")(, line )(\d+)(\n)',
- bygroups(Text, Name.Builtin, Text, Number, Text)),
- (r'^( )(.+)(\n)',
- bygroups(Text, using(PythonLexer), Text), 'markers'),
- (r'^([ \t]*)(\.\.\.)(\n)',
- bygroups(Text, Comment, Text)), # for doctests...
- (r'^([^:]+)(: )(.+)(\n)',
- bygroups(Generic.Error, Text, Name, Text), '#pop'),
- (r'^([a-zA-Z_][\w.]*)(:?\n)',
- bygroups(Generic.Error, Text), '#pop')
- ],
- 'markers': [
-            # Either `PEP 657 <https://peps.python.org/pep-0657/>`
- # error locations in Python 3.11+, or single-caret markers
- # for syntax errors before that.
- (r'^( {4,})([~^]+)(\n)',
- bygroups(Text, Punctuation.Marker, Text),
- '#pop'),
- default('#pop'),
- ],
- }
-
-
-Python3TracebackLexer = PythonTracebackLexer
-
-
-class Python2TracebackLexer(RegexLexer):
- """
- For Python tracebacks.
-
- .. versionadded:: 0.7
-
- .. versionchanged:: 2.5
- This class has been renamed from ``PythonTracebackLexer``.
- ``PythonTracebackLexer`` now refers to the Python 3 variant.
- """
-
- name = 'Python 2.x Traceback'
- aliases = ['py2tb']
- filenames = ['*.py2tb']
- mimetypes = ['text/x-python2-traceback']
-
- tokens = {
- 'root': [
- # Cover both (most recent call last) and (innermost last)
- # The optional ^C allows us to catch keyboard interrupt signals.
- (r'^(\^C)?(Traceback.*\n)',
- bygroups(Text, Generic.Traceback), 'intb'),
- # SyntaxError starts with this.
- (r'^(?= File "[^"]+", line \d+)', Generic.Traceback, 'intb'),
- (r'^.*\n', Other),
- ],
- 'intb': [
- (r'^( File )("[^"]+")(, line )(\d+)(, in )(.+)(\n)',
- bygroups(Text, Name.Builtin, Text, Number, Text, Name, Text)),
- (r'^( File )("[^"]+")(, line )(\d+)(\n)',
- bygroups(Text, Name.Builtin, Text, Number, Text)),
- (r'^( )(.+)(\n)',
- bygroups(Text, using(Python2Lexer), Text), 'marker'),
- (r'^([ \t]*)(\.\.\.)(\n)',
- bygroups(Text, Comment, Text)), # for doctests...
- (r'^([^:]+)(: )(.+)(\n)',
- bygroups(Generic.Error, Text, Name, Text), '#pop'),
- (r'^([a-zA-Z_]\w*)(:?\n)',
- bygroups(Generic.Error, Text), '#pop')
- ],
- 'marker': [
- # For syntax errors.
- (r'( {4,})(\^)', bygroups(Text, Punctuation.Marker), '#pop'),
- default('#pop'),
- ],
- }
-
-
-class CythonLexer(RegexLexer):
- """
- For Pyrex and Cython source code.
-
- .. versionadded:: 1.1
- """
-
- name = 'Cython'
- url = 'http://cython.org'
- aliases = ['cython', 'pyx', 'pyrex']
- filenames = ['*.pyx', '*.pxd', '*.pxi']
- mimetypes = ['text/x-cython', 'application/x-cython']
-
- tokens = {
- 'root': [
- (r'\n', Text),
- (r'^(\s*)("""(?:.|\n)*?""")', bygroups(Text, String.Doc)),
- (r"^(\s*)('''(?:.|\n)*?''')", bygroups(Text, String.Doc)),
- (r'[^\S\n]+', Text),
- (r'#.*$', Comment),
- (r'[]{}:(),;[]', Punctuation),
- (r'\\\n', Text),
- (r'\\', Text),
- (r'(in|is|and|or|not)\b', Operator.Word),
- (r'(<)([a-zA-Z0-9.?]+)(>)',
- bygroups(Punctuation, Keyword.Type, Punctuation)),
- (r'!=|==|<<|>>|[-~+/*%=<>&^|.?]', Operator),
- (r'(from)(\d+)(<=)(\s+)(<)(\d+)(:)',
- bygroups(Keyword, Number.Integer, Operator, Name, Operator,
- Name, Punctuation)),
- include('keywords'),
- (r'(def|property)(\s+)', bygroups(Keyword, Text), 'funcname'),
- (r'(cp?def)(\s+)', bygroups(Keyword, Text), 'cdef'),
- # (should actually start a block with only cdefs)
- (r'(cdef)(:)', bygroups(Keyword, Punctuation)),
- (r'(class|struct)(\s+)', bygroups(Keyword, Text), 'classname'),
- (r'(from)(\s+)', bygroups(Keyword, Text), 'fromimport'),
- (r'(c?import)(\s+)', bygroups(Keyword, Text), 'import'),
- include('builtins'),
- include('backtick'),
- ('(?:[rR]|[uU][rR]|[rR][uU])"""', String, 'tdqs'),
- ("(?:[rR]|[uU][rR]|[rR][uU])'''", String, 'tsqs'),
- ('(?:[rR]|[uU][rR]|[rR][uU])"', String, 'dqs'),
- ("(?:[rR]|[uU][rR]|[rR][uU])'", String, 'sqs'),
- ('[uU]?"""', String, combined('stringescape', 'tdqs')),
- ("[uU]?'''", String, combined('stringescape', 'tsqs')),
- ('[uU]?"', String, combined('stringescape', 'dqs')),
- ("[uU]?'", String, combined('stringescape', 'sqs')),
- include('name'),
- include('numbers'),
- ],
- 'keywords': [
- (words((
- 'assert', 'async', 'await', 'break', 'by', 'continue', 'ctypedef', 'del', 'elif',
- 'else', 'except', 'except?', 'exec', 'finally', 'for', 'fused', 'gil',
- 'global', 'if', 'include', 'lambda', 'nogil', 'pass', 'print',
- 'raise', 'return', 'try', 'while', 'yield', 'as', 'with'), suffix=r'\b'),
- Keyword),
- (r'(DEF|IF|ELIF|ELSE)\b', Comment.Preproc),
- ],
- 'builtins': [
- (words((
- '__import__', 'abs', 'all', 'any', 'apply', 'basestring', 'bin', 'bint',
- 'bool', 'buffer', 'bytearray', 'bytes', 'callable', 'chr',
- 'classmethod', 'cmp', 'coerce', 'compile', 'complex', 'delattr',
- 'dict', 'dir', 'divmod', 'enumerate', 'eval', 'execfile', 'exit',
- 'file', 'filter', 'float', 'frozenset', 'getattr', 'globals',
- 'hasattr', 'hash', 'hex', 'id', 'input', 'int', 'intern', 'isinstance',
- 'issubclass', 'iter', 'len', 'list', 'locals', 'long', 'map', 'max',
- 'min', 'next', 'object', 'oct', 'open', 'ord', 'pow', 'property', 'Py_ssize_t',
- 'range', 'raw_input', 'reduce', 'reload', 'repr', 'reversed',
- 'round', 'set', 'setattr', 'slice', 'sorted', 'staticmethod',
- 'str', 'sum', 'super', 'tuple', 'type', 'unichr', 'unicode', 'unsigned',
-                'vars', 'xrange', 'zip'), prefix=r'(?<!\.)', suffix=r'\b'),
-             Name.Builtin),
-
-
-def after_nothing(retry_state: "RetryCallState") -> None:
- """After call strategy that does nothing."""
-
-
-def after_log(
- logger: "logging.Logger",
- log_level: int,
- sec_format: str = "%0.3f",
-) -> typing.Callable[["RetryCallState"], None]:
- """After call strategy that logs to some logger the finished attempt."""
-
- def log_it(retry_state: "RetryCallState") -> None:
- logger.log(
- log_level,
- f"Finished call to '{_utils.get_callback_name(retry_state.fn)}' "
- f"after {sec_format % retry_state.seconds_since_start}(s), "
- f"this was the {_utils.to_ordinal(retry_state.attempt_number)} time calling it.",
- )
-
- return log_it
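As a usage sketch, `after_log` plugs into the `after=` hook of tenacity's `retry` decorator (this assumes the standalone `tenacity` distribution; `retry` and `stop_after_attempt` are not part of the file above):

```python
import logging

from tenacity import after_log, retry, stop_after_attempt

logging.basicConfig(level=logging.INFO)
logger = logging.getLogger(__name__)

@retry(stop=stop_after_attempt(3), after=after_log(logger, logging.INFO))
def flaky() -> None:
    """Always fails; after_log reports each finished attempt."""
    raise RuntimeError("still failing")

# Calling flaky() would log three attempts, then raise tenacity.RetryError.
```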
diff --git a/spaces/Raspberry-ai/main/.env/lib/python3.11/site-packages/pkg_resources/_vendor/zipp.py b/spaces/Raspberry-ai/main/.env/lib/python3.11/site-packages/pkg_resources/_vendor/zipp.py
deleted file mode 100644
index 26b723c1fd3e25740e0268b8c9b50905c58c3d4a..0000000000000000000000000000000000000000
--- a/spaces/Raspberry-ai/main/.env/lib/python3.11/site-packages/pkg_resources/_vendor/zipp.py
+++ /dev/null
@@ -1,329 +0,0 @@
-import io
-import posixpath
-import zipfile
-import itertools
-import contextlib
-import sys
-import pathlib
-
-if sys.version_info < (3, 7):
- from collections import OrderedDict
-else:
- OrderedDict = dict
-
-
-__all__ = ['Path']
-
-
-def _parents(path):
- """
- Given a path with elements separated by
- posixpath.sep, generate all parents of that path.
-
- >>> list(_parents('b/d'))
- ['b']
- >>> list(_parents('/b/d/'))
- ['/b']
- >>> list(_parents('b/d/f/'))
- ['b/d', 'b']
- >>> list(_parents('b'))
- []
- >>> list(_parents(''))
- []
- """
- return itertools.islice(_ancestry(path), 1, None)
-
-
-def _ancestry(path):
- """
- Given a path with elements separated by
- posixpath.sep, generate all elements of that path
-
- >>> list(_ancestry('b/d'))
- ['b/d', 'b']
- >>> list(_ancestry('/b/d/'))
- ['/b/d', '/b']
- >>> list(_ancestry('b/d/f/'))
- ['b/d/f', 'b/d', 'b']
- >>> list(_ancestry('b'))
- ['b']
- >>> list(_ancestry(''))
- []
- """
- path = path.rstrip(posixpath.sep)
- while path and path != posixpath.sep:
- yield path
- path, tail = posixpath.split(path)
-
-
-_dedupe = OrderedDict.fromkeys
-"""Deduplicate an iterable in original order"""
-
-
-def _difference(minuend, subtrahend):
- """
- Return items in minuend not in subtrahend, retaining order
- with O(1) lookup.
- """
- return itertools.filterfalse(set(subtrahend).__contains__, minuend)
-
-
-class CompleteDirs(zipfile.ZipFile):
- """
- A ZipFile subclass that ensures that implied directories
- are always included in the namelist.
- """
-
- @staticmethod
- def _implied_dirs(names):
- parents = itertools.chain.from_iterable(map(_parents, names))
- as_dirs = (p + posixpath.sep for p in parents)
- return _dedupe(_difference(as_dirs, names))
-
- def namelist(self):
- names = super(CompleteDirs, self).namelist()
- return names + list(self._implied_dirs(names))
-
- def _name_set(self):
- return set(self.namelist())
-
- def resolve_dir(self, name):
- """
- If the name represents a directory, return that name
- as a directory (with the trailing slash).
- """
- names = self._name_set()
- dirname = name + '/'
- dir_match = name not in names and dirname in names
- return dirname if dir_match else name
-
- @classmethod
- def make(cls, source):
- """
- Given a source (filename or zipfile), return an
- appropriate CompleteDirs subclass.
- """
- if isinstance(source, CompleteDirs):
- return source
-
- if not isinstance(source, zipfile.ZipFile):
- return cls(_pathlib_compat(source))
-
- # Only allow for FastLookup when supplied zipfile is read-only
- if 'r' not in source.mode:
- cls = CompleteDirs
-
- source.__class__ = cls
- return source
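A small sketch of what `CompleteDirs` adds: directories that exist only implicitly in an archive appear in `namelist()`. The in-memory archive here is purely illustrative:

```python
import io
import zipfile

buf = io.BytesIO()
with zipfile.ZipFile(buf, 'w') as zf:
    zf.writestr('b/d/e.txt', 'x')

archive = CompleteDirs.make(zipfile.ZipFile(buf))  # mutates __class__ in place
print(archive.namelist())  # ['b/d/e.txt', 'b/d/', 'b/']
```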
-
-
-class FastLookup(CompleteDirs):
- """
- ZipFile subclass to ensure implicit
- dirs exist and are resolved rapidly.
- """
-
- def namelist(self):
- with contextlib.suppress(AttributeError):
- return self.__names
- self.__names = super(FastLookup, self).namelist()
- return self.__names
-
- def _name_set(self):
- with contextlib.suppress(AttributeError):
- return self.__lookup
- self.__lookup = super(FastLookup, self)._name_set()
- return self.__lookup
-
-
-def _pathlib_compat(path):
- """
- For path-like objects, convert to a filename for compatibility
- on Python 3.6.1 and earlier.
- """
- try:
- return path.__fspath__()
- except AttributeError:
- return str(path)
-
-
-class Path:
- """
- A pathlib-compatible interface for zip files.
-
- Consider a zip file with this structure::
-
- .
- ├── a.txt
- └── b
- ├── c.txt
- └── d
- └── e.txt
-
- >>> data = io.BytesIO()
- >>> zf = zipfile.ZipFile(data, 'w')
- >>> zf.writestr('a.txt', 'content of a')
- >>> zf.writestr('b/c.txt', 'content of c')
- >>> zf.writestr('b/d/e.txt', 'content of e')
- >>> zf.filename = 'mem/abcde.zip'
-
- Path accepts the zipfile object itself or a filename
-
- >>> root = Path(zf)
-
- From there, several path operations are available.
-
- Directory iteration (including the zip file itself):
-
- >>> a, b = root.iterdir()
- >>> a
- Path('mem/abcde.zip', 'a.txt')
- >>> b
- Path('mem/abcde.zip', 'b/')
-
- name property:
-
- >>> b.name
- 'b'
-
- join with divide operator:
-
- >>> c = b / 'c.txt'
- >>> c
- Path('mem/abcde.zip', 'b/c.txt')
- >>> c.name
- 'c.txt'
-
- Read text:
-
- >>> c.read_text()
- 'content of c'
-
- existence:
-
- >>> c.exists()
- True
- >>> (b / 'missing.txt').exists()
- False
-
- Coercion to string:
-
- >>> import os
- >>> str(c).replace(os.sep, posixpath.sep)
- 'mem/abcde.zip/b/c.txt'
-
- At the root, ``name``, ``filename``, and ``parent``
- resolve to the zipfile. Note these attributes are not
- valid and will raise a ``ValueError`` if the zipfile
- has no filename.
-
- >>> root.name
- 'abcde.zip'
- >>> str(root.filename).replace(os.sep, posixpath.sep)
- 'mem/abcde.zip'
- >>> str(root.parent)
- 'mem'
- """
-
- __repr = "{self.__class__.__name__}({self.root.filename!r}, {self.at!r})"
-
- def __init__(self, root, at=""):
- """
- Construct a Path from a ZipFile or filename.
-
- Note: When the source is an existing ZipFile object,
- its type (__class__) will be mutated to a
- specialized type. If the caller wishes to retain the
- original type, the caller should either create a
- separate ZipFile object or pass a filename.
- """
- self.root = FastLookup.make(root)
- self.at = at
-
- def open(self, mode='r', *args, pwd=None, **kwargs):
- """
- Open this entry as text or binary following the semantics
- of ``pathlib.Path.open()`` by passing arguments through
- to io.TextIOWrapper().
- """
- if self.is_dir():
- raise IsADirectoryError(self)
- zip_mode = mode[0]
- if not self.exists() and zip_mode == 'r':
- raise FileNotFoundError(self)
- stream = self.root.open(self.at, zip_mode, pwd=pwd)
- if 'b' in mode:
- if args or kwargs:
- raise ValueError("encoding args invalid for binary operation")
- return stream
- return io.TextIOWrapper(stream, *args, **kwargs)
-
- @property
- def name(self):
- return pathlib.Path(self.at).name or self.filename.name
-
- @property
- def suffix(self):
- return pathlib.Path(self.at).suffix or self.filename.suffix
-
- @property
- def suffixes(self):
- return pathlib.Path(self.at).suffixes or self.filename.suffixes
-
- @property
- def stem(self):
- return pathlib.Path(self.at).stem or self.filename.stem
-
- @property
- def filename(self):
- return pathlib.Path(self.root.filename).joinpath(self.at)
-
- def read_text(self, *args, **kwargs):
- with self.open('r', *args, **kwargs) as strm:
- return strm.read()
-
- def read_bytes(self):
- with self.open('rb') as strm:
- return strm.read()
-
- def _is_child(self, path):
- return posixpath.dirname(path.at.rstrip("/")) == self.at.rstrip("/")
-
- def _next(self, at):
- return self.__class__(self.root, at)
-
- def is_dir(self):
- return not self.at or self.at.endswith("/")
-
- def is_file(self):
- return self.exists() and not self.is_dir()
-
- def exists(self):
- return self.at in self.root._name_set()
-
- def iterdir(self):
- if not self.is_dir():
- raise ValueError("Can't listdir a file")
- subs = map(self._next, self.root.namelist())
- return filter(self._is_child, subs)
-
- def __str__(self):
- return posixpath.join(self.root.filename, self.at)
-
- def __repr__(self):
- return self.__repr.format(self=self)
-
- def joinpath(self, *other):
- next = posixpath.join(self.at, *map(_pathlib_compat, other))
- return self._next(self.root.resolve_dir(next))
-
- __truediv__ = joinpath
-
- @property
- def parent(self):
- if not self.at:
- return self.filename.parent
- parent_at = posixpath.dirname(self.at.rstrip('/'))
- if parent_at:
- parent_at += '/'
- return self._next(parent_at)
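Tying the pieces together, a short usage sketch of this `Path` wrapper over an in-memory archive, mirroring the class docstring above:

```python
import io
import zipfile

data = io.BytesIO()
with zipfile.ZipFile(data, 'w') as zf:
    zf.writestr('b/c.txt', 'content of c')

root = Path(zipfile.ZipFile(data))
entry = root / 'b' / 'c.txt'       # joinpath resolves the implied 'b/' dir
print(entry.is_file())             # True
print(entry.read_text())           # 'content of c'
```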
diff --git a/spaces/Robert001/UniControl-Demo/annotator/uniformer/mmcv/cnn/resnet.py b/spaces/Robert001/UniControl-Demo/annotator/uniformer/mmcv/cnn/resnet.py
deleted file mode 100644
index 1cb3ac057ee2d52c46fc94685b5d4e698aad8d5f..0000000000000000000000000000000000000000
--- a/spaces/Robert001/UniControl-Demo/annotator/uniformer/mmcv/cnn/resnet.py
+++ /dev/null
@@ -1,316 +0,0 @@
-# Copyright (c) OpenMMLab. All rights reserved.
-import logging
-
-import torch.nn as nn
-import torch.utils.checkpoint as cp
-
-from .utils import constant_init, kaiming_init
-
-
-def conv3x3(in_planes, out_planes, stride=1, dilation=1):
- """3x3 convolution with padding."""
- return nn.Conv2d(
- in_planes,
- out_planes,
- kernel_size=3,
- stride=stride,
- padding=dilation,
- dilation=dilation,
- bias=False)
-
-
-class BasicBlock(nn.Module):
- expansion = 1
-
- def __init__(self,
- inplanes,
- planes,
- stride=1,
- dilation=1,
- downsample=None,
- style='pytorch',
- with_cp=False):
- super(BasicBlock, self).__init__()
- assert style in ['pytorch', 'caffe']
- self.conv1 = conv3x3(inplanes, planes, stride, dilation)
- self.bn1 = nn.BatchNorm2d(planes)
- self.relu = nn.ReLU(inplace=True)
- self.conv2 = conv3x3(planes, planes)
- self.bn2 = nn.BatchNorm2d(planes)
- self.downsample = downsample
- self.stride = stride
- self.dilation = dilation
- assert not with_cp
-
- def forward(self, x):
- residual = x
-
- out = self.conv1(x)
- out = self.bn1(out)
- out = self.relu(out)
-
- out = self.conv2(out)
- out = self.bn2(out)
-
- if self.downsample is not None:
- residual = self.downsample(x)
-
- out += residual
- out = self.relu(out)
-
- return out
-
-
-class Bottleneck(nn.Module):
- expansion = 4
-
- def __init__(self,
- inplanes,
- planes,
- stride=1,
- dilation=1,
- downsample=None,
- style='pytorch',
- with_cp=False):
- """Bottleneck block.
-
-        If style is "pytorch", the stride-two layer is the 3x3 conv layer;
-        if it is "caffe", the stride-two layer is the first 1x1 conv layer.
- """
- super(Bottleneck, self).__init__()
- assert style in ['pytorch', 'caffe']
- if style == 'pytorch':
- conv1_stride = 1
- conv2_stride = stride
- else:
- conv1_stride = stride
- conv2_stride = 1
- self.conv1 = nn.Conv2d(
- inplanes, planes, kernel_size=1, stride=conv1_stride, bias=False)
- self.conv2 = nn.Conv2d(
- planes,
- planes,
- kernel_size=3,
- stride=conv2_stride,
- padding=dilation,
- dilation=dilation,
- bias=False)
-
- self.bn1 = nn.BatchNorm2d(planes)
- self.bn2 = nn.BatchNorm2d(planes)
- self.conv3 = nn.Conv2d(
- planes, planes * self.expansion, kernel_size=1, bias=False)
- self.bn3 = nn.BatchNorm2d(planes * self.expansion)
- self.relu = nn.ReLU(inplace=True)
- self.downsample = downsample
- self.stride = stride
- self.dilation = dilation
- self.with_cp = with_cp
-
- def forward(self, x):
-
- def _inner_forward(x):
- residual = x
-
- out = self.conv1(x)
- out = self.bn1(out)
- out = self.relu(out)
-
- out = self.conv2(out)
- out = self.bn2(out)
- out = self.relu(out)
-
- out = self.conv3(out)
- out = self.bn3(out)
-
- if self.downsample is not None:
- residual = self.downsample(x)
-
- out += residual
-
- return out
-
- if self.with_cp and x.requires_grad:
- out = cp.checkpoint(_inner_forward, x)
- else:
- out = _inner_forward(x)
-
- out = self.relu(out)
-
- return out
-
-
-def make_res_layer(block,
- inplanes,
- planes,
- blocks,
- stride=1,
- dilation=1,
- style='pytorch',
- with_cp=False):
- downsample = None
- if stride != 1 or inplanes != planes * block.expansion:
- downsample = nn.Sequential(
- nn.Conv2d(
- inplanes,
- planes * block.expansion,
- kernel_size=1,
- stride=stride,
- bias=False),
- nn.BatchNorm2d(planes * block.expansion),
- )
-
- layers = []
- layers.append(
- block(
- inplanes,
- planes,
- stride,
- dilation,
- downsample,
- style=style,
- with_cp=with_cp))
- inplanes = planes * block.expansion
- for _ in range(1, blocks):
- layers.append(
- block(inplanes, planes, 1, dilation, style=style, with_cp=with_cp))
-
- return nn.Sequential(*layers)
-
-
-class ResNet(nn.Module):
- """ResNet backbone.
-
- Args:
- depth (int): Depth of resnet, from {18, 34, 50, 101, 152}.
- num_stages (int): Resnet stages, normally 4.
- strides (Sequence[int]): Strides of the first block of each stage.
- dilations (Sequence[int]): Dilation of each stage.
- out_indices (Sequence[int]): Output from which stages.
- style (str): `pytorch` or `caffe`. If set to "pytorch", the stride-two
- layer is the 3x3 conv layer, otherwise the stride-two layer is
- the first 1x1 conv layer.
- frozen_stages (int): Stages to be frozen (all param fixed). -1 means
- not freezing any parameters.
- bn_eval (bool): Whether to set BN layers as eval mode, namely, freeze
- running stats (mean and var).
- bn_frozen (bool): Whether to freeze weight and bias of BN layers.
- with_cp (bool): Use checkpoint or not. Using checkpoint will save some
- memory while slowing down the training speed.
- """
-
- arch_settings = {
- 18: (BasicBlock, (2, 2, 2, 2)),
- 34: (BasicBlock, (3, 4, 6, 3)),
- 50: (Bottleneck, (3, 4, 6, 3)),
- 101: (Bottleneck, (3, 4, 23, 3)),
- 152: (Bottleneck, (3, 8, 36, 3))
- }
-
- def __init__(self,
- depth,
- num_stages=4,
- strides=(1, 2, 2, 2),
- dilations=(1, 1, 1, 1),
- out_indices=(0, 1, 2, 3),
- style='pytorch',
- frozen_stages=-1,
- bn_eval=True,
- bn_frozen=False,
- with_cp=False):
- super(ResNet, self).__init__()
- if depth not in self.arch_settings:
- raise KeyError(f'invalid depth {depth} for resnet')
- assert num_stages >= 1 and num_stages <= 4
- block, stage_blocks = self.arch_settings[depth]
- stage_blocks = stage_blocks[:num_stages]
- assert len(strides) == len(dilations) == num_stages
- assert max(out_indices) < num_stages
-
- self.out_indices = out_indices
- self.style = style
- self.frozen_stages = frozen_stages
- self.bn_eval = bn_eval
- self.bn_frozen = bn_frozen
- self.with_cp = with_cp
-
- self.inplanes = 64
- self.conv1 = nn.Conv2d(
- 3, 64, kernel_size=7, stride=2, padding=3, bias=False)
- self.bn1 = nn.BatchNorm2d(64)
- self.relu = nn.ReLU(inplace=True)
- self.maxpool = nn.MaxPool2d(kernel_size=3, stride=2, padding=1)
-
- self.res_layers = []
- for i, num_blocks in enumerate(stage_blocks):
- stride = strides[i]
- dilation = dilations[i]
- planes = 64 * 2**i
- res_layer = make_res_layer(
- block,
- self.inplanes,
- planes,
- num_blocks,
- stride=stride,
- dilation=dilation,
- style=self.style,
- with_cp=with_cp)
- self.inplanes = planes * block.expansion
- layer_name = f'layer{i + 1}'
- self.add_module(layer_name, res_layer)
- self.res_layers.append(layer_name)
-
- self.feat_dim = block.expansion * 64 * 2**(len(stage_blocks) - 1)
-
- def init_weights(self, pretrained=None):
- if isinstance(pretrained, str):
- logger = logging.getLogger()
- from ..runner import load_checkpoint
- load_checkpoint(self, pretrained, strict=False, logger=logger)
- elif pretrained is None:
- for m in self.modules():
- if isinstance(m, nn.Conv2d):
- kaiming_init(m)
- elif isinstance(m, nn.BatchNorm2d):
- constant_init(m, 1)
- else:
- raise TypeError('pretrained must be a str or None')
-
- def forward(self, x):
- x = self.conv1(x)
- x = self.bn1(x)
- x = self.relu(x)
- x = self.maxpool(x)
- outs = []
- for i, layer_name in enumerate(self.res_layers):
- res_layer = getattr(self, layer_name)
- x = res_layer(x)
- if i in self.out_indices:
- outs.append(x)
- if len(outs) == 1:
- return outs[0]
- else:
- return tuple(outs)
-
- def train(self, mode=True):
- super(ResNet, self).train(mode)
- if self.bn_eval:
- for m in self.modules():
- if isinstance(m, nn.BatchNorm2d):
- m.eval()
- if self.bn_frozen:
- for params in m.parameters():
- params.requires_grad = False
- if mode and self.frozen_stages >= 0:
- for param in self.conv1.parameters():
- param.requires_grad = False
- for param in self.bn1.parameters():
- param.requires_grad = False
- self.bn1.eval()
- self.bn1.weight.requires_grad = False
- self.bn1.bias.requires_grad = False
- for i in range(1, self.frozen_stages + 1):
- mod = getattr(self, f'layer{i}')
- mod.eval()
- for param in mod.parameters():
- param.requires_grad = False
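A quick smoke test of this backbone as defined above; the expected shapes assume a 224x224 input, and the constructor arguments are illustrative:

```python
import torch

model = ResNet(depth=50, out_indices=(0, 1, 2, 3), frozen_stages=1)
model.init_weights()   # Kaiming init for convs, constant init for BN
model.train()          # bn_eval=True keeps BN layers in eval mode

x = torch.randn(1, 3, 224, 224)
for feat in model(x):  # four stages with strides 4, 8, 16, 32
    print(feat.shape)
# torch.Size([1, 256, 56, 56]) ... torch.Size([1, 2048, 7, 7])
```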
diff --git a/spaces/Robert001/UniControl-Demo/annotator/uniformer/mmcv/runner/hooks/lr_updater.py b/spaces/Robert001/UniControl-Demo/annotator/uniformer/mmcv/runner/hooks/lr_updater.py
deleted file mode 100644
index 6365908ddf6070086de2ffc0afada46ed2f32256..0000000000000000000000000000000000000000
--- a/spaces/Robert001/UniControl-Demo/annotator/uniformer/mmcv/runner/hooks/lr_updater.py
+++ /dev/null
@@ -1,670 +0,0 @@
-# Copyright (c) OpenMMLab. All rights reserved.
-import numbers
-from math import cos, pi
-
-import annotator.uniformer.mmcv as mmcv
-from .hook import HOOKS, Hook
-
-
-class LrUpdaterHook(Hook):
- """LR Scheduler in MMCV.
-
- Args:
- by_epoch (bool): LR changes epoch by epoch
-        warmup (string): Type of warmup used. It can be None (no warmup),
-            'constant', 'linear' or 'exp'.
-        warmup_iters (int): The number of iterations or epochs that warmup
-            lasts.
-        warmup_ratio (float): LR used at the beginning of warmup equals
-            warmup_ratio * initial_lr.
-        warmup_by_epoch (bool): When warmup_by_epoch == True, warmup_iters
-            means the number of epochs that warmup lasts; otherwise it means
-            the number of iterations that warmup lasts.
- """
-
- def __init__(self,
- by_epoch=True,
- warmup=None,
- warmup_iters=0,
- warmup_ratio=0.1,
- warmup_by_epoch=False):
- # validate the "warmup" argument
- if warmup is not None:
- if warmup not in ['constant', 'linear', 'exp']:
- raise ValueError(
-                    f'"{warmup}" is not a supported type for warming up, valid'
-                    ' types are "constant", "linear" and "exp"')
- if warmup is not None:
- assert warmup_iters > 0, \
- '"warmup_iters" must be a positive integer'
- assert 0 < warmup_ratio <= 1.0, \
- '"warmup_ratio" must be in range (0,1]'
-
- self.by_epoch = by_epoch
- self.warmup = warmup
- self.warmup_iters = warmup_iters
- self.warmup_ratio = warmup_ratio
- self.warmup_by_epoch = warmup_by_epoch
-
- if self.warmup_by_epoch:
- self.warmup_epochs = self.warmup_iters
- self.warmup_iters = None
- else:
- self.warmup_epochs = None
-
- self.base_lr = [] # initial lr for all param groups
- self.regular_lr = [] # expected lr if no warming up is performed
-
- def _set_lr(self, runner, lr_groups):
- if isinstance(runner.optimizer, dict):
- for k, optim in runner.optimizer.items():
- for param_group, lr in zip(optim.param_groups, lr_groups[k]):
- param_group['lr'] = lr
- else:
- for param_group, lr in zip(runner.optimizer.param_groups,
- lr_groups):
- param_group['lr'] = lr
-
- def get_lr(self, runner, base_lr):
- raise NotImplementedError
-
- def get_regular_lr(self, runner):
- if isinstance(runner.optimizer, dict):
- lr_groups = {}
- for k in runner.optimizer.keys():
- _lr_group = [
- self.get_lr(runner, _base_lr)
- for _base_lr in self.base_lr[k]
- ]
- lr_groups.update({k: _lr_group})
-
- return lr_groups
- else:
- return [self.get_lr(runner, _base_lr) for _base_lr in self.base_lr]
-
- def get_warmup_lr(self, cur_iters):
-
- def _get_warmup_lr(cur_iters, regular_lr):
- if self.warmup == 'constant':
- warmup_lr = [_lr * self.warmup_ratio for _lr in regular_lr]
- elif self.warmup == 'linear':
- k = (1 - cur_iters / self.warmup_iters) * (1 -
- self.warmup_ratio)
- warmup_lr = [_lr * (1 - k) for _lr in regular_lr]
- elif self.warmup == 'exp':
- k = self.warmup_ratio**(1 - cur_iters / self.warmup_iters)
- warmup_lr = [_lr * k for _lr in regular_lr]
- return warmup_lr
-
- if isinstance(self.regular_lr, dict):
- lr_groups = {}
- for key, regular_lr in self.regular_lr.items():
- lr_groups[key] = _get_warmup_lr(cur_iters, regular_lr)
- return lr_groups
- else:
- return _get_warmup_lr(cur_iters, self.regular_lr)
-
- def before_run(self, runner):
- # NOTE: when resuming from a checkpoint, if 'initial_lr' is not saved,
- # it will be set according to the optimizer params
- if isinstance(runner.optimizer, dict):
- self.base_lr = {}
- for k, optim in runner.optimizer.items():
- for group in optim.param_groups:
- group.setdefault('initial_lr', group['lr'])
- _base_lr = [
- group['initial_lr'] for group in optim.param_groups
- ]
- self.base_lr.update({k: _base_lr})
- else:
- for group in runner.optimizer.param_groups:
- group.setdefault('initial_lr', group['lr'])
- self.base_lr = [
- group['initial_lr'] for group in runner.optimizer.param_groups
- ]
-
- def before_train_epoch(self, runner):
- if self.warmup_iters is None:
- epoch_len = len(runner.data_loader)
- self.warmup_iters = self.warmup_epochs * epoch_len
-
- if not self.by_epoch:
- return
-
- self.regular_lr = self.get_regular_lr(runner)
- self._set_lr(runner, self.regular_lr)
-
- def before_train_iter(self, runner):
- cur_iter = runner.iter
- if not self.by_epoch:
- self.regular_lr = self.get_regular_lr(runner)
- if self.warmup is None or cur_iter >= self.warmup_iters:
- self._set_lr(runner, self.regular_lr)
- else:
- warmup_lr = self.get_warmup_lr(cur_iter)
- self._set_lr(runner, warmup_lr)
- elif self.by_epoch:
- if self.warmup is None or cur_iter > self.warmup_iters:
- return
- elif cur_iter == self.warmup_iters:
- self._set_lr(runner, self.regular_lr)
- else:
- warmup_lr = self.get_warmup_lr(cur_iter)
- self._set_lr(runner, warmup_lr)
-
-
-@HOOKS.register_module()
-class FixedLrUpdaterHook(LrUpdaterHook):
-
- def __init__(self, **kwargs):
- super(FixedLrUpdaterHook, self).__init__(**kwargs)
-
- def get_lr(self, runner, base_lr):
- return base_lr
-
-
-@HOOKS.register_module()
-class StepLrUpdaterHook(LrUpdaterHook):
- """Step LR scheduler with min_lr clipping.
-
- Args:
- step (int | list[int]): Step to decay the LR. If an int value is given,
- regard it as the decay interval. If a list is given, decay LR at
- these steps.
- gamma (float, optional): Decay LR ratio. Default: 0.1.
- min_lr (float, optional): Minimum LR value to keep. If LR after decay
- is lower than `min_lr`, it will be clipped to this value. If None
- is given, we don't perform lr clipping. Default: None.
- """
-
- def __init__(self, step, gamma=0.1, min_lr=None, **kwargs):
- if isinstance(step, list):
- assert mmcv.is_list_of(step, int)
- assert all([s > 0 for s in step])
- elif isinstance(step, int):
- assert step > 0
- else:
- raise TypeError('"step" must be a list or integer')
- self.step = step
- self.gamma = gamma
- self.min_lr = min_lr
- super(StepLrUpdaterHook, self).__init__(**kwargs)
-
- def get_lr(self, runner, base_lr):
- progress = runner.epoch if self.by_epoch else runner.iter
-
- # calculate exponential term
- if isinstance(self.step, int):
- exp = progress // self.step
- else:
- exp = len(self.step)
- for i, s in enumerate(self.step):
- if progress < s:
- exp = i
- break
-
- lr = base_lr * (self.gamma**exp)
- if self.min_lr is not None:
- # clip to a minimum value
- lr = max(lr, self.min_lr)
- return lr
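Concretely, with hypothetical `step=[8, 11]` and the default `gamma=0.1`, the epoch-based decay produced by `get_lr` above works out to:

```python
hook = StepLrUpdaterHook(step=[8, 11], gamma=0.1)
# epochs 0-7:  progress < 8        -> exp = 0 -> lr = base_lr
# epochs 8-10: 8 <= progress < 11  -> exp = 1 -> lr = base_lr * 0.1
# epochs 11+:  progress >= 11      -> exp = 2 -> lr = base_lr * 0.01
```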
-
-
-@HOOKS.register_module()
-class ExpLrUpdaterHook(LrUpdaterHook):
-
- def __init__(self, gamma, **kwargs):
- self.gamma = gamma
- super(ExpLrUpdaterHook, self).__init__(**kwargs)
-
- def get_lr(self, runner, base_lr):
- progress = runner.epoch if self.by_epoch else runner.iter
- return base_lr * self.gamma**progress
-
-
-@HOOKS.register_module()
-class PolyLrUpdaterHook(LrUpdaterHook):
-
- def __init__(self, power=1., min_lr=0., **kwargs):
- self.power = power
- self.min_lr = min_lr
- super(PolyLrUpdaterHook, self).__init__(**kwargs)
-
- def get_lr(self, runner, base_lr):
- if self.by_epoch:
- progress = runner.epoch
- max_progress = runner.max_epochs
- else:
- progress = runner.iter
- max_progress = runner.max_iters
- coeff = (1 - progress / max_progress)**self.power
- return (base_lr - self.min_lr) * coeff + self.min_lr
-
-
-@HOOKS.register_module()
-class InvLrUpdaterHook(LrUpdaterHook):
-
- def __init__(self, gamma, power=1., **kwargs):
- self.gamma = gamma
- self.power = power
- super(InvLrUpdaterHook, self).__init__(**kwargs)
-
- def get_lr(self, runner, base_lr):
- progress = runner.epoch if self.by_epoch else runner.iter
- return base_lr * (1 + self.gamma * progress)**(-self.power)
-
-
-@HOOKS.register_module()
-class CosineAnnealingLrUpdaterHook(LrUpdaterHook):
-
- def __init__(self, min_lr=None, min_lr_ratio=None, **kwargs):
- assert (min_lr is None) ^ (min_lr_ratio is None)
- self.min_lr = min_lr
- self.min_lr_ratio = min_lr_ratio
- super(CosineAnnealingLrUpdaterHook, self).__init__(**kwargs)
-
- def get_lr(self, runner, base_lr):
- if self.by_epoch:
- progress = runner.epoch
- max_progress = runner.max_epochs
- else:
- progress = runner.iter
- max_progress = runner.max_iters
-
- if self.min_lr_ratio is not None:
- target_lr = base_lr * self.min_lr_ratio
- else:
- target_lr = self.min_lr
- return annealing_cos(base_lr, target_lr, progress / max_progress)
-
-
-@HOOKS.register_module()
-class FlatCosineAnnealingLrUpdaterHook(LrUpdaterHook):
- """Flat + Cosine lr schedule.
-
- Modified from https://github.com/fastai/fastai/blob/master/fastai/callback/schedule.py#L128 # noqa: E501
-
- Args:
- start_percent (float): When to start annealing the learning rate
- after the percentage of the total training steps.
- The value should be in range [0, 1).
- Default: 0.75
- min_lr (float, optional): The minimum lr. Default: None.
- min_lr_ratio (float, optional): The ratio of minimum lr to the base lr.
- Either `min_lr` or `min_lr_ratio` should be specified.
- Default: None.
- """
-
- def __init__(self,
- start_percent=0.75,
- min_lr=None,
- min_lr_ratio=None,
- **kwargs):
- assert (min_lr is None) ^ (min_lr_ratio is None)
- if start_percent < 0 or start_percent > 1 or not isinstance(
- start_percent, float):
- raise ValueError(
-                'expected a float between 0 and 1 for start_percent, but '
- f'got {start_percent}')
- self.start_percent = start_percent
- self.min_lr = min_lr
- self.min_lr_ratio = min_lr_ratio
- super(FlatCosineAnnealingLrUpdaterHook, self).__init__(**kwargs)
-
- def get_lr(self, runner, base_lr):
- if self.by_epoch:
- start = round(runner.max_epochs * self.start_percent)
- progress = runner.epoch - start
- max_progress = runner.max_epochs - start
- else:
- start = round(runner.max_iters * self.start_percent)
- progress = runner.iter - start
- max_progress = runner.max_iters - start
-
- if self.min_lr_ratio is not None:
- target_lr = base_lr * self.min_lr_ratio
- else:
- target_lr = self.min_lr
-
- if progress < 0:
- return base_lr
- else:
- return annealing_cos(base_lr, target_lr, progress / max_progress)
-
-
-@HOOKS.register_module()
-class CosineRestartLrUpdaterHook(LrUpdaterHook):
- """Cosine annealing with restarts learning rate scheme.
-
- Args:
-        periods (list[int]): Periods for each cosine annealing cycle.
- restart_weights (list[float], optional): Restart weights at each
- restart iteration. Default: [1].
- min_lr (float, optional): The minimum lr. Default: None.
- min_lr_ratio (float, optional): The ratio of minimum lr to the base lr.
- Either `min_lr` or `min_lr_ratio` should be specified.
- Default: None.
- """
-
- def __init__(self,
- periods,
- restart_weights=[1],
- min_lr=None,
- min_lr_ratio=None,
- **kwargs):
- assert (min_lr is None) ^ (min_lr_ratio is None)
- self.periods = periods
- self.min_lr = min_lr
- self.min_lr_ratio = min_lr_ratio
- self.restart_weights = restart_weights
- assert (len(self.periods) == len(self.restart_weights)
- ), 'periods and restart_weights should have the same length.'
- super(CosineRestartLrUpdaterHook, self).__init__(**kwargs)
-
- self.cumulative_periods = [
- sum(self.periods[0:i + 1]) for i in range(0, len(self.periods))
- ]
-
- def get_lr(self, runner, base_lr):
- if self.by_epoch:
- progress = runner.epoch
- else:
- progress = runner.iter
-
- if self.min_lr_ratio is not None:
- target_lr = base_lr * self.min_lr_ratio
- else:
- target_lr = self.min_lr
-
- idx = get_position_from_periods(progress, self.cumulative_periods)
- current_weight = self.restart_weights[idx]
- nearest_restart = 0 if idx == 0 else self.cumulative_periods[idx - 1]
- current_periods = self.periods[idx]
-
- alpha = min((progress - nearest_restart) / current_periods, 1)
- return annealing_cos(base_lr, target_lr, alpha, current_weight)
-
-
-def get_position_from_periods(iteration, cumulative_periods):
- """Get the position from a period list.
-
- It will return the index of the right-closest number in the period list.
- For example, the cumulative_periods = [100, 200, 300, 400],
- if iteration == 50, return 0;
- if iteration == 210, return 2;
- if iteration == 300, return 3.
-
- Args:
- iteration (int): Current iteration.
- cumulative_periods (list[int]): Cumulative period list.
-
- Returns:
- int: The position of the right-closest number in the period list.
- """
- for i, period in enumerate(cumulative_periods):
- if iteration < period:
- return i
- raise ValueError(f'Current iteration {iteration} exceeds '
- f'cumulative_periods {cumulative_periods}')
-
-
-@HOOKS.register_module()
-class CyclicLrUpdaterHook(LrUpdaterHook):
- """Cyclic LR Scheduler.
-
- Implement the cyclical learning rate policy (CLR) described in
- https://arxiv.org/pdf/1506.01186.pdf
-
- Different from the original paper, we use cosine annealing rather than
- triangular policy inside a cycle. This improves the performance in the
- 3D detection area.
-
- Args:
- by_epoch (bool): Whether to update LR by epoch.
- target_ratio (tuple[float]): Relative ratio of the highest LR and the
- lowest LR to the initial LR.
- cyclic_times (int): Number of cycles during training
- step_ratio_up (float): The ratio of the increasing process of LR in
- the total cycle.
- anneal_strategy (str): {'cos', 'linear'}
- Specifies the annealing strategy: 'cos' for cosine annealing,
- 'linear' for linear annealing. Default: 'cos'.
- """
-
- def __init__(self,
- by_epoch=False,
- target_ratio=(10, 1e-4),
- cyclic_times=1,
- step_ratio_up=0.4,
- anneal_strategy='cos',
- **kwargs):
- if isinstance(target_ratio, float):
- target_ratio = (target_ratio, target_ratio / 1e5)
- elif isinstance(target_ratio, tuple):
- target_ratio = (target_ratio[0], target_ratio[0] / 1e5) \
- if len(target_ratio) == 1 else target_ratio
- else:
- raise ValueError('target_ratio should be either float '
- f'or tuple, got {type(target_ratio)}')
-
- assert len(target_ratio) == 2, \
- '"target_ratio" must be list or tuple of two floats'
- assert 0 <= step_ratio_up < 1.0, \
- '"step_ratio_up" must be in range [0,1)'
-
- self.target_ratio = target_ratio
- self.cyclic_times = cyclic_times
- self.step_ratio_up = step_ratio_up
- self.lr_phases = [] # init lr_phases
- # validate anneal_strategy
- if anneal_strategy not in ['cos', 'linear']:
- raise ValueError('anneal_strategy must be one of "cos" or '
- f'"linear", instead got {anneal_strategy}')
- elif anneal_strategy == 'cos':
- self.anneal_func = annealing_cos
- elif anneal_strategy == 'linear':
- self.anneal_func = annealing_linear
-
- assert not by_epoch, \
- 'currently only support "by_epoch" = False'
- super(CyclicLrUpdaterHook, self).__init__(by_epoch, **kwargs)
-
- def before_run(self, runner):
- super(CyclicLrUpdaterHook, self).before_run(runner)
- # initiate lr_phases
- # total lr_phases are separated as up and down
- max_iter_per_phase = runner.max_iters // self.cyclic_times
- iter_up_phase = int(self.step_ratio_up * max_iter_per_phase)
- self.lr_phases.append(
- [0, iter_up_phase, max_iter_per_phase, 1, self.target_ratio[0]])
- self.lr_phases.append([
- iter_up_phase, max_iter_per_phase, max_iter_per_phase,
- self.target_ratio[0], self.target_ratio[1]
- ])
-
- def get_lr(self, runner, base_lr):
- curr_iter = runner.iter
- for (start_iter, end_iter, max_iter_per_phase, start_ratio,
- end_ratio) in self.lr_phases:
- curr_iter %= max_iter_per_phase
- if start_iter <= curr_iter < end_iter:
- progress = curr_iter - start_iter
- return self.anneal_func(base_lr * start_ratio,
- base_lr * end_ratio,
- progress / (end_iter - start_iter))
-
-
-@HOOKS.register_module()
-class OneCycleLrUpdaterHook(LrUpdaterHook):
- """One Cycle LR Scheduler.
-
- The 1cycle learning rate policy changes the learning rate after every
- batch. The one cycle learning rate policy is described in
- https://arxiv.org/pdf/1708.07120.pdf
-
- Args:
- max_lr (float or list): Upper learning rate boundaries in the cycle
- for each parameter group.
- total_steps (int, optional): The total number of steps in the cycle.
-            Note that if a value is not provided here, it will be the
-            max_iters of the runner. Default: None.
- pct_start (float): The percentage of the cycle (in number of steps)
- spent increasing the learning rate.
- Default: 0.3
- anneal_strategy (str): {'cos', 'linear'}
- Specifies the annealing strategy: 'cos' for cosine annealing,
- 'linear' for linear annealing.
- Default: 'cos'
- div_factor (float): Determines the initial learning rate via
- initial_lr = max_lr/div_factor
- Default: 25
- final_div_factor (float): Determines the minimum learning rate via
- min_lr = initial_lr/final_div_factor
- Default: 1e4
- three_phase (bool): If three_phase is True, use a third phase of the
- schedule to annihilate the learning rate according to
- final_div_factor instead of modifying the second phase (the first
- two phases will be symmetrical about the step indicated by
- pct_start).
- Default: False
- """
-
- def __init__(self,
- max_lr,
- total_steps=None,
- pct_start=0.3,
- anneal_strategy='cos',
- div_factor=25,
- final_div_factor=1e4,
- three_phase=False,
- **kwargs):
- # validate by_epoch, currently only support by_epoch = False
- if 'by_epoch' not in kwargs:
- kwargs['by_epoch'] = False
- else:
- assert not kwargs['by_epoch'], \
- 'currently only support "by_epoch" = False'
- if not isinstance(max_lr, (numbers.Number, list, dict)):
-            raise ValueError('the type of max_lr must be number, list or '
-                             f'dict, but got {type(max_lr)}')
- self._max_lr = max_lr
- if total_steps is not None:
- if not isinstance(total_steps, int):
- raise ValueError('the type of total_steps must be int, but'
- f'got {type(total_steps)}')
- self.total_steps = total_steps
- # validate pct_start
- if pct_start < 0 or pct_start > 1 or not isinstance(pct_start, float):
-            raise ValueError('expected a float between 0 and 1 for pct_start, but '
- f'got {pct_start}')
- self.pct_start = pct_start
- # validate anneal_strategy
- if anneal_strategy not in ['cos', 'linear']:
- raise ValueError('anneal_strategy must be one of "cos" or '
- f'"linear", instead got {anneal_strategy}')
- elif anneal_strategy == 'cos':
- self.anneal_func = annealing_cos
- elif anneal_strategy == 'linear':
- self.anneal_func = annealing_linear
- self.div_factor = div_factor
- self.final_div_factor = final_div_factor
- self.three_phase = three_phase
- self.lr_phases = [] # init lr_phases
- super(OneCycleLrUpdaterHook, self).__init__(**kwargs)
-
- def before_run(self, runner):
- if hasattr(self, 'total_steps'):
- total_steps = self.total_steps
- else:
- total_steps = runner.max_iters
- if total_steps < runner.max_iters:
- raise ValueError(
- 'The total steps must be greater than or equal to max '
- f'iterations {runner.max_iters} of runner, but total steps '
- f'is {total_steps}.')
-
- if isinstance(runner.optimizer, dict):
- self.base_lr = {}
- for k, optim in runner.optimizer.items():
- _max_lr = format_param(k, optim, self._max_lr)
- self.base_lr[k] = [lr / self.div_factor for lr in _max_lr]
- for group, lr in zip(optim.param_groups, self.base_lr[k]):
- group.setdefault('initial_lr', lr)
- else:
- k = type(runner.optimizer).__name__
- _max_lr = format_param(k, runner.optimizer, self._max_lr)
- self.base_lr = [lr / self.div_factor for lr in _max_lr]
- for group, lr in zip(runner.optimizer.param_groups, self.base_lr):
- group.setdefault('initial_lr', lr)
-
- if self.three_phase:
- self.lr_phases.append(
- [float(self.pct_start * total_steps) - 1, 1, self.div_factor])
- self.lr_phases.append([
- float(2 * self.pct_start * total_steps) - 2, self.div_factor, 1
- ])
- self.lr_phases.append(
- [total_steps - 1, 1, 1 / self.final_div_factor])
- else:
- self.lr_phases.append(
- [float(self.pct_start * total_steps) - 1, 1, self.div_factor])
- self.lr_phases.append(
- [total_steps - 1, self.div_factor, 1 / self.final_div_factor])
-
- def get_lr(self, runner, base_lr):
- curr_iter = runner.iter
- start_iter = 0
- for i, (end_iter, start_lr, end_lr) in enumerate(self.lr_phases):
- if curr_iter <= end_iter:
- pct = (curr_iter - start_iter) / (end_iter - start_iter)
- lr = self.anneal_func(base_lr * start_lr, base_lr * end_lr,
- pct)
- break
- start_iter = end_iter
- return lr
-
-
-def annealing_cos(start, end, factor, weight=1):
- """Calculate annealing cos learning rate.
-
- Cosine anneal from `weight * start + (1 - weight) * end` to `end` as
- percentage goes from 0.0 to 1.0.
-
- Args:
- start (float): The starting learning rate of the cosine annealing.
-        end (float): The ending learning rate of the cosine annealing.
- factor (float): The coefficient of `pi` when calculating the current
- percentage. Range from 0.0 to 1.0.
- weight (float, optional): The combination factor of `start` and `end`
- when calculating the actual starting learning rate. Default to 1.
- """
- cos_out = cos(pi * factor) + 1
- return end + 0.5 * weight * (start - end) * cos_out
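A worked value for orientation: halfway through annealing (`factor=0.5`) from `start=0.1` to `end=0.0`, `cos(pi * 0.5) + 1` is numerically 1.0, so the result is `0.0 + 0.5 * 1 * 0.1 * 1.0 = 0.05`:

```python
print(annealing_cos(0.1, 0.0, 0.5))  # ~0.05
```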
-
-
-def annealing_linear(start, end, factor):
- """Calculate annealing linear learning rate.
-
- Linear anneal from `start` to `end` as percentage goes from 0.0 to 1.0.
-
- Args:
- start (float): The starting learning rate of the linear annealing.
-        end (float): The ending learning rate of the linear annealing.
-        factor (float): The percentage of annealing progress so far.
-            Range from 0.0 to 1.0.
- """
- return start + (end - start) * factor
-
-
-def format_param(name, optim, param):
- if isinstance(param, numbers.Number):
- return [param] * len(optim.param_groups)
- elif isinstance(param, (list, tuple)): # multi param groups
- if len(param) != len(optim.param_groups):
- raise ValueError(f'expected {len(optim.param_groups)} '
- f'values for {name}, got {len(param)}')
- return param
- else: # multi optimizers
- if name not in param:
- raise KeyError(f'{name} is not found in {param.keys()}')
- return param[name]
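For reference, these hooks are usually selected declaratively rather than instantiated by hand; a typical mmcv/mmdet-style `lr_config` (values here are illustrative) is resolved to `StepLrUpdaterHook` through the `HOOKS` registry:

```python
lr_config = dict(
    policy='step',        # -> StepLrUpdaterHook via the HOOKS registry
    warmup='linear',      # linear warmup over the first 500 iterations
    warmup_iters=500,
    warmup_ratio=0.001,
    step=[8, 11])         # decay lr by gamma=0.1 at epochs 8 and 11
```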
diff --git a/spaces/Robert001/UniControl-Demo/annotator/uniformer/mmdet/datasets/dataset_wrappers.py b/spaces/Robert001/UniControl-Demo/annotator/uniformer/mmdet/datasets/dataset_wrappers.py
deleted file mode 100644
index 55ad5cb60e581a96bdbd1fbbeebc2f46f8c4e899..0000000000000000000000000000000000000000
--- a/spaces/Robert001/UniControl-Demo/annotator/uniformer/mmdet/datasets/dataset_wrappers.py
+++ /dev/null
@@ -1,282 +0,0 @@
-import bisect
-import math
-from collections import defaultdict
-
-import numpy as np
-from mmcv.utils import print_log
-from torch.utils.data.dataset import ConcatDataset as _ConcatDataset
-
-from .builder import DATASETS
-from .coco import CocoDataset
-
-
-@DATASETS.register_module()
-class ConcatDataset(_ConcatDataset):
- """A wrapper of concatenated dataset.
-
-    Same as :obj:`torch.utils.data.dataset.ConcatDataset`, but
-    also concatenates the group flag used for image aspect-ratio grouping.
-
- Args:
- datasets (list[:obj:`Dataset`]): A list of datasets.
- separate_eval (bool): Whether to evaluate the results
- separately if it is used as validation dataset.
- Defaults to True.
- """
-
- def __init__(self, datasets, separate_eval=True):
- super(ConcatDataset, self).__init__(datasets)
- self.CLASSES = datasets[0].CLASSES
- self.separate_eval = separate_eval
- if not separate_eval:
- if any([isinstance(ds, CocoDataset) for ds in datasets]):
- raise NotImplementedError(
- 'Evaluating concatenated CocoDataset as a whole is not'
- ' supported! Please set "separate_eval=True"')
- elif len(set([type(ds) for ds in datasets])) != 1:
- raise NotImplementedError(
- 'All the datasets should have same types')
-
- if hasattr(datasets[0], 'flag'):
- flags = []
- for i in range(0, len(datasets)):
- flags.append(datasets[i].flag)
- self.flag = np.concatenate(flags)
-
- def get_cat_ids(self, idx):
- """Get category ids of concatenated dataset by index.
-
- Args:
- idx (int): Index of data.
-
- Returns:
- list[int]: All categories in the image of specified index.
- """
-
- if idx < 0:
- if -idx > len(self):
- raise ValueError(
- 'absolute value of index should not exceed dataset length')
- idx = len(self) + idx
- dataset_idx = bisect.bisect_right(self.cumulative_sizes, idx)
- if dataset_idx == 0:
- sample_idx = idx
- else:
- sample_idx = idx - self.cumulative_sizes[dataset_idx - 1]
- return self.datasets[dataset_idx].get_cat_ids(sample_idx)
-
- def evaluate(self, results, logger=None, **kwargs):
- """Evaluate the results.
-
- Args:
- results (list[list | tuple]): Testing results of the dataset.
- logger (logging.Logger | str | None): Logger used for printing
- related information during evaluation. Default: None.
-
- Returns:
- dict[str: float]: AP results of the total dataset or each separate
- dataset if `self.separate_eval=True`.
- """
- assert len(results) == self.cumulative_sizes[-1], \
- ('Dataset and results have different sizes: '
-             f'{self.cumulative_sizes[-1]} vs. {len(results)}')
-
- # Check whether all the datasets support evaluation
- for dataset in self.datasets:
- assert hasattr(dataset, 'evaluate'), \
- f'{type(dataset)} does not implement evaluate function'
-
- if self.separate_eval:
- dataset_idx = -1
- total_eval_results = dict()
- for size, dataset in zip(self.cumulative_sizes, self.datasets):
- start_idx = 0 if dataset_idx == -1 else \
- self.cumulative_sizes[dataset_idx]
- end_idx = self.cumulative_sizes[dataset_idx + 1]
-
- results_per_dataset = results[start_idx:end_idx]
- print_log(
-                f'\nEvaluating {dataset.ann_file} with '
- f'{len(results_per_dataset)} images now',
- logger=logger)
-
- eval_results_per_dataset = dataset.evaluate(
- results_per_dataset, logger=logger, **kwargs)
- dataset_idx += 1
- for k, v in eval_results_per_dataset.items():
- total_eval_results.update({f'{dataset_idx}_{k}': v})
-
- return total_eval_results
- elif any([isinstance(ds, CocoDataset) for ds in self.datasets]):
- raise NotImplementedError(
- 'Evaluating concatenated CocoDataset as a whole is not'
- ' supported! Please set "separate_eval=True"')
- elif len(set([type(ds) for ds in self.datasets])) != 1:
- raise NotImplementedError(
- 'All the datasets should have same types')
- else:
- original_data_infos = self.datasets[0].data_infos
- self.datasets[0].data_infos = sum(
- [dataset.data_infos for dataset in self.datasets], [])
- eval_results = self.datasets[0].evaluate(
- results, logger=logger, **kwargs)
- self.datasets[0].data_infos = original_data_infos
- return eval_results
-
-
-@DATASETS.register_module()
-class RepeatDataset(object):
- """A wrapper of repeated dataset.
-
-    The length of the repeated dataset will be `times` times that of the
-    original dataset. This is useful when data loading is slow but the dataset
- is small. Using RepeatDataset can reduce the data loading time between
- epochs.
-
- Args:
- dataset (:obj:`Dataset`): The dataset to be repeated.
- times (int): Repeat times.
- """
-
- def __init__(self, dataset, times):
- self.dataset = dataset
- self.times = times
- self.CLASSES = dataset.CLASSES
- if hasattr(self.dataset, 'flag'):
- self.flag = np.tile(self.dataset.flag, times)
-
- self._ori_len = len(self.dataset)
-
- def __getitem__(self, idx):
- return self.dataset[idx % self._ori_len]
-
- def get_cat_ids(self, idx):
- """Get category ids of repeat dataset by index.
-
- Args:
- idx (int): Index of data.
-
- Returns:
- list[int]: All categories in the image of specified index.
- """
-
- return self.dataset.get_cat_ids(idx % self._ori_len)
-
- def __len__(self):
- """Length after repetition."""
- return self.times * self._ori_len
-
-
-# Modified from https://github.com/facebookresearch/detectron2/blob/41d475b75a230221e21d9cac5d69655e3415e3a4/detectron2/data/samplers/distributed_sampler.py#L57 # noqa
-@DATASETS.register_module()
-class ClassBalancedDataset(object):
- """A wrapper of repeated dataset with repeat factor.
-
- Suitable for training on class imbalanced datasets like LVIS. Following
-    the sampling strategy in the `paper <https://arxiv.org/abs/1908.03195>`_,
- in each epoch, an image may appear multiple times based on its
- "repeat factor".
- The repeat factor for an image is a function of the frequency the rarest
- category labeled in that image. The "frequency of category c" in [0, 1]
- is defined by the fraction of images in the training set (without repeats)
- in which category c appears.
- The dataset needs to instantiate :func:`self.get_cat_ids` to support
- ClassBalancedDataset.
-
- The repeat factor is computed as followed.
-
-    1. For each category c, compute the fraction of images
-        that contain it: :math:`f(c)`
- 2. For each category c, compute the category-level repeat factor:
- :math:`r(c) = max(1, sqrt(t/f(c)))`
- 3. For each image I, compute the image-level repeat factor:
- :math:`r(I) = max_{c in I} r(c)`
-
- Args:
- dataset (:obj:`CustomDataset`): The dataset to be repeated.
- oversample_thr (float): frequency threshold below which data is
- repeated. For categories with ``f_c >= oversample_thr``, there is
- no oversampling. For categories with ``f_c < oversample_thr``, the
-        degree of oversampling follows the square-root inverse frequency
-        heuristic above.
- filter_empty_gt (bool, optional): If set true, images without bounding
- boxes will not be oversampled. Otherwise, they will be categorized
- as the pure background class and involved into the oversampling.
- Default: True.
- """
-
- def __init__(self, dataset, oversample_thr, filter_empty_gt=True):
- self.dataset = dataset
- self.oversample_thr = oversample_thr
- self.filter_empty_gt = filter_empty_gt
- self.CLASSES = dataset.CLASSES
-
- repeat_factors = self._get_repeat_factors(dataset, oversample_thr)
- repeat_indices = []
- for dataset_idx, repeat_factor in enumerate(repeat_factors):
- repeat_indices.extend([dataset_idx] * math.ceil(repeat_factor))
- self.repeat_indices = repeat_indices
-
- flags = []
- if hasattr(self.dataset, 'flag'):
- for flag, repeat_factor in zip(self.dataset.flag, repeat_factors):
- flags.extend([flag] * int(math.ceil(repeat_factor)))
- assert len(flags) == len(repeat_indices)
- self.flag = np.asarray(flags, dtype=np.uint8)
-
- def _get_repeat_factors(self, dataset, repeat_thr):
-        """Get repeat factor for each image in the dataset.
-
- Args:
- dataset (:obj:`CustomDataset`): The dataset
- repeat_thr (float): The threshold of frequency. If an image
-                contains the categories whose frequency is below the threshold,
-                it will be repeated.
-
- Returns:
-            list[float]: The repeat factors for each image in the dataset.
- """
-
-        # 1. For each category c, compute the fraction of images
-        #    that contain it: f(c)
- category_freq = defaultdict(int)
- num_images = len(dataset)
- for idx in range(num_images):
- cat_ids = set(self.dataset.get_cat_ids(idx))
- if len(cat_ids) == 0 and not self.filter_empty_gt:
- cat_ids = set([len(self.CLASSES)])
- for cat_id in cat_ids:
- category_freq[cat_id] += 1
- for k, v in category_freq.items():
- category_freq[k] = v / num_images
-
- # 2. For each category c, compute the category-level repeat factor:
- # r(c) = max(1, sqrt(t/f(c)))
- category_repeat = {
- cat_id: max(1.0, math.sqrt(repeat_thr / cat_freq))
- for cat_id, cat_freq in category_freq.items()
- }
-
- # 3. For each image I, compute the image-level repeat factor:
- # r(I) = max_{c in I} r(c)
- repeat_factors = []
- for idx in range(num_images):
- cat_ids = set(self.dataset.get_cat_ids(idx))
- if len(cat_ids) == 0 and not self.filter_empty_gt:
- cat_ids = set([len(self.CLASSES)])
- repeat_factor = 1
- if len(cat_ids) > 0:
- repeat_factor = max(
- {category_repeat[cat_id]
- for cat_id in cat_ids})
- repeat_factors.append(repeat_factor)
-
- return repeat_factors
-
- def __getitem__(self, idx):
- ori_index = self.repeat_indices[idx]
- return self.dataset[ori_index]
-
- def __len__(self):
- """Length after repetition."""
- return len(self.repeat_indices)
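To make the repeat-factor arithmetic concrete (the frequencies and threshold below are hypothetical), with `oversample_thr = 0.01`:

```python
import math

t = 0.01
f_rare, f_common = 0.001, 0.5                  # category frequencies f(c)
r_rare = max(1.0, math.sqrt(t / f_rare))       # sqrt(10) ~ 3.16
r_common = max(1.0, math.sqrt(t / f_common))   # max(1, 0.14) = 1.0
r_image = max(r_rare, r_common)                # image contains both categories
print(math.ceil(r_image))                      # 4 copies of the image per epoch
```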
diff --git a/spaces/Robert001/UniControl-Demo/annotator/uniformer/mmdet/models/backbones/swin_transformer.py b/spaces/Robert001/UniControl-Demo/annotator/uniformer/mmdet/models/backbones/swin_transformer.py
deleted file mode 100644
index bb41850d8480a08a6a7698bf6129ffd1ab239681..0000000000000000000000000000000000000000
--- a/spaces/Robert001/UniControl-Demo/annotator/uniformer/mmdet/models/backbones/swin_transformer.py
+++ /dev/null
@@ -1,630 +0,0 @@
-# --------------------------------------------------------
-# Swin Transformer
-# Copyright (c) 2021 Microsoft
-# Licensed under The MIT License [see LICENSE for details]
-# Written by Ze Liu, Yutong Lin, Yixuan Wei
-# --------------------------------------------------------
-
-import torch
-import torch.nn as nn
-import torch.nn.functional as F
-import torch.utils.checkpoint as checkpoint
-import numpy as np
-from timm.models.layers import DropPath, to_2tuple, trunc_normal_
-
-from mmcv_custom import load_checkpoint
-from mmdet.utils import get_root_logger
-from ..builder import BACKBONES
-
-
-class Mlp(nn.Module):
- """ Multilayer perceptron."""
-
- def __init__(self, in_features, hidden_features=None, out_features=None, act_layer=nn.GELU, drop=0.):
- super().__init__()
- out_features = out_features or in_features
- hidden_features = hidden_features or in_features
- self.fc1 = nn.Linear(in_features, hidden_features)
- self.act = act_layer()
- self.fc2 = nn.Linear(hidden_features, out_features)
- self.drop = nn.Dropout(drop)
-
- def forward(self, x):
- x = self.fc1(x)
- x = self.act(x)
- x = self.drop(x)
- x = self.fc2(x)
- x = self.drop(x)
- return x
-
-
-def window_partition(x, window_size):
- """
- Args:
- x: (B, H, W, C)
- window_size (int): window size
-
- Returns:
- windows: (num_windows*B, window_size, window_size, C)
- """
- B, H, W, C = x.shape
- x = x.view(B, H // window_size, window_size, W // window_size, window_size, C)
- windows = x.permute(0, 1, 3, 2, 4, 5).contiguous().view(-1, window_size, window_size, C)
- return windows
-
-
-def window_reverse(windows, window_size, H, W):
- """
- Args:
- windows: (num_windows*B, window_size, window_size, C)
- window_size (int): Window size
- H (int): Height of image
- W (int): Width of image
-
- Returns:
- x: (B, H, W, C)
- """
- B = int(windows.shape[0] / (H * W / window_size / window_size))
- x = windows.view(B, H // window_size, W // window_size, window_size, window_size, -1)
- x = x.permute(0, 1, 3, 2, 4, 5).contiguous().view(B, H, W, -1)
- return x
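A quick sanity check of these two helpers (a sketch with arbitrary tensor sizes, assuming the functions above are in scope):

```python
import torch

x = torch.randn(2, 14, 14, 96)  # (B, H, W, C), H and W divisible by window_size
windows = window_partition(x, window_size=7)
print(windows.shape)  # torch.Size([8, 7, 7, 96]): 2 images * 2*2 windows each
x_back = window_reverse(windows, window_size=7, H=14, W=14)
assert torch.equal(x, x_back)  # partition and reverse are exact inverses
```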
-
-
-class WindowAttention(nn.Module):
- """ Window based multi-head self attention (W-MSA) module with relative position bias.
- It supports both of shifted and non-shifted window.
-
- Args:
- dim (int): Number of input channels.
- window_size (tuple[int]): The height and width of the window.
- num_heads (int): Number of attention heads.
- qkv_bias (bool, optional): If True, add a learnable bias to query, key, value. Default: True
- qk_scale (float | None, optional): Override default qk scale of head_dim ** -0.5 if set
- attn_drop (float, optional): Dropout ratio of attention weight. Default: 0.0
- proj_drop (float, optional): Dropout ratio of output. Default: 0.0
- """
-
- def __init__(self, dim, window_size, num_heads, qkv_bias=True, qk_scale=None, attn_drop=0., proj_drop=0.):
-
- super().__init__()
- self.dim = dim
- self.window_size = window_size # Wh, Ww
- self.num_heads = num_heads
- head_dim = dim // num_heads
- self.scale = qk_scale or head_dim ** -0.5
-
- # define a parameter table of relative position bias
- self.relative_position_bias_table = nn.Parameter(
- torch.zeros((2 * window_size[0] - 1) * (2 * window_size[1] - 1), num_heads)) # 2*Wh-1 * 2*Ww-1, nH
-
- # get pair-wise relative position index for each token inside the window
- coords_h = torch.arange(self.window_size[0])
- coords_w = torch.arange(self.window_size[1])
- coords = torch.stack(torch.meshgrid([coords_h, coords_w])) # 2, Wh, Ww
- coords_flatten = torch.flatten(coords, 1) # 2, Wh*Ww
- relative_coords = coords_flatten[:, :, None] - coords_flatten[:, None, :] # 2, Wh*Ww, Wh*Ww
- relative_coords = relative_coords.permute(1, 2, 0).contiguous() # Wh*Ww, Wh*Ww, 2
- relative_coords[:, :, 0] += self.window_size[0] - 1 # shift to start from 0
- relative_coords[:, :, 1] += self.window_size[1] - 1
- relative_coords[:, :, 0] *= 2 * self.window_size[1] - 1
- relative_position_index = relative_coords.sum(-1) # Wh*Ww, Wh*Ww
- self.register_buffer("relative_position_index", relative_position_index)
-
- self.qkv = nn.Linear(dim, dim * 3, bias=qkv_bias)
- self.attn_drop = nn.Dropout(attn_drop)
- self.proj = nn.Linear(dim, dim)
- self.proj_drop = nn.Dropout(proj_drop)
-
- trunc_normal_(self.relative_position_bias_table, std=.02)
- self.softmax = nn.Softmax(dim=-1)
-
- def forward(self, x, mask=None):
- """ Forward function.
-
- Args:
- x: input features with shape of (num_windows*B, N, C)
- mask: (0/-inf) mask with shape of (num_windows, Wh*Ww, Wh*Ww) or None
- """
- B_, N, C = x.shape
- qkv = self.qkv(x).reshape(B_, N, 3, self.num_heads, C // self.num_heads).permute(2, 0, 3, 1, 4)
- q, k, v = qkv[0], qkv[1], qkv[2] # make torchscript happy (cannot use tensor as tuple)
-
- q = q * self.scale
- attn = (q @ k.transpose(-2, -1))
-
- relative_position_bias = self.relative_position_bias_table[self.relative_position_index.view(-1)].view(
- self.window_size[0] * self.window_size[1], self.window_size[0] * self.window_size[1], -1) # Wh*Ww,Wh*Ww,nH
- relative_position_bias = relative_position_bias.permute(2, 0, 1).contiguous() # nH, Wh*Ww, Wh*Ww
- attn = attn + relative_position_bias.unsqueeze(0)
-
- if mask is not None:
- nW = mask.shape[0]
- attn = attn.view(B_ // nW, nW, self.num_heads, N, N) + mask.unsqueeze(1).unsqueeze(0)
- attn = attn.view(-1, self.num_heads, N, N)
- attn = self.softmax(attn)
- else:
- attn = self.softmax(attn)
-
- attn = self.attn_drop(attn)
-
- x = (attn @ v).transpose(1, 2).reshape(B_, N, C)
- x = self.proj(x)
- x = self.proj_drop(x)
- return x
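To see what the `relative_position_index` buffer registered in `__init__` actually contains, the same construction can be run standalone; here is a sketch for a hypothetical 2x2 window:

```python
import torch

window_size = (2, 2)  # a tiny window for illustration
coords_h = torch.arange(window_size[0])
coords_w = torch.arange(window_size[1])
coords = torch.stack(torch.meshgrid([coords_h, coords_w]))       # 2, Wh, Ww
coords_flatten = torch.flatten(coords, 1)                         # 2, Wh*Ww
relative_coords = coords_flatten[:, :, None] - coords_flatten[:, None, :]
relative_coords = relative_coords.permute(1, 2, 0).contiguous()
relative_coords[:, :, 0] += window_size[0] - 1                    # shift to start from 0
relative_coords[:, :, 1] += window_size[1] - 1
relative_coords[:, :, 0] *= 2 * window_size[1] - 1
print(relative_coords.sum(-1))
# tensor([[4, 3, 1, 0],
#         [5, 4, 2, 1],
#         [7, 6, 4, 3],
#         [8, 7, 5, 4]])
# Each entry indexes one of (2*2-1)*(2*2-1) = 9 learned bias rows; equal
# relative offsets (e.g. the diagonal, index 4) share the same bias.
```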
-
-
-class SwinTransformerBlock(nn.Module):
- """ Swin Transformer Block.
-
- Args:
- dim (int): Number of input channels.
- num_heads (int): Number of attention heads.
- window_size (int): Window size.
- shift_size (int): Shift size for SW-MSA.
- mlp_ratio (float): Ratio of mlp hidden dim to embedding dim.
- qkv_bias (bool, optional): If True, add a learnable bias to query, key, value. Default: True
- qk_scale (float | None, optional): Override default qk scale of head_dim ** -0.5 if set.
- drop (float, optional): Dropout rate. Default: 0.0
- attn_drop (float, optional): Attention dropout rate. Default: 0.0
- drop_path (float, optional): Stochastic depth rate. Default: 0.0
- act_layer (nn.Module, optional): Activation layer. Default: nn.GELU
- norm_layer (nn.Module, optional): Normalization layer. Default: nn.LayerNorm
- """
-
- def __init__(self, dim, num_heads, window_size=7, shift_size=0,
- mlp_ratio=4., qkv_bias=True, qk_scale=None, drop=0., attn_drop=0., drop_path=0.,
- act_layer=nn.GELU, norm_layer=nn.LayerNorm):
- super().__init__()
- self.dim = dim
- self.num_heads = num_heads
- self.window_size = window_size
- self.shift_size = shift_size
- self.mlp_ratio = mlp_ratio
-        assert 0 <= self.shift_size < self.window_size, "shift_size must be in [0, window_size)"
-
- self.norm1 = norm_layer(dim)
- self.attn = WindowAttention(
- dim, window_size=to_2tuple(self.window_size), num_heads=num_heads,
- qkv_bias=qkv_bias, qk_scale=qk_scale, attn_drop=attn_drop, proj_drop=drop)
-
- self.drop_path = DropPath(drop_path) if drop_path > 0. else nn.Identity()
- self.norm2 = norm_layer(dim)
- mlp_hidden_dim = int(dim * mlp_ratio)
- self.mlp = Mlp(in_features=dim, hidden_features=mlp_hidden_dim, act_layer=act_layer, drop=drop)
-
- self.H = None
- self.W = None
-
- def forward(self, x, mask_matrix):
- """ Forward function.
-
- Args:
- x: Input feature, tensor size (B, H*W, C).
- H, W: Spatial resolution of the input feature.
- mask_matrix: Attention mask for cyclic shift.
- """
- B, L, C = x.shape
- H, W = self.H, self.W
- assert L == H * W, "input feature has wrong size"
-
- shortcut = x
- x = self.norm1(x)
- x = x.view(B, H, W, C)
-
- # pad feature maps to multiples of window size
- pad_l = pad_t = 0
- pad_r = (self.window_size - W % self.window_size) % self.window_size
- pad_b = (self.window_size - H % self.window_size) % self.window_size
- x = F.pad(x, (0, 0, pad_l, pad_r, pad_t, pad_b))
- _, Hp, Wp, _ = x.shape
-
- # cyclic shift
- if self.shift_size > 0:
- shifted_x = torch.roll(x, shifts=(-self.shift_size, -self.shift_size), dims=(1, 2))
- attn_mask = mask_matrix
- else:
- shifted_x = x
- attn_mask = None
-
- # partition windows
- x_windows = window_partition(shifted_x, self.window_size) # nW*B, window_size, window_size, C
- x_windows = x_windows.view(-1, self.window_size * self.window_size, C) # nW*B, window_size*window_size, C
-
- # W-MSA/SW-MSA
- attn_windows = self.attn(x_windows, mask=attn_mask) # nW*B, window_size*window_size, C
-
- # merge windows
- attn_windows = attn_windows.view(-1, self.window_size, self.window_size, C)
- shifted_x = window_reverse(attn_windows, self.window_size, Hp, Wp) # B H' W' C
-
- # reverse cyclic shift
- if self.shift_size > 0:
- x = torch.roll(shifted_x, shifts=(self.shift_size, self.shift_size), dims=(1, 2))
- else:
- x = shifted_x
-
- if pad_r > 0 or pad_b > 0:
- x = x[:, :H, :W, :].contiguous()
-
- x = x.view(B, H * W, C)
-
- # FFN
- x = shortcut + self.drop_path(x)
- x = x + self.drop_path(self.mlp(self.norm2(x)))
-
- return x
-
-
-class PatchMerging(nn.Module):
- """ Patch Merging Layer
-
- Args:
- dim (int): Number of input channels.
- norm_layer (nn.Module, optional): Normalization layer. Default: nn.LayerNorm
- """
- def __init__(self, dim, norm_layer=nn.LayerNorm):
- super().__init__()
- self.dim = dim
- self.reduction = nn.Linear(4 * dim, 2 * dim, bias=False)
- self.norm = norm_layer(4 * dim)
-
- def forward(self, x, H, W):
- """ Forward function.
-
- Args:
- x: Input feature, tensor size (B, H*W, C).
- H, W: Spatial resolution of the input feature.
- """
- B, L, C = x.shape
- assert L == H * W, "input feature has wrong size"
-
- x = x.view(B, H, W, C)
-
- # padding
- pad_input = (H % 2 == 1) or (W % 2 == 1)
- if pad_input:
- x = F.pad(x, (0, 0, 0, W % 2, 0, H % 2))
-
- x0 = x[:, 0::2, 0::2, :] # B H/2 W/2 C
- x1 = x[:, 1::2, 0::2, :] # B H/2 W/2 C
- x2 = x[:, 0::2, 1::2, :] # B H/2 W/2 C
- x3 = x[:, 1::2, 1::2, :] # B H/2 W/2 C
- x = torch.cat([x0, x1, x2, x3], -1) # B H/2 W/2 4*C
- x = x.view(B, -1, 4 * C) # B H/2*W/2 4*C
-
- x = self.norm(x)
- x = self.reduction(x)
-
- return x
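A shape sketch for this layer (arbitrary sizes, assuming the class above is in scope): every merge halves the spatial resolution and doubles the channel width.

```python
import torch

merge = PatchMerging(dim=96)
x = torch.randn(1, 56 * 56, 96)   # (B, H*W, C)
out = merge(x, H=56, W=56)
print(out.shape)  # torch.Size([1, 784, 192]): 28*28 tokens, channels 96 -> 192
```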
-
-
-class BasicLayer(nn.Module):
- """ A basic Swin Transformer layer for one stage.
-
- Args:
-        dim (int): Number of feature channels.
-        depth (int): Depth (number of blocks) of this stage.
-        num_heads (int): Number of attention heads.
- window_size (int): Local window size. Default: 7.
- mlp_ratio (float): Ratio of mlp hidden dim to embedding dim. Default: 4.
- qkv_bias (bool, optional): If True, add a learnable bias to query, key, value. Default: True
- qk_scale (float | None, optional): Override default qk scale of head_dim ** -0.5 if set.
- drop (float, optional): Dropout rate. Default: 0.0
- attn_drop (float, optional): Attention dropout rate. Default: 0.0
- drop_path (float | tuple[float], optional): Stochastic depth rate. Default: 0.0
- norm_layer (nn.Module, optional): Normalization layer. Default: nn.LayerNorm
- downsample (nn.Module | None, optional): Downsample layer at the end of the layer. Default: None
- use_checkpoint (bool): Whether to use checkpointing to save memory. Default: False.
- """
-
- def __init__(self,
- dim,
- depth,
- num_heads,
- window_size=7,
- mlp_ratio=4.,
- qkv_bias=True,
- qk_scale=None,
- drop=0.,
- attn_drop=0.,
- drop_path=0.,
- norm_layer=nn.LayerNorm,
- downsample=None,
- use_checkpoint=False):
- super().__init__()
- self.window_size = window_size
- self.shift_size = window_size // 2
- self.depth = depth
- self.use_checkpoint = use_checkpoint
-
- # build blocks
- self.blocks = nn.ModuleList([
- SwinTransformerBlock(
- dim=dim,
- num_heads=num_heads,
- window_size=window_size,
- shift_size=0 if (i % 2 == 0) else window_size // 2,
- mlp_ratio=mlp_ratio,
- qkv_bias=qkv_bias,
- qk_scale=qk_scale,
- drop=drop,
- attn_drop=attn_drop,
- drop_path=drop_path[i] if isinstance(drop_path, list) else drop_path,
- norm_layer=norm_layer)
- for i in range(depth)])
-
- # patch merging layer
- if downsample is not None:
- self.downsample = downsample(dim=dim, norm_layer=norm_layer)
- else:
- self.downsample = None
-
- def forward(self, x, H, W):
- """ Forward function.
-
- Args:
- x: Input feature, tensor size (B, H*W, C).
- H, W: Spatial resolution of the input feature.
- """
-
- # calculate attention mask for SW-MSA
- Hp = int(np.ceil(H / self.window_size)) * self.window_size
- Wp = int(np.ceil(W / self.window_size)) * self.window_size
- img_mask = torch.zeros((1, Hp, Wp, 1), device=x.device) # 1 Hp Wp 1
- h_slices = (slice(0, -self.window_size),
- slice(-self.window_size, -self.shift_size),
- slice(-self.shift_size, None))
- w_slices = (slice(0, -self.window_size),
- slice(-self.window_size, -self.shift_size),
- slice(-self.shift_size, None))
- cnt = 0
- for h in h_slices:
- for w in w_slices:
- img_mask[:, h, w, :] = cnt
- cnt += 1
-
- mask_windows = window_partition(img_mask, self.window_size) # nW, window_size, window_size, 1
- mask_windows = mask_windows.view(-1, self.window_size * self.window_size)
- attn_mask = mask_windows.unsqueeze(1) - mask_windows.unsqueeze(2)
- attn_mask = attn_mask.masked_fill(attn_mask != 0, float(-100.0)).masked_fill(attn_mask == 0, float(0.0))
-
- for blk in self.blocks:
- blk.H, blk.W = H, W
- if self.use_checkpoint:
- x = checkpoint.checkpoint(blk, x, attn_mask)
- else:
- x = blk(x, attn_mask)
- if self.downsample is not None:
- x_down = self.downsample(x, H, W)
- Wh, Ww = (H + 1) // 2, (W + 1) // 2
- return x, H, W, x_down, Wh, Ww
- else:
- return x, H, W, x, H, W
-
-
-class PatchEmbed(nn.Module):
- """ Image to Patch Embedding
-
- Args:
- patch_size (int): Patch token size. Default: 4.
- in_chans (int): Number of input image channels. Default: 3.
- embed_dim (int): Number of linear projection output channels. Default: 96.
- norm_layer (nn.Module, optional): Normalization layer. Default: None
- """
-
- def __init__(self, patch_size=4, in_chans=3, embed_dim=96, norm_layer=None):
- super().__init__()
- patch_size = to_2tuple(patch_size)
- self.patch_size = patch_size
-
- self.in_chans = in_chans
- self.embed_dim = embed_dim
-
- self.proj = nn.Conv2d(in_chans, embed_dim, kernel_size=patch_size, stride=patch_size)
- if norm_layer is not None:
- self.norm = norm_layer(embed_dim)
- else:
- self.norm = None
-
- def forward(self, x):
- """Forward function."""
- # padding
- _, _, H, W = x.size()
- if W % self.patch_size[1] != 0:
- x = F.pad(x, (0, self.patch_size[1] - W % self.patch_size[1]))
- if H % self.patch_size[0] != 0:
- x = F.pad(x, (0, 0, 0, self.patch_size[0] - H % self.patch_size[0]))
-
- x = self.proj(x) # B C Wh Ww
- if self.norm is not None:
- Wh, Ww = x.size(2), x.size(3)
- x = x.flatten(2).transpose(1, 2)
- x = self.norm(x)
- x = x.transpose(1, 2).view(-1, self.embed_dim, Wh, Ww)
-
- return x
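As a shape sketch (arbitrary input size), the projection turns each non-overlapping 4x4 pixel patch into one embed_dim-channel feature:

```python
import torch

embed = PatchEmbed(patch_size=4, in_chans=3, embed_dim=96)
img = torch.randn(1, 3, 224, 224)
feat = embed(img)
print(feat.shape)  # torch.Size([1, 96, 56, 56]): one 96-d token per 4x4 patch
```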
-
-
-@BACKBONES.register_module()
-class SwinTransformer(nn.Module):
- """ Swin Transformer backbone.
-    A PyTorch implementation of `Swin Transformer: Hierarchical Vision Transformer using Shifted Windows` -
- https://arxiv.org/pdf/2103.14030
-
- Args:
- pretrain_img_size (int): Input image size for training the pretrained model,
-            used in the absolute position embedding. Default: 224.
- patch_size (int | tuple(int)): Patch size. Default: 4.
- in_chans (int): Number of input image channels. Default: 3.
- embed_dim (int): Number of linear projection output channels. Default: 96.
- depths (tuple[int]): Depths of each Swin Transformer stage.
-        num_heads (tuple[int]): Number of attention heads for each stage.
- window_size (int): Window size. Default: 7.
- mlp_ratio (float): Ratio of mlp hidden dim to embedding dim. Default: 4.
- qkv_bias (bool): If True, add a learnable bias to query, key, value. Default: True
- qk_scale (float): Override default qk scale of head_dim ** -0.5 if set.
- drop_rate (float): Dropout rate.
- attn_drop_rate (float): Attention dropout rate. Default: 0.
- drop_path_rate (float): Stochastic depth rate. Default: 0.2.
- norm_layer (nn.Module): Normalization layer. Default: nn.LayerNorm.
- ape (bool): If True, add absolute position embedding to the patch embedding. Default: False.
- patch_norm (bool): If True, add normalization after patch embedding. Default: True.
- out_indices (Sequence[int]): Output from which stages.
- frozen_stages (int): Stages to be frozen (stop grad and set eval mode).
- -1 means not freezing any parameters.
- use_checkpoint (bool): Whether to use checkpointing to save memory. Default: False.
- """
-
- def __init__(self,
- pretrain_img_size=224,
- patch_size=4,
- in_chans=3,
- embed_dim=96,
- depths=[2, 2, 6, 2],
- num_heads=[3, 6, 12, 24],
- window_size=7,
- mlp_ratio=4.,
- qkv_bias=True,
- qk_scale=None,
- drop_rate=0.,
- attn_drop_rate=0.,
- drop_path_rate=0.2,
- norm_layer=nn.LayerNorm,
- ape=False,
- patch_norm=True,
- out_indices=(0, 1, 2, 3),
- frozen_stages=-1,
- use_checkpoint=False):
- super().__init__()
-
- self.pretrain_img_size = pretrain_img_size
- self.num_layers = len(depths)
- self.embed_dim = embed_dim
- self.ape = ape
- self.patch_norm = patch_norm
- self.out_indices = out_indices
- self.frozen_stages = frozen_stages
-
- # split image into non-overlapping patches
- self.patch_embed = PatchEmbed(
- patch_size=patch_size, in_chans=in_chans, embed_dim=embed_dim,
- norm_layer=norm_layer if self.patch_norm else None)
-
- # absolute position embedding
- if self.ape:
- pretrain_img_size = to_2tuple(pretrain_img_size)
- patch_size = to_2tuple(patch_size)
- patches_resolution = [pretrain_img_size[0] // patch_size[0], pretrain_img_size[1] // patch_size[1]]
-
- self.absolute_pos_embed = nn.Parameter(torch.zeros(1, embed_dim, patches_resolution[0], patches_resolution[1]))
- trunc_normal_(self.absolute_pos_embed, std=.02)
-
- self.pos_drop = nn.Dropout(p=drop_rate)
-
- # stochastic depth
- dpr = [x.item() for x in torch.linspace(0, drop_path_rate, sum(depths))] # stochastic depth decay rule
-
- # build layers
- self.layers = nn.ModuleList()
- for i_layer in range(self.num_layers):
- layer = BasicLayer(
- dim=int(embed_dim * 2 ** i_layer),
- depth=depths[i_layer],
- num_heads=num_heads[i_layer],
- window_size=window_size,
- mlp_ratio=mlp_ratio,
- qkv_bias=qkv_bias,
- qk_scale=qk_scale,
- drop=drop_rate,
- attn_drop=attn_drop_rate,
- drop_path=dpr[sum(depths[:i_layer]):sum(depths[:i_layer + 1])],
- norm_layer=norm_layer,
- downsample=PatchMerging if (i_layer < self.num_layers - 1) else None,
- use_checkpoint=use_checkpoint)
- self.layers.append(layer)
-
- num_features = [int(embed_dim * 2 ** i) for i in range(self.num_layers)]
- self.num_features = num_features
-
- # add a norm layer for each output
- for i_layer in out_indices:
- layer = norm_layer(num_features[i_layer])
- layer_name = f'norm{i_layer}'
- self.add_module(layer_name, layer)
-
- self._freeze_stages()
-
- def _freeze_stages(self):
- if self.frozen_stages >= 0:
- self.patch_embed.eval()
- for param in self.patch_embed.parameters():
- param.requires_grad = False
-
- if self.frozen_stages >= 1 and self.ape:
- self.absolute_pos_embed.requires_grad = False
-
- if self.frozen_stages >= 2:
- self.pos_drop.eval()
- for i in range(0, self.frozen_stages - 1):
- m = self.layers[i]
- m.eval()
- for param in m.parameters():
- param.requires_grad = False
-
- def init_weights(self, pretrained=None):
- """Initialize the weights in backbone.
-
- Args:
- pretrained (str, optional): Path to pre-trained weights.
- Defaults to None.
- """
-
- def _init_weights(m):
- if isinstance(m, nn.Linear):
- trunc_normal_(m.weight, std=.02)
- if isinstance(m, nn.Linear) and m.bias is not None:
- nn.init.constant_(m.bias, 0)
- elif isinstance(m, nn.LayerNorm):
- nn.init.constant_(m.bias, 0)
- nn.init.constant_(m.weight, 1.0)
-
- if isinstance(pretrained, str):
- self.apply(_init_weights)
- logger = get_root_logger()
- load_checkpoint(self, pretrained, strict=False, logger=logger)
- elif pretrained is None:
- self.apply(_init_weights)
- else:
- raise TypeError('pretrained must be a str or None')
-
- def forward(self, x):
- """Forward function."""
- x = self.patch_embed(x)
-
- Wh, Ww = x.size(2), x.size(3)
- if self.ape:
- # interpolate the position embedding to the corresponding size
- absolute_pos_embed = F.interpolate(self.absolute_pos_embed, size=(Wh, Ww), mode='bicubic')
- x = (x + absolute_pos_embed).flatten(2).transpose(1, 2) # B Wh*Ww C
- else:
- x = x.flatten(2).transpose(1, 2)
- x = self.pos_drop(x)
-
- outs = []
- for i in range(self.num_layers):
- layer = self.layers[i]
- x_out, H, W, x, Wh, Ww = layer(x, Wh, Ww)
-
- if i in self.out_indices:
- norm_layer = getattr(self, f'norm{i}')
- x_out = norm_layer(x_out)
-
- out = x_out.view(-1, H, W, self.num_features[i]).permute(0, 3, 1, 2).contiguous()
- outs.append(out)
-
- return tuple(outs)
-
- def train(self, mode=True):
- """Convert the model into training mode while keep layers freezed."""
- super(SwinTransformer, self).train(mode)
- self._freeze_stages()
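A minimal end-to-end usage sketch, assuming this module is importable and run on a dummy 224x224 batch; with the Swin-T style defaults above, the backbone emits a four-level feature pyramid:

```python
import torch

backbone = SwinTransformer(embed_dim=96, depths=[2, 2, 6, 2], num_heads=[3, 6, 12, 24])
backbone.init_weights()  # random init here; pass a checkpoint path to load weights
with torch.no_grad():
    feats = backbone(torch.randn(1, 3, 224, 224))
for f in feats:
    print(f.shape)
# torch.Size([1, 96, 56, 56])
# torch.Size([1, 192, 28, 28])
# torch.Size([1, 384, 14, 14])
# torch.Size([1, 768, 7, 7])
```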
diff --git a/spaces/Robert001/UniControl-Demo/annotator/uniformer/mmdet_null/models/dense_heads/embedding_rpn_head.py b/spaces/Robert001/UniControl-Demo/annotator/uniformer/mmdet_null/models/dense_heads/embedding_rpn_head.py
deleted file mode 100644
index 200ce8d20c5503f98c5c21f30bb9d00437e25f34..0000000000000000000000000000000000000000
--- a/spaces/Robert001/UniControl-Demo/annotator/uniformer/mmdet_null/models/dense_heads/embedding_rpn_head.py
+++ /dev/null
@@ -1,100 +0,0 @@
-import torch
-import torch.nn as nn
-
-from mmdet.models.builder import HEADS
-from ...core import bbox_cxcywh_to_xyxy
-
-
-@HEADS.register_module()
-class EmbeddingRPNHead(nn.Module):
- """RPNHead in the `Sparse R-CNN `_ .
-
- Unlike traditional RPNHead, this module does not need FPN input, but just
- decode `init_proposal_bboxes` and expand the first dimension of
- `init_proposal_bboxes` and `init_proposal_features` to the batch_size.
-
- Args:
- num_proposals (int): Number of init_proposals. Default 100.
- proposal_feature_channel (int): Channel number of
- init_proposal_feature. Defaults to 256.
- """
-
- def __init__(self,
- num_proposals=100,
- proposal_feature_channel=256,
- **kwargs):
- super(EmbeddingRPNHead, self).__init__()
- self.num_proposals = num_proposals
- self.proposal_feature_channel = proposal_feature_channel
- self._init_layers()
-
- def _init_layers(self):
- """Initialize a sparse set of proposal boxes and proposal features."""
- self.init_proposal_bboxes = nn.Embedding(self.num_proposals, 4)
- self.init_proposal_features = nn.Embedding(
- self.num_proposals, self.proposal_feature_channel)
-
- def init_weights(self):
- """Initialize the init_proposal_bboxes as normalized.
-
- [c_x, c_y, w, h], and we initialize it to the size of the entire
- image.
- """
- nn.init.constant_(self.init_proposal_bboxes.weight[:, :2], 0.5)
- nn.init.constant_(self.init_proposal_bboxes.weight[:, 2:], 1)
-
- def _decode_init_proposals(self, imgs, img_metas):
- """Decode init_proposal_bboxes according to the size of images and
- expand dimension of init_proposal_features to batch_size.
-
- Args:
- imgs (list[Tensor]): List of FPN features.
- img_metas (list[dict]): List of meta-information of
- images. Need the img_shape to decode the init_proposals.
-
- Returns:
- Tuple(Tensor):
-
- - proposals (Tensor): Decoded proposal bboxes,
- has shape (batch_size, num_proposals, 4).
- - init_proposal_features (Tensor): Expanded proposal
- features, has shape
- (batch_size, num_proposals, proposal_feature_channel).
-            - imgs_whwh (Tensor): Tensor with shape
-              (batch_size, 1, 4), where the last dimension means
-              [img_width, img_height, img_width, img_height].
- """
- proposals = self.init_proposal_bboxes.weight.clone()
- proposals = bbox_cxcywh_to_xyxy(proposals)
- num_imgs = len(imgs[0])
- imgs_whwh = []
- for meta in img_metas:
- h, w, _ = meta['img_shape']
- imgs_whwh.append(imgs[0].new_tensor([[w, h, w, h]]))
- imgs_whwh = torch.cat(imgs_whwh, dim=0)
- imgs_whwh = imgs_whwh[:, None, :]
-
- # imgs_whwh has shape (batch_size, 1, 4)
- # The shape of proposals change from (num_proposals, 4)
- # to (batch_size ,num_proposals, 4)
- proposals = proposals * imgs_whwh
-
- init_proposal_features = self.init_proposal_features.weight.clone()
- init_proposal_features = init_proposal_features[None].expand(
- num_imgs, *init_proposal_features.size())
- return proposals, init_proposal_features, imgs_whwh
-
- def forward_dummy(self, img, img_metas):
- """Dummy forward function.
-
- Used in flops calculation.
- """
- return self._decode_init_proposals(img, img_metas)
-
- def forward_train(self, img, img_metas):
- """Forward function in training stage."""
- return self._decode_init_proposals(img, img_metas)
-
- def simple_test_rpn(self, img, img_metas):
- """Forward function in testing stage."""
- return self._decode_init_proposals(img, img_metas)
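A rough sketch of how the decoding behaves with dummy inputs (shapes are illustrative, and it assumes the class above and its imports are available):

```python
import torch

head = EmbeddingRPNHead(num_proposals=100, proposal_feature_channel=256)
head.init_weights()  # proposals start as whole-image boxes
imgs = [torch.randn(2, 256, 200, 304)]              # one FPN level, batch size 2
img_metas = [{'img_shape': (800, 1216, 3)}] * 2
proposals, feats, whwh = head._decode_init_proposals(imgs, img_metas)
print(proposals.shape, feats.shape, whwh.shape)
# torch.Size([2, 100, 4]) torch.Size([2, 100, 256]) torch.Size([2, 1, 4])
```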
diff --git a/spaces/Robert001/UniControl-Demo/annotator/uniformer/mmdet_null/models/roi_heads/__init__.py b/spaces/Robert001/UniControl-Demo/annotator/uniformer/mmdet_null/models/roi_heads/__init__.py
deleted file mode 100644
index ca0a38ec42cd41fbd97e07589a13d1af46f47f2f..0000000000000000000000000000000000000000
--- a/spaces/Robert001/UniControl-Demo/annotator/uniformer/mmdet_null/models/roi_heads/__init__.py
+++ /dev/null
@@ -1,34 +0,0 @@
-from .base_roi_head import BaseRoIHead
-from .bbox_heads import (BBoxHead, ConvFCBBoxHead, DoubleConvFCBBoxHead,
- SCNetBBoxHead, Shared2FCBBoxHead,
- Shared4Conv1FCBBoxHead)
-from .cascade_roi_head import CascadeRoIHead
-from .double_roi_head import DoubleHeadRoIHead
-from .dynamic_roi_head import DynamicRoIHead
-from .grid_roi_head import GridRoIHead
-from .htc_roi_head import HybridTaskCascadeRoIHead
-from .mask_heads import (CoarseMaskHead, FCNMaskHead, FeatureRelayHead,
- FusedSemanticHead, GlobalContextHead, GridHead,
- HTCMaskHead, MaskIoUHead, MaskPointHead,
- SCNetMaskHead, SCNetSemanticHead)
-from .mask_scoring_roi_head import MaskScoringRoIHead
-from .pisa_roi_head import PISARoIHead
-from .point_rend_roi_head import PointRendRoIHead
-from .roi_extractors import SingleRoIExtractor
-from .scnet_roi_head import SCNetRoIHead
-from .shared_heads import ResLayer
-from .sparse_roi_head import SparseRoIHead
-from .standard_roi_head import StandardRoIHead
-from .trident_roi_head import TridentRoIHead
-
-__all__ = [
- 'BaseRoIHead', 'CascadeRoIHead', 'DoubleHeadRoIHead', 'MaskScoringRoIHead',
- 'HybridTaskCascadeRoIHead', 'GridRoIHead', 'ResLayer', 'BBoxHead',
- 'ConvFCBBoxHead', 'Shared2FCBBoxHead', 'StandardRoIHead',
- 'Shared4Conv1FCBBoxHead', 'DoubleConvFCBBoxHead', 'FCNMaskHead',
- 'HTCMaskHead', 'FusedSemanticHead', 'GridHead', 'MaskIoUHead',
- 'SingleRoIExtractor', 'PISARoIHead', 'PointRendRoIHead', 'MaskPointHead',
- 'CoarseMaskHead', 'DynamicRoIHead', 'SparseRoIHead', 'TridentRoIHead',
- 'SCNetRoIHead', 'SCNetMaskHead', 'SCNetSemanticHead', 'SCNetBBoxHead',
- 'FeatureRelayHead', 'GlobalContextHead'
-]
diff --git a/spaces/Sandiago21/text-to-speech-german/app.py b/spaces/Sandiago21/text-to-speech-german/app.py
deleted file mode 100644
index d59abbad6c1f31d9f026f8151e32987e913ec18f..0000000000000000000000000000000000000000
--- a/spaces/Sandiago21/text-to-speech-german/app.py
+++ /dev/null
@@ -1,107 +0,0 @@
-import gradio as gr
-import torch
-from datasets import load_dataset
-from transformers import pipeline, SpeechT5Processor, SpeechT5HifiGan, SpeechT5ForTextToSpeech
-
-model_id = "Sandiago21/speecht5_finetuned_mozilla_foundation_common_voice_13_german" # update with your model id
-model = SpeechT5ForTextToSpeech.from_pretrained(model_id)
-vocoder = SpeechT5HifiGan.from_pretrained("microsoft/speecht5_hifigan")
-embeddings_dataset = load_dataset("Matthijs/cmu-arctic-xvectors", split="validation")
-speaker_embeddings = torch.tensor(embeddings_dataset[7440]["xvector"]).unsqueeze(0)
-
-processor = SpeechT5Processor.from_pretrained(model_id)
-
-replacements = [
- ("Ä", "E"),
- ("Æ", "E"),
- ("Ç", "C"),
- ("É", "E"),
- ("Í", "I"),
- ("Ó", "O"),
- ("Ö", "E"),
- ("Ü", "Y"),
- ("ß", "S"),
- ("à", "a"),
- ("á", "a"),
- ("ã", "a"),
- ("ä", "e"),
- ("å", "a"),
- ("ë", "e"),
- ("í", "i"),
- ("ï", "i"),
- ("ð", "o"),
- ("ñ", "n"),
- ("ò", "o"),
- ("ó", "o"),
- ("ô", "o"),
- ("ö", "u"),
- ("ú", "u"),
- ("ü", "y"),
- ("ý", "y"),
- ("Ā", "A"),
- ("ā", "a"),
- ("ă", "a"),
- ("ą", "a"),
- ("ć", "c"),
- ("Č", "C"),
- ("č", "c"),
- ("ď", "d"),
- ("Đ", "D"),
- ("ę", "e"),
- ("ě", "e"),
- ("ğ", "g"),
- ("İ", "I"),
- ("О", "O"),
- ("Ł", "L"),
- ("ń", "n"),
- ("ň", "n"),
- ("Ō", "O"),
- ("ō", "o"),
- ("ő", "o"),
- ("ř", "r"),
- ("Ś", "S"),
- ("ś", "s"),
- ("Ş", "S"),
- ("ş", "s"),
- ("Š", "S"),
- ("š", "s"),
- ("ū", "u"),
- ("ź", "z"),
- ("Ż", "Z"),
- ("Ž", "Z"),
- ("ǐ", "i"),
- ("ǐ", "i"),
- ("ș", "s"),
- ("ț", "t"),
-]
-
-
-title = "Text-to-Speech"
-description = """
-Demo for text-to-speech in German. The demo uses the [Sandiago21/speecht5_finetuned_mozilla_foundation_common_voice_13_german](https://huggingface.co/Sandiago21/speecht5_finetuned_mozilla_foundation_common_voice_13_german) checkpoint, which is based on Microsoft's
-[SpeechT5 TTS](https://huggingface.co/microsoft/speecht5_tts) model and is fine-tuned on the German split of Mozilla Foundation's Common Voice 13 dataset.
-"""
-
-
-def cleanup_text(text):
- for src, dst in replacements:
- text = text.replace(src, dst)
- return text
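For instance, the table above maps characters outside the model's training alphabet to rough ASCII stand-ins (the input string here is hypothetical):

```python
print(cleanup_text("Größe"))  # -> "GruSe": 'ö' maps to 'u' and 'ß' to 'S' per the table
```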
-
-def synthesize_speech(text):
- text = cleanup_text(text)
- inputs = processor(text=text, return_tensors="pt")
-
- speech = model.generate_speech(inputs["input_ids"], speaker_embeddings, vocoder=vocoder)
-
- return gr.Audio.update(value=(16000, speech.cpu().numpy()))
-
-synthesize_speech_gradio = gr.Interface(
- synthesize_speech,
- inputs = gr.Textbox(label="Text", placeholder="Type something here..."),
- outputs=gr.Audio(),
- examples=["Daher wird die Reform der Europäischen Sozialfondsverordnung, die wir morgen beschließen, auch umgehend in Kraft treten."],
- title=title,
- description=description,
-).launch()
diff --git a/spaces/Sapiensia/diffuse-the-rest/svelte.config.js b/spaces/Sapiensia/diffuse-the-rest/svelte.config.js
deleted file mode 100644
index 39e5f7c03b9e9e26cf8c88ff11a15a3bb45b1534..0000000000000000000000000000000000000000
--- a/spaces/Sapiensia/diffuse-the-rest/svelte.config.js
+++ /dev/null
@@ -1,22 +0,0 @@
-import { mdsvex } from 'mdsvex';
-import mdsvexConfig from './mdsvex.config.js';
-import adapter from '@sveltejs/adapter-static';
-import preprocess from 'svelte-preprocess';
-
-/** @type {import('@sveltejs/kit').Config} */
-const config = {
- extensions: ['.svelte', ...mdsvexConfig.extensions],
-
- // Consult https://github.com/sveltejs/svelte-preprocess
- // for more information about preprocessors
- preprocess: [preprocess(), mdsvex(mdsvexConfig)],
-
- kit: {
- adapter: adapter(),
- prerender: {
- default: true
- }
- }
-};
-
-export default config;
diff --git a/spaces/Sarath2002/Form_Understanding_using_LayoutLMV3/support.py b/spaces/Sarath2002/Form_Understanding_using_LayoutLMV3/support.py
deleted file mode 100644
index d3d1dad6224f605ed96190a4f8b498d772b7eb35..0000000000000000000000000000000000000000
--- a/spaces/Sarath2002/Form_Understanding_using_LayoutLMV3/support.py
+++ /dev/null
@@ -1,87 +0,0 @@
-from datasets import load_dataset
-import numpy as np
-from transformers import LayoutLMv3Processor, LayoutLMv3ForTokenClassification
-from datasets import load_dataset
-from PIL import Image, ImageDraw, ImageFont
-import torch
-
-
-
-tokenizer = LayoutLMv3Processor.from_pretrained("microsoft/layoutlmv3-base")
-model = LayoutLMv3ForTokenClassification.from_pretrained(r"models")
-"""device = torch.device("cuda")
-model.cuda()
-"""
-labels = ['O', 'B-HEADER', 'I-HEADER', 'B-QUESTION', 'I-QUESTION', 'B-ANSWER', 'I-ANSWER']
-id2label = {idx: label for idx, label in enumerate(labels)}  # class index -> label name
-label2color = {
- "question": "blue",
- "answer": "green",
- "header": "orange",
- "other": "violet",
-}
-
-
-def unnormalize_box(bbox, width, height):
- return [
- width * (bbox[0] / 1000),
- height * (bbox[1] / 1000),
- width * (bbox[2] / 1000),
- height * (bbox[3] / 1000),
- ]
-
-
-def iob_to_label(label):
- label = label[2:]
- if not label:
- return "other"
- return label
-
-
-def processor(image):
- image = image.convert("RGB")
- width, height = image.size
-
-
- # encode
- encoding = tokenizer(
- image, truncation=True, return_offsets_mapping=True, return_tensors="pt"
- )
- offset_mapping = encoding.pop("offset_mapping")
-
-    encoding = encoding.to(model.device)  # keep inputs on the same device as the model
-
- # forward pass
- outputs = model(**encoding)
-
- # get predictions
- predictions = outputs.logits.argmax(-1).squeeze().tolist()
- token_boxes = encoding.bbox.squeeze().tolist()
-
-
- # only keep non-subword predictions
- is_subword = np.array(offset_mapping.squeeze().tolist())[:, 0] != 0
- true_predictions = [
- id2label[pred] for idx, pred in enumerate(predictions) if not is_subword[idx]
- ]
- true_boxes = [
- unnormalize_box(box, width, height)
- for idx, box in enumerate(token_boxes)
- if not is_subword[idx]
- ]
-
-
-
- draw = ImageDraw.Draw(image)
- font = ImageFont.load_default()
- for prediction, box in zip(true_predictions, true_boxes):
- predicted_label = iob_to_label(prediction).lower()
- draw.rectangle(box, outline=label2color[predicted_label])
- draw.text(
- (box[0] + 10, box[1] - 10),
- text=predicted_label,
- fill=label2color[predicted_label],
- font=font,
- )
-
- return image
\ No newline at end of file
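A hedged usage sketch for the function above; "form.png" is a hypothetical local file:

```python
from PIL import Image

annotated = processor(Image.open("form.png"))  # draws label boxes onto the page
annotated.save("form_annotated.png")
```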
diff --git a/spaces/Sphila/Sphila-Diffusion/README.md b/spaces/Sphila/Sphila-Diffusion/README.md
deleted file mode 100644
index 01f316015f941676c1a03e25496e55f327c43103..0000000000000000000000000000000000000000
--- a/spaces/Sphila/Sphila-Diffusion/README.md
+++ /dev/null
@@ -1,11 +0,0 @@
----
-title: Sphila-Diffusion
-emoji: 🔥
-colorFrom: purple
-colorTo: gray
-sdk: gradio
-sdk_version: 3.13.2
-app_file: app.py
-pinned: false
-license: openrail
----
diff --git a/spaces/SriniJalasuthram/SJ-06-SL-AI-Image-Music-Video-UI-UX-URL/Article.md b/spaces/SriniJalasuthram/SJ-06-SL-AI-Image-Music-Video-UI-UX-URL/Article.md
deleted file mode 100644
index c7f042e4c9c0f401731f009842a325e2d1386bf5..0000000000000000000000000000000000000000
--- a/spaces/SriniJalasuthram/SJ-06-SL-AI-Image-Music-Video-UI-UX-URL/Article.md
+++ /dev/null
@@ -1,51 +0,0 @@
-
-# Image Generation for Art, Marketing, Ideation, Design, and Use in Business
-
-A number of AI pipeline strategies have evolved on the open market that allow you to generate images using a combination of image prompts and word prompts. This brief analysis gives an idea of the prompting capabilities, as well as the image rendering techniques, that these strategies use to generate art from human understanding of images and of the text used to describe a scene.
-
-First, a top-five list of state-of-the-art generators, both free and paid, is worth consideration.
-
-1) Midjourney - a Discord-server-based chatbot AI that accepts /imagine prompts and can generate multiple images at a time. It is best at parallel creation and high accuracy, even photoreal creations.
-2) Artbreeder - A multi-capability tool which now features a Collager that assists in starting an image composition. By far the most innovative approach, and great at combining the right partial elements in a scene.
-3) Dreamstudio - A Huggingface-derived art program, in beta, which uses Stable Diffusion to create highly accurate art and images.
-4) Nightcafe - A credit-based AI creation app that can generate video dives into an AI art piece, which can produce some of the best experiences in video.
-5) RunwayML - a quintessential tool for processing and morphing audio and video tracks which rivals most high-end video editing tools.
-
-These five tools make up some of the best cloud-based AI pipeline programs that allow anyone to begin easily building their portfolio of art.
-
-The prompting capabilities often involve having a set of text-based prompts to get started. Most tools also accept a starter image, which could be an example of what you would like to create.
-
-URL Links:
-1) Collager: https://www.artbreeder.com/beta/collage
-2) NightCafe: https://creator.nightcafe.studio/explore
-3) Midjourney: https://www.midjourney.com/app/users/779773261440614430/
-4) Dreamstudio: https://beta.dreamstudio.ai/dream
-5) RunwayML: https://app.runwayml.com/
-
-## Getting Started and Organizing Your AI Pipeline and Process
-
-Any great strategy has a number of steps that combine all the capabilities at your disposal. It is useful to note how you can easily fit these together into a process that works for you.
-
-The techniques worth noting are listed below. Considering how you will use them will make your pipeline easier and more automated, allowing you to spend the majority of your time curating what you have made and ideating what you want to create next.
-
-1) Source materials: Since prompting requires text, and text examples can quickly help you compose good input, it is worth considering and documenting some effective prompts. Nightcafe, with its integration into email, sends you a copy of your creation plus the prompting text, so one option is to use your email account to keep a record of which prompts work for which outputs.
-2) Source materials: Discord, since it is a public chat format, allows you to easily see in bulk what others are using for prompts. There are a number of chat channels designed for people new to the platform, and you can often copy and paste when you see very effective prompts for the material you are looking for.
-3) Source materials: Collager is unique in its ability to add additive parts and then dial in the percentage of AI you would like with them. This allows you to add a few image elements which help start out your generation.
-4) Source materials: Since images and prompts are going to be your mainstay inputs, it is worth considering an open standard for storing and retrieving them from anywhere. Github is a good place, since markdown can present text in table or list format and can reference uploaded images. This is also a good form for portability, since you can later fork and download your repository with a few clicks from anywhere.
-5) Source materials: Google Drive is integrated into the Artbreeder Collager workflow, which allows you to easily expand your work and even compose albums of the ones you like to place in Google Photos. The portfolios you save on different sites offer different degrees of ease when aggregating your collections. Collager, for instance, allows right-click save for instant saving of your creation. Dreamstudio features a history. Midjourney features a profile site where you can store and review creations, even triggering Upscales, which are important for getting the highest-resolution output for your creations.
-
-## Social Media integration
-
-Depending on your targets for "safe for work" exports of your work, it is sometimes important to know which accepted social media outlets you can integrate. Cloud-based interactions are the key to a successful audience if you want to scale and share your process with others.
-
-The key social media outlets supported by these tools are listed below, sorted with public open source first:
-
-1) Github - Github is open at most companies and allows creation of a free space to share your content.
-2) LinkedIn - LinkedIn is acceptable to use at nearly every company.
-3) Twitter - Twitter is supported as a social media outlet at most companies, yet it can also be used with security restrictions which might limit posting but allow read access.
-4) Facebook - Meta's Facebook is a good outlet since it allows creation of large folios of your images along with stories. This venue, however, is locked down at many organizations.
-5) Instagram - Instagram is supported as an output channel by many tools yet has decreased in popularity due to the high frequency of ads and pay-for-likes models. While it can still be one of the best places for domain-specific arrangements of images, it is likely locked down in most secure organizations.
-6) Youtube - For video uploads with automated captioning and long-term storage of short- and long-form video, this is essential for any creation you compose as video. It is also useful to review and compose playlists of videos here to speed up your learning - spend some time at Youtube university and keep a record of keyword searches there, along with your playlists, to accelerate learning.
-7) Gmail - With the ability to move email in and out, it is useful to create and wrap up details within email. Most email policies come with a content limitation (for example, no files larger than 25MB). For this reason, get used to creating project wrap-up archives with winzip or other compression software. With the convenience of keyword searching, you can usually use this as a base.
-8) Last, worth a mention is Huggingface.com. Like Github, as you become more sophisticated in your public open source capabilities, HuggingFace lets you wrap up your work using one of three software development kits - Gradio, Streamlit, and HTML5 - each with unique AI and UI integration components and features. If you want to create your own AI pipelines, this one also has the open source code and models ready to go to help you on your journey.
-
diff --git a/spaces/SungBeom/chatwine-korean/.venv/Lib/site-packages/IPython/core/tests/test_extension.py b/spaces/SungBeom/chatwine-korean/.venv/Lib/site-packages/IPython/core/tests/test_extension.py
deleted file mode 100644
index 24ecf7e97e3e56ea51327cc4704ff1fa749c15aa..0000000000000000000000000000000000000000
--- a/spaces/SungBeom/chatwine-korean/.venv/Lib/site-packages/IPython/core/tests/test_extension.py
+++ /dev/null
@@ -1,95 +0,0 @@
-import os.path
-
-from tempfile import TemporaryDirectory
-
-import IPython.testing.tools as tt
-from IPython.utils.syspathcontext import prepended_to_syspath
-
-ext1_content = """
-def load_ipython_extension(ip):
- print("Running ext1 load")
-
-def unload_ipython_extension(ip):
- print("Running ext1 unload")
-"""
-
-ext2_content = """
-def load_ipython_extension(ip):
- print("Running ext2 load")
-"""
-
-ext3_content = """
-def load_ipython_extension(ip):
- ip2 = get_ipython()
- print(ip is ip2)
-"""
-
-def test_extension_loading():
- em = get_ipython().extension_manager
- with TemporaryDirectory() as td:
- ext1 = os.path.join(td, "ext1.py")
- with open(ext1, "w", encoding="utf-8") as f:
- f.write(ext1_content)
-
- ext2 = os.path.join(td, "ext2.py")
- with open(ext2, "w", encoding="utf-8") as f:
- f.write(ext2_content)
-
- with prepended_to_syspath(td):
- assert 'ext1' not in em.loaded
- assert 'ext2' not in em.loaded
-
- # Load extension
- with tt.AssertPrints("Running ext1 load"):
- assert em.load_extension('ext1') is None
- assert 'ext1' in em.loaded
-
- # Should refuse to load it again
- with tt.AssertNotPrints("Running ext1 load"):
- assert em.load_extension('ext1') == 'already loaded'
-
- # Reload
- with tt.AssertPrints("Running ext1 unload"):
- with tt.AssertPrints("Running ext1 load", suppress=False):
- em.reload_extension('ext1')
-
- # Unload
- with tt.AssertPrints("Running ext1 unload"):
- assert em.unload_extension('ext1') is None
-
- # Can't unload again
- with tt.AssertNotPrints("Running ext1 unload"):
- assert em.unload_extension('ext1') == 'not loaded'
- assert em.unload_extension('ext2') == 'not loaded'
-
- # Load extension 2
- with tt.AssertPrints("Running ext2 load"):
- assert em.load_extension('ext2') is None
-
- # Can't unload this
- assert em.unload_extension('ext2') == 'no unload function'
-
- # But can reload it
- with tt.AssertPrints("Running ext2 load"):
- em.reload_extension('ext2')
-
-
-def test_extension_builtins():
- em = get_ipython().extension_manager
- with TemporaryDirectory() as td:
- ext3 = os.path.join(td, "ext3.py")
- with open(ext3, "w", encoding="utf-8") as f:
- f.write(ext3_content)
-
- assert 'ext3' not in em.loaded
-
- with prepended_to_syspath(td):
- # Load extension
- with tt.AssertPrints("True"):
- assert em.load_extension('ext3') is None
- assert 'ext3' in em.loaded
-
-
-def test_non_extension():
- em = get_ipython().extension_manager
- assert em.load_extension("sys") == "no load function"
diff --git a/spaces/SungBeom/chatwine-korean/.venv/Lib/site-packages/IPython/lib/deepreload.py b/spaces/SungBeom/chatwine-korean/.venv/Lib/site-packages/IPython/lib/deepreload.py
deleted file mode 100644
index aaedab24255eed6b0213970be6e786d38e1cf900..0000000000000000000000000000000000000000
--- a/spaces/SungBeom/chatwine-korean/.venv/Lib/site-packages/IPython/lib/deepreload.py
+++ /dev/null
@@ -1,310 +0,0 @@
-# -*- coding: utf-8 -*-
-"""
-Provides a reload() function that acts recursively.
-
-Python's normal :func:`python:reload` function only reloads the module that it's
-passed. The :func:`reload` function in this module also reloads everything
-imported from that module, which is useful when you're changing files deep
-inside a package.
-
-To use this as your default reload function, type this::
-
- import builtins
- from IPython.lib import deepreload
- builtins.reload = deepreload.reload
-
-A reference to the original :func:`python:reload` is stored in this module as
-:data:`original_reload`, so you can restore it later.
-
-This code is almost entirely based on knee.py, which is a Python
-re-implementation of hierarchical module import.
-"""
-#*****************************************************************************
-# Copyright (C) 2001 Nathaniel Gray
-#
-# Distributed under the terms of the BSD License. The full license is in
-# the file COPYING, distributed as part of this software.
-#*****************************************************************************
-
-import builtins as builtin_mod
-from contextlib import contextmanager
-import importlib
-import sys
-
-from types import ModuleType
-from warnings import warn
-import types
-
-original_import = builtin_mod.__import__
-
-@contextmanager
-def replace_import_hook(new_import):
- saved_import = builtin_mod.__import__
- builtin_mod.__import__ = new_import
- try:
- yield
- finally:
- builtin_mod.__import__ = saved_import
-
-def get_parent(globals, level):
- """
- parent, name = get_parent(globals, level)
-
- Return the package that an import is being performed in. If globals comes
- from the module foo.bar.bat (not itself a package), this returns the
- sys.modules entry for foo.bar. If globals is from a package's __init__.py,
- the package's entry in sys.modules is returned.
-
- If globals doesn't come from a package or a module in a package, or a
- corresponding entry is not found in sys.modules, None is returned.
- """
- orig_level = level
-
- if not level or not isinstance(globals, dict):
- return None, ''
-
- pkgname = globals.get('__package__', None)
-
- if pkgname is not None:
- # __package__ is set, so use it
- if not hasattr(pkgname, 'rindex'):
- raise ValueError('__package__ set to non-string')
- if len(pkgname) == 0:
- if level > 0:
- raise ValueError('Attempted relative import in non-package')
- return None, ''
- name = pkgname
- else:
- # __package__ not set, so figure it out and set it
- if '__name__' not in globals:
- return None, ''
- modname = globals['__name__']
-
- if '__path__' in globals:
- # __path__ is set, so modname is already the package name
- globals['__package__'] = name = modname
- else:
- # Normal module, so work out the package name if any
- lastdot = modname.rfind('.')
- if lastdot < 0 < level:
- raise ValueError("Attempted relative import in non-package")
- if lastdot < 0:
- globals['__package__'] = None
- return None, ''
- globals['__package__'] = name = modname[:lastdot]
-
- dot = len(name)
- for x in range(level, 1, -1):
- try:
- dot = name.rindex('.', 0, dot)
- except ValueError as e:
- raise ValueError("attempted relative import beyond top-level "
- "package") from e
- name = name[:dot]
-
- try:
- parent = sys.modules[name]
- except BaseException as e:
- if orig_level < 1:
- warn("Parent module '%.200s' not found while handling absolute "
- "import" % name)
- parent = None
- else:
- raise SystemError("Parent module '%.200s' not loaded, cannot "
- "perform relative import" % name) from e
-
- # We expect, but can't guarantee, if parent != None, that:
- # - parent.__name__ == name
- # - parent.__dict__ is globals
- # If this is violated... Who cares?
- return parent, name
-
-def load_next(mod, altmod, name, buf):
- """
- mod, name, buf = load_next(mod, altmod, name, buf)
-
- altmod is either None or same as mod
- """
-
- if len(name) == 0:
- # completely empty module name should only happen in
- # 'from . import' (or '__import__("")')
- return mod, None, buf
-
- dot = name.find('.')
- if dot == 0:
- raise ValueError('Empty module name')
-
- if dot < 0:
- subname = name
- next = None
- else:
- subname = name[:dot]
- next = name[dot+1:]
-
- if buf != '':
- buf += '.'
- buf += subname
-
- result = import_submodule(mod, subname, buf)
- if result is None and mod != altmod:
- result = import_submodule(altmod, subname, subname)
- if result is not None:
- buf = subname
-
- if result is None:
- raise ImportError("No module named %.200s" % name)
-
- return result, next, buf
-
-
-# Need to keep track of what we've already reloaded to prevent cyclic evil
-found_now = {}
-
-def import_submodule(mod, subname, fullname):
- """m = import_submodule(mod, subname, fullname)"""
- # Require:
- # if mod == None: subname == fullname
- # else: mod.__name__ + "." + subname == fullname
-
- global found_now
- if fullname in found_now and fullname in sys.modules:
- m = sys.modules[fullname]
- else:
- print('Reloading', fullname)
- found_now[fullname] = 1
- oldm = sys.modules.get(fullname, None)
- try:
- if oldm is not None:
- m = importlib.reload(oldm)
- else:
- m = importlib.import_module(subname, mod)
- except:
- # load_module probably removed name from modules because of
- # the error. Put back the original module object.
- if oldm:
- sys.modules[fullname] = oldm
- raise
-
- add_submodule(mod, m, fullname, subname)
-
- return m
-
-def add_submodule(mod, submod, fullname, subname):
- """mod.{subname} = submod"""
- if mod is None:
- return #Nothing to do here.
-
- if submod is None:
- submod = sys.modules[fullname]
-
- setattr(mod, subname, submod)
-
- return
-
-def ensure_fromlist(mod, fromlist, buf, recursive):
- """Handle 'from module import a, b, c' imports."""
- if not hasattr(mod, '__path__'):
- return
- for item in fromlist:
- if not hasattr(item, 'rindex'):
- raise TypeError("Item in ``from list'' not a string")
- if item == '*':
- if recursive:
- continue # avoid endless recursion
- try:
- all = mod.__all__
- except AttributeError:
- pass
- else:
- ret = ensure_fromlist(mod, all, buf, 1)
- if not ret:
- return 0
- elif not hasattr(mod, item):
- import_submodule(mod, item, buf + '.' + item)
-
-def deep_import_hook(name, globals=None, locals=None, fromlist=None, level=-1):
- """Replacement for __import__()"""
- parent, buf = get_parent(globals, level)
-
- head, name, buf = load_next(parent, None if level < 0 else parent, name, buf)
-
- tail = head
- while name:
- tail, name, buf = load_next(tail, tail, name, buf)
-
- # If tail is None, both get_parent and load_next found
- # an empty module name: someone called __import__("") or
- # doctored faulty bytecode
- if tail is None:
- raise ValueError('Empty module name')
-
- if not fromlist:
- return head
-
- ensure_fromlist(tail, fromlist, buf, 0)
- return tail
-
-modules_reloading = {}
-
-def deep_reload_hook(m):
- """Replacement for reload()."""
- # Hardcode this one as it would raise a NotImplementedError from the
- # bowels of Python and screw up the import machinery after.
- # unlike other imports the `exclude` list already in place is not enough.
-
- if m is types:
- return m
- if not isinstance(m, ModuleType):
- raise TypeError("reload() argument must be module")
-
- name = m.__name__
-
- if name not in sys.modules:
- raise ImportError("reload(): module %.200s not in sys.modules" % name)
-
- global modules_reloading
- try:
- return modules_reloading[name]
- except:
- modules_reloading[name] = m
-
- try:
- newm = importlib.reload(m)
- except:
- sys.modules[name] = m
- raise
- finally:
- modules_reloading.clear()
- return newm
-
-# Save the original hooks
-original_reload = importlib.reload
-
-# Replacement for reload()
-def reload(
- module,
- exclude=(
- *sys.builtin_module_names,
- "sys",
- "os.path",
- "builtins",
- "__main__",
- "numpy",
- "numpy._globals",
- ),
-):
- """Recursively reload all modules used in the given module. Optionally
- takes a list of modules to exclude from reloading. The default exclude
- list contains modules listed in sys.builtin_module_names with additional
- sys, os.path, builtins and __main__, to prevent, e.g., resetting
- display, exception, and io hooks.
- """
- global found_now
- for i in exclude:
- found_now[i] = 1
- try:
- with replace_import_hook(deep_import_hook):
- return deep_reload_hook(module)
- finally:
- found_now = {}
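A usage sketch following the module docstring; `mypackage` is a hypothetical package under development:

```python
import builtins
from IPython.lib import deepreload

builtins.reload = deepreload.reload  # install the recursive reload globally

import mypackage.subtree             # hypothetical package
reload(mypackage.subtree)            # reloads subtree plus everything it imports
```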
diff --git a/spaces/SungBeom/chatwine-korean/.venv/Lib/site-packages/IPython/sphinxext/__init__.py b/spaces/SungBeom/chatwine-korean/.venv/Lib/site-packages/IPython/sphinxext/__init__.py
deleted file mode 100644
index e69de29bb2d1d6434b8b29ae775ad8c2e48c5391..0000000000000000000000000000000000000000
diff --git a/spaces/SungBeom/chatwine-korean/.venv/Lib/site-packages/docarray/base_doc/io/__init__.py b/spaces/SungBeom/chatwine-korean/.venv/Lib/site-packages/docarray/base_doc/io/__init__.py
deleted file mode 100644
index e69de29bb2d1d6434b8b29ae775ad8c2e48c5391..0000000000000000000000000000000000000000
diff --git a/spaces/Supedsa/rvc-models/lib/infer_pack/modules.py b/spaces/Supedsa/rvc-models/lib/infer_pack/modules.py
deleted file mode 100644
index c83289df7c79a4810dacd15c050148544ba0b6a9..0000000000000000000000000000000000000000
--- a/spaces/Supedsa/rvc-models/lib/infer_pack/modules.py
+++ /dev/null
@@ -1,522 +0,0 @@
-import copy
-import math
-import numpy as np
-import scipy
-import torch
-from torch import nn
-from torch.nn import functional as F
-
-from torch.nn import Conv1d, ConvTranspose1d, AvgPool1d, Conv2d
-from torch.nn.utils import weight_norm, remove_weight_norm
-
-from lib.infer_pack import commons
-from lib.infer_pack.commons import init_weights, get_padding
-from lib.infer_pack.transforms import piecewise_rational_quadratic_transform
-
-
-LRELU_SLOPE = 0.1
-
-
-class LayerNorm(nn.Module):
- def __init__(self, channels, eps=1e-5):
- super().__init__()
- self.channels = channels
- self.eps = eps
-
- self.gamma = nn.Parameter(torch.ones(channels))
- self.beta = nn.Parameter(torch.zeros(channels))
-
- def forward(self, x):
- x = x.transpose(1, -1)
- x = F.layer_norm(x, (self.channels,), self.gamma, self.beta, self.eps)
- return x.transpose(1, -1)
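A quick check of this channels-first LayerNorm (arbitrary shapes): it normalizes over the channel dimension of the (B, C, T) tensors used throughout this file.

```python
import torch

ln = LayerNorm(channels=192)
x = torch.randn(4, 192, 100)  # (B, C, T)
y = ln(x)
print(y.shape)                # torch.Size([4, 192, 100]); stats taken over C
```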
-
-
-class ConvReluNorm(nn.Module):
- def __init__(
- self,
- in_channels,
- hidden_channels,
- out_channels,
- kernel_size,
- n_layers,
- p_dropout,
- ):
- super().__init__()
- self.in_channels = in_channels
- self.hidden_channels = hidden_channels
- self.out_channels = out_channels
- self.kernel_size = kernel_size
- self.n_layers = n_layers
- self.p_dropout = p_dropout
-        assert n_layers > 1, "Number of layers should be larger than 1."
-
- self.conv_layers = nn.ModuleList()
- self.norm_layers = nn.ModuleList()
- self.conv_layers.append(
- nn.Conv1d(
- in_channels, hidden_channels, kernel_size, padding=kernel_size // 2
- )
- )
- self.norm_layers.append(LayerNorm(hidden_channels))
- self.relu_drop = nn.Sequential(nn.ReLU(), nn.Dropout(p_dropout))
- for _ in range(n_layers - 1):
- self.conv_layers.append(
- nn.Conv1d(
- hidden_channels,
- hidden_channels,
- kernel_size,
- padding=kernel_size // 2,
- )
- )
- self.norm_layers.append(LayerNorm(hidden_channels))
- self.proj = nn.Conv1d(hidden_channels, out_channels, 1)
- self.proj.weight.data.zero_()
- self.proj.bias.data.zero_()
-
- def forward(self, x, x_mask):
- x_org = x
- for i in range(self.n_layers):
- x = self.conv_layers[i](x * x_mask)
- x = self.norm_layers[i](x)
- x = self.relu_drop(x)
- x = x_org + self.proj(x)
- return x * x_mask
-
-
-class DDSConv(nn.Module):
- """
-    Dilated and Depth-Separable Convolution
- """
-
- def __init__(self, channels, kernel_size, n_layers, p_dropout=0.0):
- super().__init__()
- self.channels = channels
- self.kernel_size = kernel_size
- self.n_layers = n_layers
- self.p_dropout = p_dropout
-
- self.drop = nn.Dropout(p_dropout)
- self.convs_sep = nn.ModuleList()
- self.convs_1x1 = nn.ModuleList()
- self.norms_1 = nn.ModuleList()
- self.norms_2 = nn.ModuleList()
- for i in range(n_layers):
- dilation = kernel_size**i
- padding = (kernel_size * dilation - dilation) // 2
- self.convs_sep.append(
- nn.Conv1d(
- channels,
- channels,
- kernel_size,
- groups=channels,
- dilation=dilation,
- padding=padding,
- )
- )
- self.convs_1x1.append(nn.Conv1d(channels, channels, 1))
- self.norms_1.append(LayerNorm(channels))
- self.norms_2.append(LayerNorm(channels))
-
- def forward(self, x, x_mask, g=None):
- if g is not None:
- x = x + g
- for i in range(self.n_layers):
- y = self.convs_sep[i](x * x_mask)
- y = self.norms_1[i](y)
- y = F.gelu(y)
- y = self.convs_1x1[i](y)
- y = self.norms_2[i](y)
- y = F.gelu(y)
- y = self.drop(y)
- x = x + y
- return x * x_mask
-
-
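-# WaveNet-style stack of dilated, weight-normalized 1D convolutions with gated
-# tanh/sigmoid activations; the optional tensor g injects global conditioning.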
-class WN(torch.nn.Module):
- def __init__(
- self,
- hidden_channels,
- kernel_size,
- dilation_rate,
- n_layers,
- gin_channels=0,
- p_dropout=0,
- ):
- super(WN, self).__init__()
- assert kernel_size % 2 == 1
- self.hidden_channels = hidden_channels
- self.kernel_size = (kernel_size,)
- self.dilation_rate = dilation_rate
- self.n_layers = n_layers
- self.gin_channels = gin_channels
- self.p_dropout = p_dropout
-
- self.in_layers = torch.nn.ModuleList()
- self.res_skip_layers = torch.nn.ModuleList()
- self.drop = nn.Dropout(p_dropout)
-
- if gin_channels != 0:
- cond_layer = torch.nn.Conv1d(
- gin_channels, 2 * hidden_channels * n_layers, 1
- )
- self.cond_layer = torch.nn.utils.weight_norm(cond_layer, name="weight")
-
- for i in range(n_layers):
- dilation = dilation_rate**i
- padding = int((kernel_size * dilation - dilation) / 2)
- in_layer = torch.nn.Conv1d(
- hidden_channels,
- 2 * hidden_channels,
- kernel_size,
- dilation=dilation,
- padding=padding,
- )
- in_layer = torch.nn.utils.weight_norm(in_layer, name="weight")
- self.in_layers.append(in_layer)
-
- # last one is not necessary
- if i < n_layers - 1:
- res_skip_channels = 2 * hidden_channels
- else:
- res_skip_channels = hidden_channels
-
- res_skip_layer = torch.nn.Conv1d(hidden_channels, res_skip_channels, 1)
- res_skip_layer = torch.nn.utils.weight_norm(res_skip_layer, name="weight")
- self.res_skip_layers.append(res_skip_layer)
-
- def forward(self, x, x_mask, g=None, **kwargs):
- output = torch.zeros_like(x)
- n_channels_tensor = torch.IntTensor([self.hidden_channels])
-
- if g is not None:
- g = self.cond_layer(g)
-
- for i in range(self.n_layers):
- x_in = self.in_layers[i](x)
- if g is not None:
- cond_offset = i * 2 * self.hidden_channels
- g_l = g[:, cond_offset : cond_offset + 2 * self.hidden_channels, :]
- else:
- g_l = torch.zeros_like(x_in)
-
- acts = commons.fused_add_tanh_sigmoid_multiply(x_in, g_l, n_channels_tensor)
- acts = self.drop(acts)
-
- res_skip_acts = self.res_skip_layers[i](acts)
- if i < self.n_layers - 1:
- res_acts = res_skip_acts[:, : self.hidden_channels, :]
- x = (x + res_acts) * x_mask
- output = output + res_skip_acts[:, self.hidden_channels :, :]
- else:
- output = output + res_skip_acts
- return output * x_mask
-
- def remove_weight_norm(self):
- if self.gin_channels != 0:
- torch.nn.utils.remove_weight_norm(self.cond_layer)
- for l in self.in_layers:
- torch.nn.utils.remove_weight_norm(l)
- for l in self.res_skip_layers:
- torch.nn.utils.remove_weight_norm(l)
-
-
-class ResBlock1(torch.nn.Module):
- def __init__(self, channels, kernel_size=3, dilation=(1, 3, 5)):
- super(ResBlock1, self).__init__()
- self.convs1 = nn.ModuleList(
- [
- weight_norm(
- Conv1d(
- channels,
- channels,
- kernel_size,
- 1,
- dilation=dilation[0],
- padding=get_padding(kernel_size, dilation[0]),
- )
- ),
- weight_norm(
- Conv1d(
- channels,
- channels,
- kernel_size,
- 1,
- dilation=dilation[1],
- padding=get_padding(kernel_size, dilation[1]),
- )
- ),
- weight_norm(
- Conv1d(
- channels,
- channels,
- kernel_size,
- 1,
- dilation=dilation[2],
- padding=get_padding(kernel_size, dilation[2]),
- )
- ),
- ]
- )
- self.convs1.apply(init_weights)
-
- self.convs2 = nn.ModuleList(
- [
- weight_norm(
- Conv1d(
- channels,
- channels,
- kernel_size,
- 1,
- dilation=1,
- padding=get_padding(kernel_size, 1),
- )
- ),
- weight_norm(
- Conv1d(
- channels,
- channels,
- kernel_size,
- 1,
- dilation=1,
- padding=get_padding(kernel_size, 1),
- )
- ),
- weight_norm(
- Conv1d(
- channels,
- channels,
- kernel_size,
- 1,
- dilation=1,
- padding=get_padding(kernel_size, 1),
- )
- ),
- ]
- )
- self.convs2.apply(init_weights)
-
- def forward(self, x, x_mask=None):
- for c1, c2 in zip(self.convs1, self.convs2):
- xt = F.leaky_relu(x, LRELU_SLOPE)
- if x_mask is not None:
- xt = xt * x_mask
- xt = c1(xt)
- xt = F.leaky_relu(xt, LRELU_SLOPE)
- if x_mask is not None:
- xt = xt * x_mask
- xt = c2(xt)
- x = xt + x
- if x_mask is not None:
- x = x * x_mask
- return x
-
- def remove_weight_norm(self):
- for l in self.convs1:
- remove_weight_norm(l)
- for l in self.convs2:
- remove_weight_norm(l)
-
-
-class ResBlock2(torch.nn.Module):
- def __init__(self, channels, kernel_size=3, dilation=(1, 3)):
- super(ResBlock2, self).__init__()
- self.convs = nn.ModuleList(
- [
- weight_norm(
- Conv1d(
- channels,
- channels,
- kernel_size,
- 1,
- dilation=dilation[0],
- padding=get_padding(kernel_size, dilation[0]),
- )
- ),
- weight_norm(
- Conv1d(
- channels,
- channels,
- kernel_size,
- 1,
- dilation=dilation[1],
- padding=get_padding(kernel_size, dilation[1]),
- )
- ),
- ]
- )
- self.convs.apply(init_weights)
-
- def forward(self, x, x_mask=None):
- for c in self.convs:
- xt = F.leaky_relu(x, LRELU_SLOPE)
- if x_mask is not None:
- xt = xt * x_mask
- xt = c(xt)
- x = xt + x
- if x_mask is not None:
- x = x * x_mask
- return x
-
- def remove_weight_norm(self):
- for l in self.convs:
- remove_weight_norm(l)
-
-
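-# The layers below (Log, Flip, ElementwiseAffine, ResidualCouplingLayer,
-# ConvFlow) are invertible flow steps: the forward pass returns (y, logdet),
-# while reverse=True inverts the mapping and returns only x.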
-class Log(nn.Module):
- def forward(self, x, x_mask, reverse=False, **kwargs):
- if not reverse:
- y = torch.log(torch.clamp_min(x, 1e-5)) * x_mask
- logdet = torch.sum(-y, [1, 2])
- return y, logdet
- else:
- x = torch.exp(x) * x_mask
- return x
-
-
-class Flip(nn.Module):
- def forward(self, x, *args, reverse=False, **kwargs):
- x = torch.flip(x, [1])
- if not reverse:
- logdet = torch.zeros(x.size(0)).to(dtype=x.dtype, device=x.device)
- return x, logdet
- else:
- return x
-
-
-class ElementwiseAffine(nn.Module):
- def __init__(self, channels):
- super().__init__()
- self.channels = channels
- self.m = nn.Parameter(torch.zeros(channels, 1))
- self.logs = nn.Parameter(torch.zeros(channels, 1))
-
- def forward(self, x, x_mask, reverse=False, **kwargs):
- if not reverse:
- y = self.m + torch.exp(self.logs) * x
- y = y * x_mask
- logdet = torch.sum(self.logs * x_mask, [1, 2])
- return y, logdet
- else:
- x = (x - self.m) * torch.exp(-self.logs) * x_mask
- return x
-
-
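-# Affine coupling: the channels are split in half, the first half passes
-# through unchanged, and the second half is shifted/scaled by statistics
-# predicted from the first (the zero-initialized post conv makes the layer
-# an identity at initialization).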
-class ResidualCouplingLayer(nn.Module):
- def __init__(
- self,
- channels,
- hidden_channels,
- kernel_size,
- dilation_rate,
- n_layers,
- p_dropout=0,
- gin_channels=0,
- mean_only=False,
- ):
- assert channels % 2 == 0, "channels should be divisible by 2"
- super().__init__()
- self.channels = channels
- self.hidden_channels = hidden_channels
- self.kernel_size = kernel_size
- self.dilation_rate = dilation_rate
- self.n_layers = n_layers
- self.half_channels = channels // 2
- self.mean_only = mean_only
-
- self.pre = nn.Conv1d(self.half_channels, hidden_channels, 1)
- self.enc = WN(
- hidden_channels,
- kernel_size,
- dilation_rate,
- n_layers,
- p_dropout=p_dropout,
- gin_channels=gin_channels,
- )
- self.post = nn.Conv1d(hidden_channels, self.half_channels * (2 - mean_only), 1)
- self.post.weight.data.zero_()
- self.post.bias.data.zero_()
-
- def forward(self, x, x_mask, g=None, reverse=False):
- x0, x1 = torch.split(x, [self.half_channels] * 2, 1)
- h = self.pre(x0) * x_mask
- h = self.enc(h, x_mask, g=g)
- stats = self.post(h) * x_mask
- if not self.mean_only:
- m, logs = torch.split(stats, [self.half_channels] * 2, 1)
- else:
- m = stats
- logs = torch.zeros_like(m)
-
- if not reverse:
- x1 = m + x1 * torch.exp(logs) * x_mask
- x = torch.cat([x0, x1], 1)
- logdet = torch.sum(logs, [1, 2])
- return x, logdet
- else:
- x1 = (x1 - m) * torch.exp(-logs) * x_mask
- x = torch.cat([x0, x1], 1)
- return x
-
- def remove_weight_norm(self):
- self.enc.remove_weight_norm()
-
-
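-# Spline coupling: predicts num_bins * 3 - 1 values per half-channel (bin
-# widths, heights, and knot derivatives) for a piecewise rational-quadratic
-# transform applied to the second half of the channels.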
-class ConvFlow(nn.Module):
- def __init__(
- self,
- in_channels,
- filter_channels,
- kernel_size,
- n_layers,
- num_bins=10,
- tail_bound=5.0,
- ):
- super().__init__()
- self.in_channels = in_channels
- self.filter_channels = filter_channels
- self.kernel_size = kernel_size
- self.n_layers = n_layers
- self.num_bins = num_bins
- self.tail_bound = tail_bound
- self.half_channels = in_channels // 2
-
- self.pre = nn.Conv1d(self.half_channels, filter_channels, 1)
- self.convs = DDSConv(filter_channels, kernel_size, n_layers, p_dropout=0.0)
- self.proj = nn.Conv1d(
- filter_channels, self.half_channels * (num_bins * 3 - 1), 1
- )
- self.proj.weight.data.zero_()
- self.proj.bias.data.zero_()
-
- def forward(self, x, x_mask, g=None, reverse=False):
- x0, x1 = torch.split(x, [self.half_channels] * 2, 1)
- h = self.pre(x0)
- h = self.convs(h, x_mask, g=g)
- h = self.proj(h) * x_mask
-
- b, c, t = x0.shape
- h = h.reshape(b, c, -1, t).permute(0, 1, 3, 2) # [b, cx?, t] -> [b, c, t, ?]
-
- unnormalized_widths = h[..., : self.num_bins] / math.sqrt(self.filter_channels)
- unnormalized_heights = h[..., self.num_bins : 2 * self.num_bins] / math.sqrt(
- self.filter_channels
- )
- unnormalized_derivatives = h[..., 2 * self.num_bins :]
-
- x1, logabsdet = piecewise_rational_quadratic_transform(
- x1,
- unnormalized_widths,
- unnormalized_heights,
- unnormalized_derivatives,
- inverse=reverse,
- tails="linear",
- tail_bound=self.tail_bound,
- )
-
- x = torch.cat([x0, x1], 1) * x_mask
- logdet = torch.sum(logabsdet * x_mask, [1, 2])
- if not reverse:
- return x, logdet
- else:
- return x
diff --git a/spaces/Supedsa/rvc-models/lib/infer_pack/onnx_inference.py b/spaces/Supedsa/rvc-models/lib/infer_pack/onnx_inference.py
deleted file mode 100644
index 6517853be49e61c427cf7cd9b5ed203f6d5f367e..0000000000000000000000000000000000000000
--- a/spaces/Supedsa/rvc-models/lib/infer_pack/onnx_inference.py
+++ /dev/null
@@ -1,145 +0,0 @@
-import onnxruntime
-import librosa
-import numpy as np
-import soundfile
-
-
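-# Thin ONNX Runtime wrapper around a ContentVec/HuBERT-style feature extractor:
-# takes a mono waveform and returns frame-level content embeddings.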
-class ContentVec:
- def __init__(self, vec_path="pretrained/vec-768-layer-12.onnx", device=None):
- print("load model(s) from {}".format(vec_path))
- if device == "cpu" or device is None:
- providers = ["CPUExecutionProvider"]
- elif device == "cuda":
- providers = ["CUDAExecutionProvider", "CPUExecutionProvider"]
- elif device == "dml":
- providers = ["DmlExecutionProvider"]
- else:
- raise RuntimeError("Unsportted Device")
- self.model = onnxruntime.InferenceSession(vec_path, providers=providers)
-
- def __call__(self, wav):
- return self.forward(wav)
-
- def forward(self, wav):
- feats = wav
- if feats.ndim == 2: # double channels
- feats = feats.mean(-1)
- assert feats.ndim == 1, feats.ndim
- feats = np.expand_dims(np.expand_dims(feats, 0), 0)
- onnx_input = {self.model.get_inputs()[0].name: feats}
- logits = self.model.run(None, onnx_input)[0]
- return logits.transpose(0, 2, 1)
-
-
-def get_f0_predictor(f0_predictor, hop_length, sampling_rate, **kargs):
- if f0_predictor == "pm":
- from lib.infer_pack.modules.F0Predictor.PMF0Predictor import PMF0Predictor
-
- f0_predictor_object = PMF0Predictor(
- hop_length=hop_length, sampling_rate=sampling_rate
- )
- elif f0_predictor == "harvest":
- from lib.infer_pack.modules.F0Predictor.HarvestF0Predictor import (
- HarvestF0Predictor,
- )
-
- f0_predictor_object = HarvestF0Predictor(
- hop_length=hop_length, sampling_rate=sampling_rate
- )
- elif f0_predictor == "dio":
- from lib.infer_pack.modules.F0Predictor.DioF0Predictor import DioF0Predictor
-
- f0_predictor_object = DioF0Predictor(
- hop_length=hop_length, sampling_rate=sampling_rate
- )
- else:
- raise Exception("Unknown f0 predictor")
- return f0_predictor_object
-
-
-class OnnxRVC:
- def __init__(
- self,
- model_path,
- sr=40000,
- hop_size=512,
- vec_path="vec-768-layer-12",
- device="cpu",
- ):
- vec_path = f"pretrained/{vec_path}.onnx"
- self.vec_model = ContentVec(vec_path, device)
- if device == "cpu" or device is None:
- providers = ["CPUExecutionProvider"]
- elif device == "cuda":
- providers = ["CUDAExecutionProvider", "CPUExecutionProvider"]
- elif device == "dml":
- providers = ["DmlExecutionProvider"]
- else:
- raise RuntimeError("Unsportted Device")
- self.model = onnxruntime.InferenceSession(model_path, providers=providers)
- self.sampling_rate = sr
- self.hop_size = hop_size
-
- def forward(self, hubert, hubert_length, pitch, pitchf, ds, rnd):
- onnx_input = {
- self.model.get_inputs()[0].name: hubert,
- self.model.get_inputs()[1].name: hubert_length,
- self.model.get_inputs()[2].name: pitch,
- self.model.get_inputs()[3].name: pitchf,
- self.model.get_inputs()[4].name: ds,
- self.model.get_inputs()[5].name: rnd,
- }
- return (self.model.run(None, onnx_input)[0] * 32767).astype(np.int16)
-
- def inference(
- self,
- raw_path,
- sid,
- f0_method="dio",
- f0_up_key=0,
- pad_time=0.5,
- cr_threshold=0.02,
- ):
- f0_min = 50
- f0_max = 1100
- f0_mel_min = 1127 * np.log(1 + f0_min / 700)
- f0_mel_max = 1127 * np.log(1 + f0_max / 700)
- f0_predictor = get_f0_predictor(
- f0_method,
- hop_length=self.hop_size,
- sampling_rate=self.sampling_rate,
- threshold=cr_threshold,
- )
- wav, sr = librosa.load(raw_path, sr=self.sampling_rate)
- org_length = len(wav)
- if org_length / sr > 50.0:
- raise RuntimeError("Reached Max Length")
-
-        wav16k = librosa.resample(wav, orig_sr=self.sampling_rate, target_sr=16000)
-
- hubert = self.vec_model(wav16k)
- hubert = np.repeat(hubert, 2, axis=2).transpose(0, 2, 1).astype(np.float32)
- hubert_length = hubert.shape[1]
-
- pitchf = f0_predictor.compute_f0(wav, hubert_length)
- pitchf = pitchf * 2 ** (f0_up_key / 12)
- pitch = pitchf.copy()
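-        # Quantize f0 on the mel scale into 255 coarse bins (1 = unvoiced floor).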
- f0_mel = 1127 * np.log(1 + pitch / 700)
- f0_mel[f0_mel > 0] = (f0_mel[f0_mel > 0] - f0_mel_min) * 254 / (
- f0_mel_max - f0_mel_min
- ) + 1
- f0_mel[f0_mel <= 1] = 1
- f0_mel[f0_mel > 255] = 255
- pitch = np.rint(f0_mel).astype(np.int64)
-
- pitchf = pitchf.reshape(1, len(pitchf)).astype(np.float32)
- pitch = pitch.reshape(1, len(pitch))
- ds = np.array([sid]).astype(np.int64)
-
- rnd = np.random.randn(1, 192, hubert_length).astype(np.float32)
- hubert_length = np.array([hubert_length]).astype(np.int64)
-
- out_wav = self.forward(hubert, hubert_length, pitch, pitchf, ds, rnd).squeeze()
- out_wav = np.pad(out_wav, (0, 2 * self.hop_size), "constant")
- return out_wav[0:org_length]
diff --git a/spaces/Superlang/ImageProcessor/annotator/oneformer/detectron2/config/__init__.py b/spaces/Superlang/ImageProcessor/annotator/oneformer/detectron2/config/__init__.py
deleted file mode 100644
index a78ed118685fcfd869f7a72caf6b94621530196a..0000000000000000000000000000000000000000
--- a/spaces/Superlang/ImageProcessor/annotator/oneformer/detectron2/config/__init__.py
+++ /dev/null
@@ -1,24 +0,0 @@
-# Copyright (c) Facebook, Inc. and its affiliates.
-from .compat import downgrade_config, upgrade_config
-from .config import CfgNode, get_cfg, global_cfg, set_global_cfg, configurable
-from .instantiate import instantiate
-from .lazy import LazyCall, LazyConfig
-
-__all__ = [
- "CfgNode",
- "get_cfg",
- "global_cfg",
- "set_global_cfg",
- "downgrade_config",
- "upgrade_config",
- "configurable",
- "instantiate",
- "LazyCall",
- "LazyConfig",
-]
-
-
-from annotator.oneformer.detectron2.utils.env import fixup_module_metadata
-
-fixup_module_metadata(__name__, globals(), __all__)
-del fixup_module_metadata
diff --git a/spaces/Superlang/ImageProcessor/annotator/oneformer/detectron2/projects/deeplab/__init__.py b/spaces/Superlang/ImageProcessor/annotator/oneformer/detectron2/projects/deeplab/__init__.py
deleted file mode 100644
index dcd88ff0c09d630577e3ac9f8afb5324a80a7be4..0000000000000000000000000000000000000000
--- a/spaces/Superlang/ImageProcessor/annotator/oneformer/detectron2/projects/deeplab/__init__.py
+++ /dev/null
@@ -1,5 +0,0 @@
-# Copyright (c) Facebook, Inc. and its affiliates.
-from .build_solver import build_lr_scheduler
-from .config import add_deeplab_config
-from .resnet import build_resnet_deeplab_backbone
-from .semantic_seg import DeepLabV3Head, DeepLabV3PlusHead
diff --git a/spaces/Superlang/ImageProcessor/annotator/oneformer/detectron2/utils/tracing.py b/spaces/Superlang/ImageProcessor/annotator/oneformer/detectron2/utils/tracing.py
deleted file mode 100644
index 75661131505cee2eecd0b1c9dabcd4d7bd5453b2..0000000000000000000000000000000000000000
--- a/spaces/Superlang/ImageProcessor/annotator/oneformer/detectron2/utils/tracing.py
+++ /dev/null
@@ -1,71 +0,0 @@
-import inspect
-import torch
-
-from annotator.oneformer.detectron2.utils.env import TORCH_VERSION
-
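-# torch.fx moved its tracing-state helpers between releases, so probe for both
-# the current is_fx_tracing() API and the legacy _orig_module_call sentinel.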
-try:
- from torch.fx._symbolic_trace import is_fx_tracing as is_fx_tracing_current
-
- tracing_current_exists = True
-except ImportError:
- tracing_current_exists = False
-
-try:
- from torch.fx._symbolic_trace import _orig_module_call
-
- tracing_legacy_exists = True
-except ImportError:
- tracing_legacy_exists = False
-
-
-@torch.jit.ignore
-def is_fx_tracing_legacy() -> bool:
- """
- Returns a bool indicating whether torch.fx is currently symbolically tracing a module.
- Can be useful for gating module logic that is incompatible with symbolic tracing.
- """
- return torch.nn.Module.__call__ is not _orig_module_call
-
-
-@torch.jit.ignore
-def is_fx_tracing() -> bool:
- """Returns whether execution is currently in
- Torch FX tracing mode"""
- if TORCH_VERSION >= (1, 10) and tracing_current_exists:
- return is_fx_tracing_current()
- elif tracing_legacy_exists:
- return is_fx_tracing_legacy()
- else:
- # Can't find either current or legacy tracing indication code.
-        # Enable the assert_fx_safe() check regardless of tracing status.
- return False
-
-
-@torch.jit.ignore
-def assert_fx_safe(condition: bool, message: str) -> torch.Tensor:
- """An FX-tracing safe version of assert.
- Avoids erroneous type assertion triggering when types are masked inside
- an fx.proxy.Proxy object during tracing.
- Args: condition - either a boolean expression or a string representing
- the condition to test. If this assert triggers an exception when tracing
- due to dynamic control flow, try encasing the expression in quotation
- marks and supplying it as a string."""
- # Must return a concrete tensor for compatibility with PyTorch <=1.8.
- # If <=1.8 compatibility is not needed, return type can be converted to None
- if not is_fx_tracing():
- try:
- if isinstance(condition, str):
- caller_frame = inspect.currentframe().f_back
- torch._assert(
- eval(condition, caller_frame.f_globals, caller_frame.f_locals), message
- )
- return torch.ones(1)
- else:
- torch._assert(condition, message)
- return torch.ones(1)
- except torch.fx.proxy.TraceError as e:
- print(
- "Found a non-FX compatible assertion. Skipping the check. Failure is shown below"
- + str(e)
- )
- return torch.zeros(1)
diff --git a/spaces/TNR-5/chatorO/README.md b/spaces/TNR-5/chatorO/README.md
deleted file mode 100644
index fe30d12eee133ebb5031967d895aa10751fe2265..0000000000000000000000000000000000000000
--- a/spaces/TNR-5/chatorO/README.md
+++ /dev/null
@@ -1,11 +0,0 @@
----
-title: gpt4f
-emoji: ♾️💬
-colorFrom: indigo
-colorTo: yellow
-sdk: docker
-pinned: false
-duplicated_from: rishi1985/gpt4f-4
----
-
-Check out the configuration reference at https://huggingface.co/docs/hub/spaces-config-reference
\ No newline at end of file
diff --git a/spaces/TandCAcceptMe/face-swap-docker/mynewshinyroop/Lib/site-packages/setuptools/_vendor/more_itertools/recipes.py b/spaces/TandCAcceptMe/face-swap-docker/mynewshinyroop/Lib/site-packages/setuptools/_vendor/more_itertools/recipes.py
deleted file mode 100644
index 521abd7c2ca633f90a5ba13a8060c5c3d0c32205..0000000000000000000000000000000000000000
--- a/spaces/TandCAcceptMe/face-swap-docker/mynewshinyroop/Lib/site-packages/setuptools/_vendor/more_itertools/recipes.py
+++ /dev/null
@@ -1,620 +0,0 @@
-"""Imported from the recipes section of the itertools documentation.
-
-All functions taken from the recipes section of the itertools library docs
-[1]_.
-Some backward-compatible usability improvements have been made.
-
-.. [1] http://docs.python.org/library/itertools.html#recipes
-
-"""
-import warnings
-from collections import deque
-from itertools import (
- chain,
- combinations,
- count,
- cycle,
- groupby,
- islice,
- repeat,
- starmap,
- tee,
- zip_longest,
-)
-import operator
-from random import randrange, sample, choice
-
-__all__ = [
- 'all_equal',
- 'consume',
- 'convolve',
- 'dotproduct',
- 'first_true',
- 'flatten',
- 'grouper',
- 'iter_except',
- 'ncycles',
- 'nth',
- 'nth_combination',
- 'padnone',
- 'pad_none',
- 'pairwise',
- 'partition',
- 'powerset',
- 'prepend',
- 'quantify',
- 'random_combination_with_replacement',
- 'random_combination',
- 'random_permutation',
- 'random_product',
- 'repeatfunc',
- 'roundrobin',
- 'tabulate',
- 'tail',
- 'take',
- 'unique_everseen',
- 'unique_justseen',
-]
-
-
-def take(n, iterable):
- """Return first *n* items of the iterable as a list.
-
- >>> take(3, range(10))
- [0, 1, 2]
-
- If there are fewer than *n* items in the iterable, all of them are
- returned.
-
- >>> take(10, range(3))
- [0, 1, 2]
-
- """
- return list(islice(iterable, n))
-
-
-def tabulate(function, start=0):
- """Return an iterator over the results of ``func(start)``,
- ``func(start + 1)``, ``func(start + 2)``...
-
- *func* should be a function that accepts one integer argument.
-
- If *start* is not specified it defaults to 0. It will be incremented each
- time the iterator is advanced.
-
- >>> square = lambda x: x ** 2
- >>> iterator = tabulate(square, -3)
- >>> take(4, iterator)
- [9, 4, 1, 0]
-
- """
- return map(function, count(start))
-
-
-def tail(n, iterable):
- """Return an iterator over the last *n* items of *iterable*.
-
- >>> t = tail(3, 'ABCDEFG')
- >>> list(t)
- ['E', 'F', 'G']
-
- """
- return iter(deque(iterable, maxlen=n))
-
-
-def consume(iterator, n=None):
- """Advance *iterable* by *n* steps. If *n* is ``None``, consume it
- entirely.
-
- Efficiently exhausts an iterator without returning values. Defaults to
- consuming the whole iterator, but an optional second argument may be
- provided to limit consumption.
-
- >>> i = (x for x in range(10))
- >>> next(i)
- 0
- >>> consume(i, 3)
- >>> next(i)
- 4
- >>> consume(i)
- >>> next(i)
- Traceback (most recent call last):
- File "", line 1, in
- StopIteration
-
- If the iterator has fewer items remaining than the provided limit, the
- whole iterator will be consumed.
-
- >>> i = (x for x in range(3))
- >>> consume(i, 5)
- >>> next(i)
- Traceback (most recent call last):
- File "", line 1, in
- StopIteration
-
- """
- # Use functions that consume iterators at C speed.
- if n is None:
- # feed the entire iterator into a zero-length deque
- deque(iterator, maxlen=0)
- else:
- # advance to the empty slice starting at position n
- next(islice(iterator, n, n), None)
-
-
-def nth(iterable, n, default=None):
- """Returns the nth item or a default value.
-
- >>> l = range(10)
- >>> nth(l, 3)
- 3
- >>> nth(l, 20, "zebra")
- 'zebra'
-
- """
- return next(islice(iterable, n, None), default)
-
-
-def all_equal(iterable):
- """
- Returns ``True`` if all the elements are equal to each other.
-
- >>> all_equal('aaaa')
- True
- >>> all_equal('aaab')
- False
-
- """
- g = groupby(iterable)
- return next(g, True) and not next(g, False)
-
-
-def quantify(iterable, pred=bool):
- """Return the how many times the predicate is true.
-
- >>> quantify([True, False, True])
- 2
-
- """
- return sum(map(pred, iterable))
-
-
-def pad_none(iterable):
- """Returns the sequence of elements and then returns ``None`` indefinitely.
-
- >>> take(5, pad_none(range(3)))
- [0, 1, 2, None, None]
-
- Useful for emulating the behavior of the built-in :func:`map` function.
-
- See also :func:`padded`.
-
- """
- return chain(iterable, repeat(None))
-
-
-padnone = pad_none
-
-
-def ncycles(iterable, n):
- """Returns the sequence elements *n* times
-
- >>> list(ncycles(["a", "b"], 3))
- ['a', 'b', 'a', 'b', 'a', 'b']
-
- """
- return chain.from_iterable(repeat(tuple(iterable), n))
-
-
-def dotproduct(vec1, vec2):
- """Returns the dot product of the two iterables.
-
- >>> dotproduct([10, 10], [20, 20])
- 400
-
- """
- return sum(map(operator.mul, vec1, vec2))
-
-
-def flatten(listOfLists):
- """Return an iterator flattening one level of nesting in a list of lists.
-
- >>> list(flatten([[0, 1], [2, 3]]))
- [0, 1, 2, 3]
-
- See also :func:`collapse`, which can flatten multiple levels of nesting.
-
- """
- return chain.from_iterable(listOfLists)
-
-
-def repeatfunc(func, times=None, *args):
- """Call *func* with *args* repeatedly, returning an iterable over the
- results.
-
- If *times* is specified, the iterable will terminate after that many
- repetitions:
-
- >>> from operator import add
- >>> times = 4
- >>> args = 3, 5
- >>> list(repeatfunc(add, times, *args))
- [8, 8, 8, 8]
-
- If *times* is ``None`` the iterable will not terminate:
-
- >>> from random import randrange
- >>> times = None
- >>> args = 1, 11
- >>> take(6, repeatfunc(randrange, times, *args)) # doctest:+SKIP
- [2, 4, 8, 1, 8, 4]
-
- """
- if times is None:
- return starmap(func, repeat(args))
- return starmap(func, repeat(args, times))
-
-
-def _pairwise(iterable):
- """Returns an iterator of paired items, overlapping, from the original
-
- >>> take(4, pairwise(count()))
- [(0, 1), (1, 2), (2, 3), (3, 4)]
-
- On Python 3.10 and above, this is an alias for :func:`itertools.pairwise`.
-
- """
- a, b = tee(iterable)
- next(b, None)
- yield from zip(a, b)
-
-
-try:
- from itertools import pairwise as itertools_pairwise
-except ImportError:
- pairwise = _pairwise
-else:
-
- def pairwise(iterable):
- yield from itertools_pairwise(iterable)
-
- pairwise.__doc__ = _pairwise.__doc__
-
-
-def grouper(iterable, n, fillvalue=None):
- """Collect data into fixed-length chunks or blocks.
-
- >>> list(grouper('ABCDEFG', 3, 'x'))
- [('A', 'B', 'C'), ('D', 'E', 'F'), ('G', 'x', 'x')]
-
- """
- if isinstance(iterable, int):
- warnings.warn(
- "grouper expects iterable as first parameter", DeprecationWarning
- )
- n, iterable = iterable, n
- args = [iter(iterable)] * n
- return zip_longest(fillvalue=fillvalue, *args)
-
-
-def roundrobin(*iterables):
- """Yields an item from each iterable, alternating between them.
-
- >>> list(roundrobin('ABC', 'D', 'EF'))
- ['A', 'D', 'E', 'B', 'F', 'C']
-
- This function produces the same output as :func:`interleave_longest`, but
- may perform better for some inputs (in particular when the number of
- iterables is small).
-
- """
- # Recipe credited to George Sakkis
- pending = len(iterables)
- nexts = cycle(iter(it).__next__ for it in iterables)
- while pending:
- try:
- for next in nexts:
- yield next()
- except StopIteration:
- pending -= 1
- nexts = cycle(islice(nexts, pending))
-
-
-def partition(pred, iterable):
- """
- Returns a 2-tuple of iterables derived from the input iterable.
- The first yields the items that have ``pred(item) == False``.
- The second yields the items that have ``pred(item) == True``.
-
- >>> is_odd = lambda x: x % 2 != 0
- >>> iterable = range(10)
- >>> even_items, odd_items = partition(is_odd, iterable)
- >>> list(even_items), list(odd_items)
- ([0, 2, 4, 6, 8], [1, 3, 5, 7, 9])
-
- If *pred* is None, :func:`bool` is used.
-
- >>> iterable = [0, 1, False, True, '', ' ']
- >>> false_items, true_items = partition(None, iterable)
- >>> list(false_items), list(true_items)
- ([0, False, ''], [1, True, ' '])
-
- """
- if pred is None:
- pred = bool
-
- evaluations = ((pred(x), x) for x in iterable)
- t1, t2 = tee(evaluations)
- return (
- (x for (cond, x) in t1 if not cond),
- (x for (cond, x) in t2 if cond),
- )
-
-
-def powerset(iterable):
- """Yields all possible subsets of the iterable.
-
- >>> list(powerset([1, 2, 3]))
- [(), (1,), (2,), (3,), (1, 2), (1, 3), (2, 3), (1, 2, 3)]
-
- :func:`powerset` will operate on iterables that aren't :class:`set`
- instances, so repeated elements in the input will produce repeated elements
- in the output. Use :func:`unique_everseen` on the input to avoid generating
- duplicates:
-
- >>> seq = [1, 1, 0]
- >>> list(powerset(seq))
- [(), (1,), (1,), (0,), (1, 1), (1, 0), (1, 0), (1, 1, 0)]
- >>> from more_itertools import unique_everseen
- >>> list(powerset(unique_everseen(seq)))
- [(), (1,), (0,), (1, 0)]
-
- """
- s = list(iterable)
- return chain.from_iterable(combinations(s, r) for r in range(len(s) + 1))
-
-
-def unique_everseen(iterable, key=None):
- """
- Yield unique elements, preserving order.
-
- >>> list(unique_everseen('AAAABBBCCDAABBB'))
- ['A', 'B', 'C', 'D']
- >>> list(unique_everseen('ABBCcAD', str.lower))
- ['A', 'B', 'C', 'D']
-
- Sequences with a mix of hashable and unhashable items can be used.
- The function will be slower (i.e., `O(n^2)`) for unhashable items.
-
- Remember that ``list`` objects are unhashable - you can use the *key*
- parameter to transform the list to a tuple (which is hashable) to
- avoid a slowdown.
-
- >>> iterable = ([1, 2], [2, 3], [1, 2])
- >>> list(unique_everseen(iterable)) # Slow
- [[1, 2], [2, 3]]
- >>> list(unique_everseen(iterable, key=tuple)) # Faster
- [[1, 2], [2, 3]]
-
-    Similarly, you may want to convert unhashable ``set`` objects with
- ``key=frozenset``. For ``dict`` objects,
- ``key=lambda x: frozenset(x.items())`` can be used.
-
- """
- seenset = set()
- seenset_add = seenset.add
- seenlist = []
- seenlist_add = seenlist.append
- use_key = key is not None
-
- for element in iterable:
- k = key(element) if use_key else element
- try:
- if k not in seenset:
- seenset_add(k)
- yield element
- except TypeError:
- if k not in seenlist:
- seenlist_add(k)
- yield element
-
-
-def unique_justseen(iterable, key=None):
- """Yields elements in order, ignoring serial duplicates
-
- >>> list(unique_justseen('AAAABBBCCDAABBB'))
- ['A', 'B', 'C', 'D', 'A', 'B']
- >>> list(unique_justseen('ABBCcAD', str.lower))
- ['A', 'B', 'C', 'A', 'D']
-
- """
- return map(next, map(operator.itemgetter(1), groupby(iterable, key)))
-
-
-def iter_except(func, exception, first=None):
- """Yields results from a function repeatedly until an exception is raised.
-
- Converts a call-until-exception interface to an iterator interface.
- Like ``iter(func, sentinel)``, but uses an exception instead of a sentinel
- to end the loop.
-
- >>> l = [0, 1, 2]
- >>> list(iter_except(l.pop, IndexError))
- [2, 1, 0]
-
- """
- try:
- if first is not None:
- yield first()
- while 1:
- yield func()
- except exception:
- pass
-
-
-def first_true(iterable, default=None, pred=None):
- """
- Returns the first true value in the iterable.
-
-    If no true value is found, returns *default*.
-
-    If *pred* is not None, returns the first item for which
-    ``pred(item) == True``.
-
- >>> first_true(range(10))
- 1
- >>> first_true(range(10), pred=lambda x: x > 5)
- 6
- >>> first_true(range(10), default='missing', pred=lambda x: x > 9)
- 'missing'
-
- """
- return next(filter(pred, iterable), default)
-
-
-def random_product(*args, repeat=1):
- """Draw an item at random from each of the input iterables.
-
- >>> random_product('abc', range(4), 'XYZ') # doctest:+SKIP
- ('c', 3, 'Z')
-
- If *repeat* is provided as a keyword argument, that many items will be
- drawn from each iterable.
-
- >>> random_product('abcd', range(4), repeat=2) # doctest:+SKIP
- ('a', 2, 'd', 3)
-
-    This is equivalent to taking a random selection from
-    ``itertools.product(*args, **kwargs)``.
-
- """
- pools = [tuple(pool) for pool in args] * repeat
- return tuple(choice(pool) for pool in pools)
-
-
-def random_permutation(iterable, r=None):
- """Return a random *r* length permutation of the elements in *iterable*.
-
- If *r* is not specified or is ``None``, then *r* defaults to the length of
- *iterable*.
-
- >>> random_permutation(range(5)) # doctest:+SKIP
- (3, 4, 0, 1, 2)
-
-    This is equivalent to taking a random selection from
- ``itertools.permutations(iterable, r)``.
-
- """
- pool = tuple(iterable)
- r = len(pool) if r is None else r
- return tuple(sample(pool, r))
-
-
-def random_combination(iterable, r):
- """Return a random *r* length subsequence of the elements in *iterable*.
-
- >>> random_combination(range(5), 3) # doctest:+SKIP
- (2, 3, 4)
-
-    This is equivalent to taking a random selection from
- ``itertools.combinations(iterable, r)``.
-
- """
- pool = tuple(iterable)
- n = len(pool)
- indices = sorted(sample(range(n), r))
- return tuple(pool[i] for i in indices)
-
-
-def random_combination_with_replacement(iterable, r):
- """Return a random *r* length subsequence of elements in *iterable*,
- allowing individual elements to be repeated.
-
- >>> random_combination_with_replacement(range(3), 5) # doctest:+SKIP
- (0, 0, 1, 2, 2)
-
-    This is equivalent to taking a random selection from
- ``itertools.combinations_with_replacement(iterable, r)``.
-
- """
- pool = tuple(iterable)
- n = len(pool)
- indices = sorted(randrange(n) for i in range(r))
- return tuple(pool[i] for i in indices)
-
-
-def nth_combination(iterable, r, index):
- """Equivalent to ``list(combinations(iterable, r))[index]``.
-
- The subsequences of *iterable* that are of length *r* can be ordered
- lexicographically. :func:`nth_combination` computes the subsequence at
- sort position *index* directly, without computing the previous
- subsequences.
-
- >>> nth_combination(range(5), 3, 5)
- (0, 3, 4)
-
-    ``ValueError`` will be raised if *r* is negative or greater than the length
- of *iterable*.
- ``IndexError`` will be raised if the given *index* is invalid.
- """
- pool = tuple(iterable)
- n = len(pool)
- if (r < 0) or (r > n):
- raise ValueError
-
- c = 1
- k = min(r, n - r)
- for i in range(1, k + 1):
- c = c * (n - k + i) // i
-
- if index < 0:
- index += c
-
- if (index < 0) or (index >= c):
- raise IndexError
-
- result = []
- while r:
- c, n, r = c * r // n, n - 1, r - 1
- while index >= c:
- index -= c
- c, n = c * (n - r) // n, n - 1
- result.append(pool[-1 - n])
-
- return tuple(result)
-
-
-def prepend(value, iterator):
- """Yield *value*, followed by the elements in *iterator*.
-
- >>> value = '0'
- >>> iterator = ['1', '2', '3']
- >>> list(prepend(value, iterator))
- ['0', '1', '2', '3']
-
- To prepend multiple values, see :func:`itertools.chain`
- or :func:`value_chain`.
-
- """
- return chain([value], iterator)
-
-
-def convolve(signal, kernel):
- """Convolve the iterable *signal* with the iterable *kernel*.
-
- >>> signal = (1, 2, 3, 4, 5)
- >>> kernel = [3, 2, 1]
- >>> list(convolve(signal, kernel))
- [3, 8, 14, 20, 26, 14, 5]
-
- Note: the input arguments are not interchangeable, as the *kernel*
- is immediately consumed and stored.
-
- """
- kernel = tuple(kernel)[::-1]
- n = len(kernel)
- window = deque([0], maxlen=n) * n
- for x in chain(signal, repeat(0, n - 1)):
- window.append(x)
- yield sum(map(operator.mul, kernel, window))
diff --git a/spaces/TencentARC/T2I-Adapter-SDXL/assets/README.md b/spaces/TencentARC/T2I-Adapter-SDXL/assets/README.md
deleted file mode 100644
index 7400bb40f776b376192c18c43b6f910d29751648..0000000000000000000000000000000000000000
--- a/spaces/TencentARC/T2I-Adapter-SDXL/assets/README.md
+++ /dev/null
@@ -1,8 +0,0 @@
-These images were from the following URL:
-
-- https://huggingface.co/Adapter/t2iadapter/resolve/main/figs_SDXLV1.0/org_canny.jpg
-- https://huggingface.co/Adapter/t2iadapter/resolve/main/figs_SDXLV1.0/org_sketch.png
-- https://huggingface.co/Adapter/t2iadapter/resolve/main/figs_SDXLV1.0/org_lin.jpg
-- https://huggingface.co/Adapter/t2iadapter/resolve/main/figs_SDXLV1.0/org_mid.jpg
-- https://huggingface.co/Adapter/t2iadapter/resolve/main/figs_SDXLV1.0/org_zeo.jpg
-- https://huggingface.co/Adapter/t2iadapter/resolve/main/people.jpg
diff --git a/spaces/TencentARC/VLog/models/grit_src/third_party/CenterNet2/detectron2/utils/serialize.py b/spaces/TencentARC/VLog/models/grit_src/third_party/CenterNet2/detectron2/utils/serialize.py
deleted file mode 100644
index 0b38862804b70cf1159a9bc93acdef73c184d883..0000000000000000000000000000000000000000
--- a/spaces/TencentARC/VLog/models/grit_src/third_party/CenterNet2/detectron2/utils/serialize.py
+++ /dev/null
@@ -1,32 +0,0 @@
-# Copyright (c) Facebook, Inc. and its affiliates.
-import cloudpickle
-
-
-class PicklableWrapper(object):
- """
- Wrap an object to make it more picklable, note that it uses
- heavy weight serialization libraries that are slower than pickle.
- It's best to use it only on closures (which are usually not picklable).
-
- This is a simplified version of
- https://github.com/joblib/joblib/blob/master/joblib/externals/loky/cloudpickle_wrapper.py
- """
-
- def __init__(self, obj):
- while isinstance(obj, PicklableWrapper):
- # Wrapping an object twice is no-op
- obj = obj._obj
- self._obj = obj
-
- def __reduce__(self):
- s = cloudpickle.dumps(self._obj)
- return cloudpickle.loads, (s,)
-
- def __call__(self, *args, **kwargs):
- return self._obj(*args, **kwargs)
-
- def __getattr__(self, attr):
- # Ensure that the wrapped object can be used seamlessly as the previous object.
- if attr not in ["_obj"]:
- return getattr(self._obj, attr)
- return getattr(self, attr)
diff --git a/spaces/TencentARC/VLog/models/grit_src/third_party/CenterNet2/projects/CenterNet2/centernet/modeling/roi_heads/fed_loss.py b/spaces/TencentARC/VLog/models/grit_src/third_party/CenterNet2/projects/CenterNet2/centernet/modeling/roi_heads/fed_loss.py
deleted file mode 100644
index 290f0f07204e78ef2c4ff918aa500b04330279e6..0000000000000000000000000000000000000000
--- a/spaces/TencentARC/VLog/models/grit_src/third_party/CenterNet2/projects/CenterNet2/centernet/modeling/roi_heads/fed_loss.py
+++ /dev/null
@@ -1,31 +0,0 @@
-import torch
-import json
-import numpy as np
-from torch.nn import functional as F
-
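-# Federated-loss helpers (CenterNet2): sample a subset of classes per batch,
-# always keeping the ground-truth classes and drawing extra negatives with
-# probability proportional to image_count ** freq_weight.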
-def load_class_freq(
- path='datasets/lvis/lvis_v1_train_cat_info.json',
- freq_weight=0.5):
- cat_info = json.load(open(path, 'r'))
- cat_info = torch.tensor(
- [c['image_count'] for c in sorted(cat_info, key=lambda x: x['id'])])
- freq_weight = cat_info.float() ** freq_weight
- return freq_weight
-
-def get_fed_loss_inds(
-    gt_classes, num_sample_cats=50, C=1203,
- weight=None, fed_cls_inds=-1):
- appeared = torch.unique(gt_classes) # C'
- prob = appeared.new_ones(C + 1).float()
- prob[-1] = 0
- if len(appeared) < num_sample_cats:
- if weight is not None:
- prob[:C] = weight.float().clone()
- prob[appeared] = 0
- if fed_cls_inds > 0:
- prob[fed_cls_inds:] = 0
- more_appeared = torch.multinomial(
- prob, num_sample_cats - len(appeared),
- replacement=False)
- appeared = torch.cat([appeared, more_appeared])
- return appeared
\ No newline at end of file
diff --git a/spaces/Thaweewat/ControlNet-Architecture/app.py b/spaces/Thaweewat/ControlNet-Architecture/app.py
deleted file mode 100644
index b6206efea8fcd1556b7ee907e13cd1b14694d6d8..0000000000000000000000000000000000000000
--- a/spaces/Thaweewat/ControlNet-Architecture/app.py
+++ /dev/null
@@ -1,119 +0,0 @@
-import cv2
-import einops
-import gradio as gr
-import numpy as np
-import torch
-
-from pytorch_lightning import seed_everything
-from util import resize_image, HWC3, apply_canny
-from ldm.models.diffusion.ddim import DDIMSampler
-from annotator.openpose import apply_openpose
-from cldm.model import create_model, load_state_dict
-from huggingface_hub import hf_hub_url, cached_download
-
-
-REPO_ID = "lllyasviel/ControlNet"
-scribble_checkpoint = "models/control_sd15_scribble.pth"
-scribble_model = create_model('./models/cldm_v15.yaml').cpu()
-scribble_model.load_state_dict(load_state_dict(cached_download(
- hf_hub_url(REPO_ID, scribble_checkpoint)
-), location='cpu'))
-scribble_model = scribble_model.cuda()
-ddim_sampler_scribble = DDIMSampler(scribble_model)
-save_memory = False
-
-def process(input_image, prompt, input_control, num_samples, image_resolution, ddim_steps, scale, seed, eta, low_threshold, high_threshold):
- # TODO: Clean Function for single Task
-
- if input_control == "Scribble":
- return process_scribble(input_image, prompt, num_samples, image_resolution, ddim_steps, scale, seed, eta)
-
-def process_scribble(input_image, prompt, num_samples, image_resolution, ddim_steps, scale, seed, eta):
-
- with torch.no_grad():
- img = resize_image(HWC3(input_image), image_resolution)
- H, W, C = img.shape
-
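-        # Binarize the sketch: any pixel darker than mid-gray becomes a control edge.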
- detected_map = np.zeros_like(img, dtype=np.uint8)
- detected_map[np.min(img, axis=2) < 127] = 255
-
- control = torch.from_numpy(detected_map.copy()).float().cuda() / 255.0
- control = torch.stack([control for _ in range(num_samples)], dim=0)
- control = einops.rearrange(control, 'b h w c -> b c h w').clone()
-
- seed_everything(seed)
-
- if save_memory:
- scribble_model.low_vram_shift(is_diffusing=False)
-
- cond = {"c_concat": [control], "c_crossattn": [scribble_model.get_learned_conditioning([prompt + ', ' + a_prompt] * num_samples)]}
- un_cond = {"c_concat": [control], "c_crossattn": [scribble_model.get_learned_conditioning([n_prompt] * num_samples)]}
- shape = (4, H // 8, W // 8)
-
- if save_memory:
- scribble_model.low_vram_shift(is_diffusing=False)
-
- samples, intermediates = ddim_sampler_scribble.sample(ddim_steps, num_samples,
- shape, cond, verbose=False, eta=eta,
- unconditional_guidance_scale=scale,
- unconditional_conditioning=un_cond)
-
- if save_memory:
- scribble_model.low_vram_shift(is_diffusing=False)
-
- x_samples = scribble_model.decode_first_stage(samples)
- x_samples = (einops.rearrange(x_samples, 'b c h w -> b h w c') * 127.5 + 127.5).cpu().numpy().clip(0, 255).astype(np.uint8)
-
- results = [x_samples[i] for i in range(num_samples)]
- return [255 - detected_map] + results
-
-
-def create_canvas(w, h):
- new_control_options = ["Interactive Scribble"]
- return np.zeros(shape=(h, w, 3), dtype=np.uint8) + 255
-
-
-block = gr.Blocks().queue()
-control_task_list = [
- "Scribble"
-]
-
-a_prompt = 'best quality, extremely detailed, architecture render, photorealistic, hyper realistic, surreal, dali, 3d rendering, render, 8k, 16k, extremely detailed, unreal engine, octane, maya'
-n_prompt = 'longbody, lowres, bad anatomy, bad hands, missing fingers, pubic hair, extra digit, number, text, watermark, fewer digits, cropped, worst quality, low quality'
-
-with block:
- gr.Markdown("## ControlNet - Architectural Sketch to Render Image")
- gr.HTML('''
-
-    Demo for ControlNet, optimized for architectural sketches, based on the lllyasviel ControlNet implementation.
-
- ''')
- gr.HTML('''
-
-    HF Space created by Thaweewat Rugsujarit. If you have any suggestions or feedback, please feel free to contact me via LinkedIn.
-
- ''')
- with gr.Row():
- with gr.Column():
- input_image = gr.Image(source='upload', type="numpy")
- input_control = gr.Dropdown(control_task_list, value="Scribble", label="Task")
- prompt = gr.Textbox(label="Architectural Style")
- run_button = gr.Button(label="Run")
-
- with gr.Accordion("Advanced options", open=False):
- num_samples = gr.Slider(label="Images", minimum=1, maximum=12, value=1, step=1)
- image_resolution = gr.Slider(label="Image Resolution", minimum=256, maximum=768, value=512, step=256)
- low_threshold = gr.Slider(label="Canny low threshold", minimum=1, maximum=255, value=100, step=1)
- high_threshold = gr.Slider(label="Canny high threshold", minimum=1, maximum=255, value=200, step=1)
- ddim_steps = gr.Slider(label="Steps", minimum=1, maximum=100, value=20, step=1)
- scale = gr.Slider(label="Guidance Scale", minimum=0.1, maximum=30.0, value=9.0, step=0.1)
- seed = gr.Slider(label="Seed", minimum=0, maximum=2147483647, step=1, randomize=True)
- eta = gr.Slider(label="eta (DDIM)", minimum=0.0,maximum =1.0, value=0.0, step=0.1)
-
- with gr.Column():
- result_gallery = gr.Gallery(label='Output', show_label=False, elem_id="gallery").style(grid=2, height='auto')
- ips = [input_image, prompt, input_control, num_samples, image_resolution, ddim_steps, scale, seed, eta, low_threshold, high_threshold]
- run_button.click(fn=process, inputs=ips, outputs=[result_gallery])
- gr.Markdown("")
-
-block.launch(debug = True)
\ No newline at end of file
diff --git a/spaces/VIOD/Real-CUGAN/upcunet_v3.py b/spaces/VIOD/Real-CUGAN/upcunet_v3.py
deleted file mode 100644
index f7919a6cc9efe3b8af73a73e30825a4c7d7d76da..0000000000000000000000000000000000000000
--- a/spaces/VIOD/Real-CUGAN/upcunet_v3.py
+++ /dev/null
@@ -1,714 +0,0 @@
-import torch
-from torch import nn as nn
-from torch.nn import functional as F
-import os, sys
-import numpy as np
-
-root_path = os.path.abspath('.')
-sys.path.append(root_path)
-
-
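-# Squeeze-and-Excitation block; forward_mean takes a precomputed channel mean so
-# tiled inference can share one global statistic across all crops (see UpCunet2x).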
-class SEBlock(nn.Module):
- def __init__(self, in_channels, reduction=8, bias=False):
- super(SEBlock, self).__init__()
- self.conv1 = nn.Conv2d(in_channels, in_channels // reduction, 1, 1, 0, bias=bias)
- self.conv2 = nn.Conv2d(in_channels // reduction, in_channels, 1, 1, 0, bias=bias)
-
- def forward(self, x):
- if ("Half" in x.type()): # torch.HalfTensor/torch.cuda.HalfTensor
- x0 = torch.mean(x.float(), dim=(2, 3), keepdim=True).half()
- else:
- x0 = torch.mean(x, dim=(2, 3), keepdim=True)
- x0 = self.conv1(x0)
- x0 = F.relu(x0, inplace=True)
- x0 = self.conv2(x0)
- x0 = torch.sigmoid(x0)
- x = torch.mul(x, x0)
- return x
-
- def forward_mean(self, x, x0):
- x0 = self.conv1(x0)
- x0 = F.relu(x0, inplace=True)
- x0 = self.conv2(x0)
- x0 = torch.sigmoid(x0)
- x = torch.mul(x, x0)
- return x
-
-
-class UNetConv(nn.Module):
- def __init__(self, in_channels, mid_channels, out_channels, se):
- super(UNetConv, self).__init__()
- self.conv = nn.Sequential(
- nn.Conv2d(in_channels, mid_channels, 3, 1, 0),
- nn.LeakyReLU(0.1, inplace=True),
- nn.Conv2d(mid_channels, out_channels, 3, 1, 0),
- nn.LeakyReLU(0.1, inplace=True),
- )
- if se:
- self.seblock = SEBlock(out_channels, reduction=8, bias=True)
- else:
- self.seblock = None
-
- def forward(self, x):
- z = self.conv(x)
- if self.seblock is not None:
- z = self.seblock(z)
- return z
-
-
-class UNet1(nn.Module):
- def __init__(self, in_channels, out_channels, deconv):
- super(UNet1, self).__init__()
- self.conv1 = UNetConv(in_channels, 32, 64, se=False)
- self.conv1_down = nn.Conv2d(64, 64, 2, 2, 0)
- self.conv2 = UNetConv(64, 128, 64, se=True)
- self.conv2_up = nn.ConvTranspose2d(64, 64, 2, 2, 0)
- self.conv3 = nn.Conv2d(64, 64, 3, 1, 0)
-
- if deconv:
- self.conv_bottom = nn.ConvTranspose2d(64, out_channels, 4, 2, 3)
- else:
- self.conv_bottom = nn.Conv2d(64, out_channels, 3, 1, 0)
-
- for m in self.modules():
- if isinstance(m, (nn.Conv2d, nn.ConvTranspose2d)):
- nn.init.kaiming_normal_(m.weight, mode='fan_out', nonlinearity='relu')
- elif isinstance(m, nn.Linear):
- nn.init.normal_(m.weight, 0, 0.01)
- if m.bias is not None:
- nn.init.constant_(m.bias, 0)
-
- def forward(self, x):
- x1 = self.conv1(x)
- x2 = self.conv1_down(x1)
- x2 = F.leaky_relu(x2, 0.1, inplace=True)
- x2 = self.conv2(x2)
- x2 = self.conv2_up(x2)
- x2 = F.leaky_relu(x2, 0.1, inplace=True)
-
- x1 = F.pad(x1, (-4, -4, -4, -4))
- x3 = self.conv3(x1 + x2)
- x3 = F.leaky_relu(x3, 0.1, inplace=True)
- z = self.conv_bottom(x3)
- return z
-
- def forward_a(self, x):
- x1 = self.conv1(x)
- x2 = self.conv1_down(x1)
- x2 = F.leaky_relu(x2, 0.1, inplace=True)
- x2 = self.conv2.conv(x2)
- return x1, x2
-
- def forward_b(self, x1, x2):
- x2 = self.conv2_up(x2)
- x2 = F.leaky_relu(x2, 0.1, inplace=True)
-
- x1 = F.pad(x1, (-4, -4, -4, -4))
- x3 = self.conv3(x1 + x2)
- x3 = F.leaky_relu(x3, 0.1, inplace=True)
- z = self.conv_bottom(x3)
- return z
-
-
-class UNet1x3(nn.Module):
- def __init__(self, in_channels, out_channels, deconv):
- super(UNet1x3, self).__init__()
- self.conv1 = UNetConv(in_channels, 32, 64, se=False)
- self.conv1_down = nn.Conv2d(64, 64, 2, 2, 0)
- self.conv2 = UNetConv(64, 128, 64, se=True)
- self.conv2_up = nn.ConvTranspose2d(64, 64, 2, 2, 0)
- self.conv3 = nn.Conv2d(64, 64, 3, 1, 0)
-
- if deconv:
- self.conv_bottom = nn.ConvTranspose2d(64, out_channels, 5, 3, 2)
- else:
- self.conv_bottom = nn.Conv2d(64, out_channels, 3, 1, 0)
-
- for m in self.modules():
- if isinstance(m, (nn.Conv2d, nn.ConvTranspose2d)):
- nn.init.kaiming_normal_(m.weight, mode='fan_out', nonlinearity='relu')
- elif isinstance(m, nn.Linear):
- nn.init.normal_(m.weight, 0, 0.01)
- if m.bias is not None:
- nn.init.constant_(m.bias, 0)
-
- def forward(self, x):
- x1 = self.conv1(x)
- x2 = self.conv1_down(x1)
- x2 = F.leaky_relu(x2, 0.1, inplace=True)
- x2 = self.conv2(x2)
- x2 = self.conv2_up(x2)
- x2 = F.leaky_relu(x2, 0.1, inplace=True)
-
- x1 = F.pad(x1, (-4, -4, -4, -4))
- x3 = self.conv3(x1 + x2)
- x3 = F.leaky_relu(x3, 0.1, inplace=True)
- z = self.conv_bottom(x3)
- return z
-
- def forward_a(self, x):
- x1 = self.conv1(x)
- x2 = self.conv1_down(x1)
- x2 = F.leaky_relu(x2, 0.1, inplace=True)
- x2 = self.conv2.conv(x2)
- return x1, x2
-
- def forward_b(self, x1, x2):
- x2 = self.conv2_up(x2)
- x2 = F.leaky_relu(x2, 0.1, inplace=True)
-
- x1 = F.pad(x1, (-4, -4, -4, -4))
- x3 = self.conv3(x1 + x2)
- x3 = F.leaky_relu(x3, 0.1, inplace=True)
- z = self.conv_bottom(x3)
- return z
-
-
-class UNet2(nn.Module):
- def __init__(self, in_channels, out_channels, deconv):
- super(UNet2, self).__init__()
-
- self.conv1 = UNetConv(in_channels, 32, 64, se=False)
- self.conv1_down = nn.Conv2d(64, 64, 2, 2, 0)
- self.conv2 = UNetConv(64, 64, 128, se=True)
- self.conv2_down = nn.Conv2d(128, 128, 2, 2, 0)
- self.conv3 = UNetConv(128, 256, 128, se=True)
- self.conv3_up = nn.ConvTranspose2d(128, 128, 2, 2, 0)
- self.conv4 = UNetConv(128, 64, 64, se=True)
- self.conv4_up = nn.ConvTranspose2d(64, 64, 2, 2, 0)
- self.conv5 = nn.Conv2d(64, 64, 3, 1, 0)
-
- if deconv:
- self.conv_bottom = nn.ConvTranspose2d(64, out_channels, 4, 2, 3)
- else:
- self.conv_bottom = nn.Conv2d(64, out_channels, 3, 1, 0)
-
- for m in self.modules():
- if isinstance(m, (nn.Conv2d, nn.ConvTranspose2d)):
- nn.init.kaiming_normal_(m.weight, mode='fan_out', nonlinearity='relu')
- elif isinstance(m, nn.Linear):
- nn.init.normal_(m.weight, 0, 0.01)
- if m.bias is not None:
- nn.init.constant_(m.bias, 0)
-
- def forward(self, x):
- x1 = self.conv1(x)
- x2 = self.conv1_down(x1)
- x2 = F.leaky_relu(x2, 0.1, inplace=True)
- x2 = self.conv2(x2)
-
- x3 = self.conv2_down(x2)
- x3 = F.leaky_relu(x3, 0.1, inplace=True)
- x3 = self.conv3(x3)
- x3 = self.conv3_up(x3)
- x3 = F.leaky_relu(x3, 0.1, inplace=True)
-
- x2 = F.pad(x2, (-4, -4, -4, -4))
- x4 = self.conv4(x2 + x3)
- x4 = self.conv4_up(x4)
- x4 = F.leaky_relu(x4, 0.1, inplace=True)
-
- x1 = F.pad(x1, (-16, -16, -16, -16))
- x5 = self.conv5(x1 + x4)
- x5 = F.leaky_relu(x5, 0.1, inplace=True)
-
- z = self.conv_bottom(x5)
- return z
-
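-    # forward_a/b/c/d split the forward pass at each SE block so tiled inference
-    # can pause, average the SE statistics over all tiles, and then resume.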
-    def forward_a(self, x):  # conv2/3/4 each end with an SE block
- x1 = self.conv1(x)
- x2 = self.conv1_down(x1)
- x2 = F.leaky_relu(x2, 0.1, inplace=True)
- x2 = self.conv2.conv(x2)
- return x1, x2
-
-    def forward_b(self, x2):  # conv2/3/4 each end with an SE block
- x3 = self.conv2_down(x2)
- x3 = F.leaky_relu(x3, 0.1, inplace=True)
- x3 = self.conv3.conv(x3)
- return x3
-
-    def forward_c(self, x2, x3):  # conv2/3/4 each end with an SE block
- x3 = self.conv3_up(x3)
- x3 = F.leaky_relu(x3, 0.1, inplace=True)
-
- x2 = F.pad(x2, (-4, -4, -4, -4))
- x4 = self.conv4.conv(x2 + x3)
- return x4
-
-    def forward_d(self, x1, x4):  # conv2/3/4 each end with an SE block
- x4 = self.conv4_up(x4)
- x4 = F.leaky_relu(x4, 0.1, inplace=True)
-
- x1 = F.pad(x1, (-16, -16, -16, -16))
- x5 = self.conv5(x1 + x4)
- x5 = F.leaky_relu(x5, 0.1, inplace=True)
-
- z = self.conv_bottom(x5)
- return z
-
-
-class UpCunet2x(nn.Module):  # seamless tiling, lossless end to end
- def __init__(self, in_channels=3, out_channels=3):
- super(UpCunet2x, self).__init__()
- self.unet1 = UNet1(in_channels, out_channels, deconv=True)
- self.unet2 = UNet2(in_channels, out_channels, deconv=False)
-
- def forward(self, x, tile_mode): # 1.7G
- n, c, h0, w0 = x.shape
-        if (tile_mode == 0):  # no tiling
-            ph = ((h0 - 1) // 2 + 1) * 2
-            pw = ((w0 - 1) // 2 + 1) * 2
-            x = F.pad(x, (18, 18 + pw - w0, 18, 18 + ph - h0), 'reflect')  # padded size must be divisible by 2
- x = self.unet1.forward(x)
- x0 = self.unet2.forward(x)
- x1 = F.pad(x, (-20, -20, -20, -20))
- x = torch.add(x0, x1)
- if (w0 != pw or h0 != ph): x = x[:, :, :h0 * 2, :w0 * 2]
- return x
-        elif (tile_mode == 1):  # halve the longer side
-            if (w0 >= h0):
-                crop_size_w = ((w0 - 1) // 4 * 4 + 4) // 2  # divisible by 2 after halving, so must first be divisible by 4
-                crop_size_h = (h0 - 1) // 2 * 2 + 2  # divisible by 2
-            else:
-                crop_size_h = ((h0 - 1) // 4 * 4 + 4) // 2  # divisible by 2 after halving, so must first be divisible by 4
-                crop_size_w = (w0 - 1) // 2 * 2 + 2  # divisible by 2
-            crop_size = (crop_size_h, crop_size_w)  # 6.6G
-        elif (tile_mode == 2):  # halve both h and w
-            crop_size = (((h0 - 1) // 4 * 4 + 4) // 2, ((w0 - 1) // 4 * 4 + 4) // 2)  # 5.6G
-        elif (tile_mode == 3):  # one third in both h and w
-            crop_size = (((h0 - 1) // 6 * 6 + 6) // 3, ((w0 - 1) // 6 * 6 + 6) // 3)  # 4.2G
-        elif (tile_mode == 4):  # one quarter in both h and w
- crop_size = (((h0 - 1) // 8 * 8 + 8) // 4, ((w0 - 1) // 8 * 8 + 8) // 4) # 3.7G
- ph = ((h0 - 1) // crop_size[0] + 1) * crop_size[0]
- pw = ((w0 - 1) // crop_size[1] + 1) * crop_size[1]
- x = F.pad(x, (18, 18 + pw - w0, 18, 18 + ph - h0), 'reflect')
- n, c, h, w = x.shape
- se_mean0 = torch.zeros((n, 64, 1, 1)).to(x.device)
- if ("Half" in x.type()):
- se_mean0 = se_mean0.half()
- n_patch = 0
- tmp_dict = {}
- opt_res_dict = {}
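-        # Tiled inference: run every crop up to each SE block, average the SE
-        # statistics across tiles, then resume with the shared mean so the
-        # result matches processing the whole image in one pass.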
- for i in range(0, h - 36, crop_size[0]):
- tmp_dict[i] = {}
- for j in range(0, w - 36, crop_size[1]):
- x_crop = x[:, :, i:i + crop_size[0] + 36, j:j + crop_size[1] + 36]
- n, c1, h1, w1 = x_crop.shape
- tmp0, x_crop = self.unet1.forward_a(x_crop)
- if ("Half" in x.type()): # torch.HalfTensor/torch.cuda.HalfTensor
- tmp_se_mean = torch.mean(x_crop.float(), dim=(2, 3), keepdim=True).half()
- else:
- tmp_se_mean = torch.mean(x_crop, dim=(2, 3), keepdim=True)
- se_mean0 += tmp_se_mean
- n_patch += 1
- tmp_dict[i][j] = (tmp0, x_crop)
- se_mean0 /= n_patch
-        se_mean1 = torch.zeros((n, 128, 1, 1)).to(x.device)  # SE channel widths per stage: 64, 128, 128, 64
- if ("Half" in x.type()):
- se_mean1 = se_mean1.half()
- for i in range(0, h - 36, crop_size[0]):
- for j in range(0, w - 36, crop_size[1]):
- tmp0, x_crop = tmp_dict[i][j]
- x_crop = self.unet1.conv2.seblock.forward_mean(x_crop, se_mean0)
- opt_unet1 = self.unet1.forward_b(tmp0, x_crop)
- tmp_x1, tmp_x2 = self.unet2.forward_a(opt_unet1)
- if ("Half" in x.type()): # torch.HalfTensor/torch.cuda.HalfTensor
- tmp_se_mean = torch.mean(tmp_x2.float(), dim=(2, 3), keepdim=True).half()
- else:
- tmp_se_mean = torch.mean(tmp_x2, dim=(2, 3), keepdim=True)
- se_mean1 += tmp_se_mean
- tmp_dict[i][j] = (opt_unet1, tmp_x1, tmp_x2)
- se_mean1 /= n_patch
-        se_mean0 = torch.zeros((n, 128, 1, 1)).to(x.device)  # SE channel widths per stage: 64, 128, 128, 64
- if ("Half" in x.type()):
- se_mean0 = se_mean0.half()
- for i in range(0, h - 36, crop_size[0]):
- for j in range(0, w - 36, crop_size[1]):
- opt_unet1, tmp_x1, tmp_x2 = tmp_dict[i][j]
- tmp_x2 = self.unet2.conv2.seblock.forward_mean(tmp_x2, se_mean1)
- tmp_x3 = self.unet2.forward_b(tmp_x2)
- if ("Half" in x.type()): # torch.HalfTensor/torch.cuda.HalfTensor
- tmp_se_mean = torch.mean(tmp_x3.float(), dim=(2, 3), keepdim=True).half()
- else:
- tmp_se_mean = torch.mean(tmp_x3, dim=(2, 3), keepdim=True)
- se_mean0 += tmp_se_mean
- tmp_dict[i][j] = (opt_unet1, tmp_x1, tmp_x2, tmp_x3)
- se_mean0 /= n_patch
-        se_mean1 = torch.zeros((n, 64, 1, 1)).to(x.device)  # SE channel widths per stage: 64, 128, 128, 64
- if ("Half" in x.type()):
- se_mean1 = se_mean1.half()
- for i in range(0, h - 36, crop_size[0]):
- for j in range(0, w - 36, crop_size[1]):
- opt_unet1, tmp_x1, tmp_x2, tmp_x3 = tmp_dict[i][j]
- tmp_x3 = self.unet2.conv3.seblock.forward_mean(tmp_x3, se_mean0)
- tmp_x4 = self.unet2.forward_c(tmp_x2, tmp_x3)
- if ("Half" in x.type()): # torch.HalfTensor/torch.cuda.HalfTensor
- tmp_se_mean = torch.mean(tmp_x4.float(), dim=(2, 3), keepdim=True).half()
- else:
- tmp_se_mean = torch.mean(tmp_x4, dim=(2, 3), keepdim=True)
- se_mean1 += tmp_se_mean
- tmp_dict[i][j] = (opt_unet1, tmp_x1, tmp_x4)
- se_mean1 /= n_patch
- for i in range(0, h - 36, crop_size[0]):
- opt_res_dict[i] = {}
- for j in range(0, w - 36, crop_size[1]):
- opt_unet1, tmp_x1, tmp_x4 = tmp_dict[i][j]
- tmp_x4 = self.unet2.conv4.seblock.forward_mean(tmp_x4, se_mean1)
- x0 = self.unet2.forward_d(tmp_x1, tmp_x4)
- x1 = F.pad(opt_unet1, (-20, -20, -20, -20))
-                x_crop = torch.add(x0, x1)  # x0 is the final output of unet2
- opt_res_dict[i][j] = x_crop
- del tmp_dict
- torch.cuda.empty_cache()
- res = torch.zeros((n, c, h * 2 - 72, w * 2 - 72)).to(x.device)
- if ("Half" in x.type()):
- res = res.half()
- for i in range(0, h - 36, crop_size[0]):
- for j in range(0, w - 36, crop_size[1]):
- res[:, :, i * 2:i * 2 + h1 * 2 - 72, j * 2:j * 2 + w1 * 2 - 72] = opt_res_dict[i][j]
- del opt_res_dict
- torch.cuda.empty_cache()
- if (w0 != pw or h0 != ph): res = res[:, :, :h0 * 2, :w0 * 2]
-        return res
-
-
-class UpCunet3x(nn.Module):  # seamless tiling, lossless end to end
- def __init__(self, in_channels=3, out_channels=3):
- super(UpCunet3x, self).__init__()
- self.unet1 = UNet1x3(in_channels, out_channels, deconv=True)
- self.unet2 = UNet2(in_channels, out_channels, deconv=False)
-
- def forward(self, x, tile_mode): # 1.7G
- n, c, h0, w0 = x.shape
-        if (tile_mode == 0):  # no tiling
-            ph = ((h0 - 1) // 4 + 1) * 4
-            pw = ((w0 - 1) // 4 + 1) * 4
-            x = F.pad(x, (14, 14 + pw - w0, 14, 14 + ph - h0), 'reflect')  # padded size must be divisible by 4
- x = self.unet1.forward(x)
- x0 = self.unet2.forward(x)
- x1 = F.pad(x, (-20, -20, -20, -20))
- x = torch.add(x0, x1)
- if (w0 != pw or h0 != ph): x = x[:, :, :h0 * 3, :w0 * 3]
- return x
- elif (tile_mode == 1): # halve the longer side
- if (w0 >= h0):
- crop_size_w = ((w0 - 1) // 8 * 8 + 8) // 2 # must be divisible by 4 after halving, so round up to a multiple of 8 first
- crop_size_h = (h0 - 1) // 4 * 4 + 4 # divisible by 4
- else:
- crop_size_h = ((h0 - 1) // 8 * 8 + 8) // 2 # must be divisible by 4 after halving, so round up to a multiple of 8 first
- crop_size_w = (w0 - 1) // 4 * 4 + 4 # divisible by 4
- crop_size = (crop_size_h, crop_size_w) # ~6.6 GB
- elif (tile_mode == 2): # halve both h and w
- crop_size = (((h0 - 1) // 8 * 8 + 8) // 2, ((w0 - 1) // 8 * 8 + 8) // 2) # ~5.6 GB
- elif (tile_mode == 3): # h and w to one third
- crop_size = (((h0 - 1) // 12 * 12 + 12) // 3, ((w0 - 1) // 12 * 12 + 12) // 3) # ~4.2 GB
- elif (tile_mode == 4): # h and w to one quarter
- crop_size = (((h0 - 1) // 16 * 16 + 16) // 4, ((w0 - 1) // 16 * 16 + 16) // 4) # ~3.7 GB
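- # Worked example of the rounding above: ((x - 1) // k * k + k) rounds x up to the
- # nearest multiple of k, e.g. h0 = 201 with k = 12 gives 204, so 204 // 3 = 68 keeps
- # the tile size divisible as required.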
- ph = ((h0 - 1) // crop_size[0] + 1) * crop_size[0]
- pw = ((w0 - 1) // crop_size[1] + 1) * crop_size[1]
- x = F.pad(x, (14, 14 + pw - w0, 14, 14 + ph - h0), 'reflect')
- n, c, h, w = x.shape
- se_mean0 = torch.zeros((n, 64, 1, 1)).to(x.device)
- if ("Half" in x.type()):
- se_mean0 = se_mean0.half()
- n_patch = 0
- tmp_dict = {}
- opt_res_dict = {}
- for i in range(0, h - 28, crop_size[0]):
- tmp_dict[i] = {}
- for j in range(0, w - 28, crop_size[1]):
- x_crop = x[:, :, i:i + crop_size[0] + 28, j:j + crop_size[1] + 28]
- n, c1, h1, w1 = x_crop.shape
- tmp0, x_crop = self.unet1.forward_a(x_crop)
- if ("Half" in x.type()): # torch.HalfTensor/torch.cuda.HalfTensor
- tmp_se_mean = torch.mean(x_crop.float(), dim=(2, 3), keepdim=True).half()
- else:
- tmp_se_mean = torch.mean(x_crop, dim=(2, 3), keepdim=True)
- se_mean0 += tmp_se_mean
- n_patch += 1
- tmp_dict[i][j] = (tmp0, x_crop)
- se_mean0 /= n_patch
- se_mean1 = torch.zeros((n, 128, 1, 1)).to(x.device) # stage channel widths: 64/128/128/64
- if ("Half" in x.type()):
- se_mean1 = se_mean1.half()
- for i in range(0, h - 28, crop_size[0]):
- for j in range(0, w - 28, crop_size[1]):
- tmp0, x_crop = tmp_dict[i][j]
- x_crop = self.unet1.conv2.seblock.forward_mean(x_crop, se_mean0)
- opt_unet1 = self.unet1.forward_b(tmp0, x_crop)
- tmp_x1, tmp_x2 = self.unet2.forward_a(opt_unet1)
- if ("Half" in x.type()): # torch.HalfTensor/torch.cuda.HalfTensor
- tmp_se_mean = torch.mean(tmp_x2.float(), dim=(2, 3), keepdim=True).half()
- else:
- tmp_se_mean = torch.mean(tmp_x2, dim=(2, 3), keepdim=True)
- se_mean1 += tmp_se_mean
- tmp_dict[i][j] = (opt_unet1, tmp_x1, tmp_x2)
- se_mean1 /= n_patch
- se_mean0 = torch.zeros((n, 128, 1, 1)).to(x.device) # stage channel widths: 64/128/128/64
- if ("Half" in x.type()):
- se_mean0 = se_mean0.half()
- for i in range(0, h - 28, crop_size[0]):
- for j in range(0, w - 28, crop_size[1]):
- opt_unet1, tmp_x1, tmp_x2 = tmp_dict[i][j]
- tmp_x2 = self.unet2.conv2.seblock.forward_mean(tmp_x2, se_mean1)
- tmp_x3 = self.unet2.forward_b(tmp_x2)
- if ("Half" in x.type()): # torch.HalfTensor/torch.cuda.HalfTensor
- tmp_se_mean = torch.mean(tmp_x3.float(), dim=(2, 3), keepdim=True).half()
- else:
- tmp_se_mean = torch.mean(tmp_x3, dim=(2, 3), keepdim=True)
- se_mean0 += tmp_se_mean
- tmp_dict[i][j] = (opt_unet1, tmp_x1, tmp_x2, tmp_x3)
- se_mean0 /= n_patch
- se_mean1 = torch.zeros((n, 64, 1, 1)).to(x.device) # stage channel widths: 64/128/128/64
- if ("Half" in x.type()):
- se_mean1 = se_mean1.half()
- for i in range(0, h - 28, crop_size[0]):
- for j in range(0, w - 28, crop_size[1]):
- opt_unet1, tmp_x1, tmp_x2, tmp_x3 = tmp_dict[i][j]
- tmp_x3 = self.unet2.conv3.seblock.forward_mean(tmp_x3, se_mean0)
- tmp_x4 = self.unet2.forward_c(tmp_x2, tmp_x3)
- if ("Half" in x.type()): # torch.HalfTensor/torch.cuda.HalfTensor
- tmp_se_mean = torch.mean(tmp_x4.float(), dim=(2, 3), keepdim=True).half()
- else:
- tmp_se_mean = torch.mean(tmp_x4, dim=(2, 3), keepdim=True)
- se_mean1 += tmp_se_mean
- tmp_dict[i][j] = (opt_unet1, tmp_x1, tmp_x4)
- se_mean1 /= n_patch
- for i in range(0, h - 28, crop_size[0]):
- opt_res_dict[i] = {}
- for j in range(0, w - 28, crop_size[1]):
- opt_unet1, tmp_x1, tmp_x4 = tmp_dict[i][j]
- tmp_x4 = self.unet2.conv4.seblock.forward_mean(tmp_x4, se_mean1)
- x0 = self.unet2.forward_d(tmp_x1, tmp_x4)
- x1 = F.pad(opt_unet1, (-20, -20, -20, -20))
- x_crop = torch.add(x0, x1) # x0 is the final output of unet2
- opt_res_dict[i][j] = x_crop
- del tmp_dict
- torch.cuda.empty_cache()
- res = torch.zeros((n, c, h * 3 - 84, w * 3 - 84)).to(x.device)
- if ("Half" in x.type()):
- res = res.half()
- for i in range(0, h - 28, crop_size[0]):
- for j in range(0, w - 28, crop_size[1]):
- res[:, :, i * 3:i * 3 + h1 * 3 - 84, j * 3:j * 3 + w1 * 3 - 84] = opt_res_dict[i][j]
- del opt_res_dict
- torch.cuda.empty_cache()
- if (w0 != pw or h0 != ph): res = res[:, :, :h0 * 3, :w0 * 3]
- return res
-
-
-class UpCunet4x(nn.Module): # perfect tiling, lossless end to end
- def __init__(self, in_channels=3, out_channels=3):
- super(UpCunet4x, self).__init__()
- self.unet1 = UNet1(in_channels, 64, deconv=True)
- self.unet2 = UNet2(64, 64, deconv=False)
- self.ps = nn.PixelShuffle(2)
- self.conv_final = nn.Conv2d(64, 12, 3, 1, padding=0, bias=True)
-
- def forward(self, x, tile_mode):
- n, c, h0, w0 = x.shape
- x00 = x
- if (tile_mode == 0): # no tiling
- ph = ((h0 - 1) // 2 + 1) * 2
- pw = ((w0 - 1) // 2 + 1) * 2
- x = F.pad(x, (19, 19 + pw - w0, 19, 19 + ph - h0), 'reflect') # pad to a multiple of 2
- x = self.unet1.forward(x)
- x0 = self.unet2.forward(x)
- x1 = F.pad(x, (-20, -20, -20, -20))
- x = torch.add(x0, x1)
- x = self.conv_final(x)
- x = F.pad(x, (-1, -1, -1, -1))
- x = self.ps(x)
- if (w0 != pw or h0 != ph): x = x[:, :, :h0 * 4, :w0 * 4]
- x += F.interpolate(x00, scale_factor=4, mode='nearest')
- return x
- elif (tile_mode == 1): # halve the longer side
- if (w0 >= h0):
- crop_size_w = ((w0 - 1) // 4 * 4 + 4) // 2 # must be divisible by 2 after halving, so round up to a multiple of 4 first
- crop_size_h = (h0 - 1) // 2 * 2 + 2 # divisible by 2
- else:
- crop_size_h = ((h0 - 1) // 4 * 4 + 4) // 2 # must be divisible by 2 after halving, so round up to a multiple of 4 first
- crop_size_w = (w0 - 1) // 2 * 2 + 2 # divisible by 2
- crop_size = (crop_size_h, crop_size_w) # ~6.6 GB
- elif (tile_mode == 2): # halve both h and w
- crop_size = (((h0 - 1) // 4 * 4 + 4) // 2, ((w0 - 1) // 4 * 4 + 4) // 2) # ~5.6 GB
- elif (tile_mode == 3): # h and w to one third
- crop_size = (((h0 - 1) // 6 * 6 + 6) // 3, ((w0 - 1) // 6 * 6 + 6) // 3) # ~4.1 GB
- elif (tile_mode == 4): # h and w to one quarter
- crop_size = (((h0 - 1) // 8 * 8 + 8) // 4, ((w0 - 1) // 8 * 8 + 8) // 4) # ~3.7 GB
- ph = ((h0 - 1) // crop_size[0] + 1) * crop_size[0]
- pw = ((w0 - 1) // crop_size[1] + 1) * crop_size[1]
- x = F.pad(x, (19, 19 + pw - w0, 19, 19 + ph - h0), 'reflect')
- n, c, h, w = x.shape
- se_mean0 = torch.zeros((n, 64, 1, 1)).to(x.device)
- if ("Half" in x.type()):
- se_mean0 = se_mean0.half()
- n_patch = 0
- tmp_dict = {}
- opt_res_dict = {}
- for i in range(0, h - 38, crop_size[0]):
- tmp_dict[i] = {}
- for j in range(0, w - 38, crop_size[1]):
- x_crop = x[:, :, i:i + crop_size[0] + 38, j:j + crop_size[1] + 38]
- n, c1, h1, w1 = x_crop.shape
- tmp0, x_crop = self.unet1.forward_a(x_crop)
- if ("Half" in x.type()): # torch.HalfTensor/torch.cuda.HalfTensor
- tmp_se_mean = torch.mean(x_crop.float(), dim=(2, 3), keepdim=True).half()
- else:
- tmp_se_mean = torch.mean(x_crop, dim=(2, 3), keepdim=True)
- se_mean0 += tmp_se_mean
- n_patch += 1
- tmp_dict[i][j] = (tmp0, x_crop)
- se_mean0 /= n_patch
- se_mean1 = torch.zeros((n, 128, 1, 1)).to(x.device) # stage channel widths: 64/128/128/64
- if ("Half" in x.type()):
- se_mean1 = se_mean1.half()
- for i in range(0, h - 38, crop_size[0]):
- for j in range(0, w - 38, crop_size[1]):
- tmp0, x_crop = tmp_dict[i][j]
- x_crop = self.unet1.conv2.seblock.forward_mean(x_crop, se_mean0)
- opt_unet1 = self.unet1.forward_b(tmp0, x_crop)
- tmp_x1, tmp_x2 = self.unet2.forward_a(opt_unet1)
- if ("Half" in x.type()): # torch.HalfTensor/torch.cuda.HalfTensor
- tmp_se_mean = torch.mean(tmp_x2.float(), dim=(2, 3), keepdim=True).half()
- else:
- tmp_se_mean = torch.mean(tmp_x2, dim=(2, 3), keepdim=True)
- se_mean1 += tmp_se_mean
- tmp_dict[i][j] = (opt_unet1, tmp_x1, tmp_x2)
- se_mean1 /= n_patch
- se_mean0 = torch.zeros((n, 128, 1, 1)).to(x.device) # stage channel widths: 64/128/128/64
- if ("Half" in x.type()):
- se_mean0 = se_mean0.half()
- for i in range(0, h - 38, crop_size[0]):
- for j in range(0, w - 38, crop_size[1]):
- opt_unet1, tmp_x1, tmp_x2 = tmp_dict[i][j]
- tmp_x2 = self.unet2.conv2.seblock.forward_mean(tmp_x2, se_mean1)
- tmp_x3 = self.unet2.forward_b(tmp_x2)
- if ("Half" in x.type()): # torch.HalfTensor/torch.cuda.HalfTensor
- tmp_se_mean = torch.mean(tmp_x3.float(), dim=(2, 3), keepdim=True).half()
- else:
- tmp_se_mean = torch.mean(tmp_x3, dim=(2, 3), keepdim=True)
- se_mean0 += tmp_se_mean
- tmp_dict[i][j] = (opt_unet1, tmp_x1, tmp_x2, tmp_x3)
- se_mean0 /= n_patch
- se_mean1 = torch.zeros((n, 64, 1, 1)).to(x.device) # stage channel widths: 64/128/128/64
- if ("Half" in x.type()):
- se_mean1 = se_mean1.half()
- for i in range(0, h - 38, crop_size[0]):
- for j in range(0, w - 38, crop_size[1]):
- opt_unet1, tmp_x1, tmp_x2, tmp_x3 = tmp_dict[i][j]
- tmp_x3 = self.unet2.conv3.seblock.forward_mean(tmp_x3, se_mean0)
- tmp_x4 = self.unet2.forward_c(tmp_x2, tmp_x3)
- if ("Half" in x.type()): # torch.HalfTensor/torch.cuda.HalfTensor
- tmp_se_mean = torch.mean(tmp_x4.float(), dim=(2, 3), keepdim=True).half()
- else:
- tmp_se_mean = torch.mean(tmp_x4, dim=(2, 3), keepdim=True)
- se_mean1 += tmp_se_mean
- tmp_dict[i][j] = (opt_unet1, tmp_x1, tmp_x4)
- se_mean1 /= n_patch
- for i in range(0, h - 38, crop_size[0]):
- opt_res_dict[i] = {}
- for j in range(0, w - 38, crop_size[1]):
- opt_unet1, tmp_x1, tmp_x4 = tmp_dict[i][j]
- tmp_x4 = self.unet2.conv4.seblock.forward_mean(tmp_x4, se_mean1)
- x0 = self.unet2.forward_d(tmp_x1, tmp_x4)
- x1 = F.pad(opt_unet1, (-20, -20, -20, -20))
- x_crop = torch.add(x0, x1) # x0 is the final output of unet2
- x_crop = self.conv_final(x_crop)
- x_crop = F.pad(x_crop, (-1, -1, -1, -1))
- x_crop = self.ps(x_crop)
- opt_res_dict[i][j] = x_crop
- del tmp_dict
- torch.cuda.empty_cache()
- res = torch.zeros((n, c, h * 4 - 152, w * 4 - 152)).to(x.device)
- if ("Half" in x.type()):
- res = res.half()
- for i in range(0, h - 38, crop_size[0]):
- for j in range(0, w - 38, crop_size[1]):
- # print(opt_res_dict[i][j].shape,res[:, :, i * 4:i * 4 + h1 * 4 - 144, j * 4:j * 4 + w1 * 4 - 144].shape)
- res[:, :, i * 4:i * 4 + h1 * 4 - 152, j * 4:j * 4 + w1 * 4 - 152] = opt_res_dict[i][j]
- del opt_res_dict
- torch.cuda.empty_cache()
- if (w0 != pw or h0 != ph): res = res[:, :, :h0 * 4, :w0 * 4]
- res += F.interpolate(x00, scale_factor=4, mode='nearest') # the network predicts a residual over nearest-neighbor 4x upsampling
- return res
-
-
-class RealWaifuUpScaler(object):
- def __init__(self, scale, weight_path, half, device):
- weight = torch.load(weight_path, map_location="cpu")
- self.model = eval("UpCunet%sx" % scale)() # picks UpCunet2x/3x/4x by scale
- if half:
- self.model = self.model.half().to(device)
- else:
- self.model = self.model.to(device)
- self.model.load_state_dict(weight, strict=True)
- self.model.eval()
- self.half = half
- self.device = device
-
- def np2tensor(self, np_frame):
- if not self.half:
- return torch.from_numpy(np.transpose(np_frame, (2, 0, 1))).unsqueeze(0).to(self.device).float() / 255
- else:
- return torch.from_numpy(np.transpose(np_frame, (2, 0, 1))).unsqueeze(0).to(self.device).half() / 255
-
- def tensor2np(self, tensor):
- if not self.half:
- return (
- np.transpose((tensor.data.squeeze() * 255.0).round().clamp_(0, 255).byte().cpu().numpy(), (1, 2, 0)))
- else:
- return (np.transpose((tensor.data.squeeze().float() * 255.0).round().clamp_(0, 255).byte().cpu().numpy(),
- (1, 2, 0)))
-
- def __call__(self, frame, tile_mode):
- with torch.no_grad():
- tensor = self.np2tensor(frame)
- result = self.tensor2np(self.model(tensor, tile_mode))
- return result
-
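-# Minimal usage sketch (the weight path and device mirror the __main__ block below;
-# the channel flips assume BGR images from cv2):
-# upscaler = RealWaifuUpScaler(2, "weights_v3/up2x-latest-denoise3x.pth", half=True, device="cuda:0")
-# out_bgr = upscaler(cv2.imread("in.png")[:, :, ::-1], tile_mode=2)[:, :, ::-1]
-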
-
-if __name__ == "__main__":
- ###########inference_img
- import time, cv2, sys
- from time import time as ttime
-
- for weight_path, scale in [("weights_v3/up2x-latest-denoise3x.pth", 2), ("weights_v3/up3x-latest-denoise3x.pth", 3),
- ("weights_v3/up4x-latest-denoise3x.pth", 4)]:
- for tile_mode in [0, 1, 2, 3, 4]:
- upscaler = RealWaifuUpScaler(scale, weight_path, half=True, device="cuda:0")
- input_dir = "%s/input_dir1" % root_path # root_path is assumed to be defined near the top of this module
- output_dir = "%s/opt-dir-all-test" % root_path
- os.makedirs(output_dir, exist_ok=True)
- for name in os.listdir(input_dir):
- print(name)
- tmp = name.split(".")
- inp_path = os.path.join(input_dir, name)
- suffix = tmp[-1]
- prefix = ".".join(tmp[:-1])
- tmp_path = os.path.join(root_path, "tmp", "%s.%s" % (int(time.time() * 1000000), suffix))
- print(inp_path, tmp_path)
- # handle non-ASCII (e.g. Chinese) paths via a temporary link
- # os.link(inp_path, tmp_path) # on Windows use a hard link
- os.symlink(inp_path, tmp_path) # on Linux use a symlink
- frame = cv2.imread(tmp_path)[:, :, [2, 1, 0]]
- t0 = ttime()
- result = upscaler(frame, tile_mode=tile_mode)[:, :, ::-1]
- t1 = ttime()
- print(prefix, "done", t1 - t0)
- tmp_opt_path = os.path.join(root_path, "tmp", "%s.%s" % (int(time.time() * 1000000), suffix))
- cv2.imwrite(tmp_opt_path, result)
- n = 0
- while True:
- if (n == 0):
- suffix = "_%sx_tile%s.png" % (scale, tile_mode)
- else:
- suffix = "_%sx_tile%s_%s.png" % (scale, tile_mode, n)
- if not os.path.exists(os.path.join(output_dir, prefix + suffix)):
- break
- else:
- n += 1
- final_opt_path = os.path.join(output_dir, prefix + suffix)
- os.rename(tmp_opt_path, final_opt_path)
- os.remove(tmp_path)
diff --git a/spaces/Volkopat/SegmentAnythingxGroundingDINO/groundingdino/models/GroundingDINO/csrc/MsDeformAttn/ms_deform_attn_cpu.h b/spaces/Volkopat/SegmentAnythingxGroundingDINO/groundingdino/models/GroundingDINO/csrc/MsDeformAttn/ms_deform_attn_cpu.h
deleted file mode 100644
index b2b88e8c46f19b6db0933163e57ccdb51180f517..0000000000000000000000000000000000000000
--- a/spaces/Volkopat/SegmentAnythingxGroundingDINO/groundingdino/models/GroundingDINO/csrc/MsDeformAttn/ms_deform_attn_cpu.h
+++ /dev/null
@@ -1,35 +0,0 @@
-/*!
-**************************************************************************************************
-* Deformable DETR
-* Copyright (c) 2020 SenseTime. All Rights Reserved.
-* Licensed under the Apache License, Version 2.0 [see LICENSE for details]
-**************************************************************************************************
-* Modified from https://github.com/chengdazhi/Deformable-Convolution-V2-PyTorch/tree/pytorch_1.0.0
-**************************************************************************************************
-*/
-
-#pragma once
-#include <torch/extension.h>
-
-namespace groundingdino {
-
-at::Tensor
-ms_deform_attn_cpu_forward(
- const at::Tensor &value,
- const at::Tensor &spatial_shapes,
- const at::Tensor &level_start_index,
- const at::Tensor &sampling_loc,
- const at::Tensor &attn_weight,
- const int im2col_step);
-
-std::vector<at::Tensor>
-ms_deform_attn_cpu_backward(
- const at::Tensor &value,
- const at::Tensor &spatial_shapes,
- const at::Tensor &level_start_index,
- const at::Tensor &sampling_loc,
- const at::Tensor &attn_weight,
- const at::Tensor &grad_output,
- const int im2col_step);
-
-} // namespace groundingdino
diff --git a/spaces/XS-1/BW_IMAGE_VIDEO_COLORIZER/fastai/utils/show_install.py b/spaces/XS-1/BW_IMAGE_VIDEO_COLORIZER/fastai/utils/show_install.py
deleted file mode 100644
index b9e6cc3be84ed684ec6984b1a7cfe7b673a72c8d..0000000000000000000000000000000000000000
--- a/spaces/XS-1/BW_IMAGE_VIDEO_COLORIZER/fastai/utils/show_install.py
+++ /dev/null
@@ -1,8 +0,0 @@
-from ..script import *
-from .collect_env import *
-
-# Temporary POC for module-based script
-@call_parse
-def main(show_nvidia_smi:Param(opt=False, nargs='?', type=bool)=False):
- return show_install(show_nvidia_smi)
-
diff --git a/spaces/Yan233th/so-vits-svc-models/vdecoder/hifigan/utils.py b/spaces/Yan233th/so-vits-svc-models/vdecoder/hifigan/utils.py
deleted file mode 100644
index 9c93c996d3cc73c30d71c1fc47056e4230f35c0f..0000000000000000000000000000000000000000
--- a/spaces/Yan233th/so-vits-svc-models/vdecoder/hifigan/utils.py
+++ /dev/null
@@ -1,68 +0,0 @@
-import glob
-import os
-import matplotlib
-import torch
-from torch.nn.utils import weight_norm
-# matplotlib.use("Agg")
-import matplotlib.pylab as plt
-
-
-def plot_spectrogram(spectrogram):
- fig, ax = plt.subplots(figsize=(10, 2))
- im = ax.imshow(spectrogram, aspect="auto", origin="lower",
- interpolation='none')
- plt.colorbar(im, ax=ax)
-
- fig.canvas.draw()
- plt.close()
-
- return fig
-
-
-def init_weights(m, mean=0.0, std=0.01):
- classname = m.__class__.__name__
- if classname.find("Conv") != -1:
- m.weight.data.normal_(mean, std)
-
-
-def apply_weight_norm(m):
- classname = m.__class__.__name__
- if classname.find("Conv") != -1:
- weight_norm(m)
-
-
-def get_padding(kernel_size, dilation=1):
- return int((kernel_size * dilation - dilation) / 2) # "same" padding for a dilated conv: (kernel_size - 1) * dilation / 2
-
-
-def load_checkpoint(filepath, device):
- assert os.path.isfile(filepath)
- print("Loading '{}'".format(filepath))
- checkpoint_dict = torch.load(filepath, map_location=device)
- print("Complete.")
- return checkpoint_dict
-
-
-def save_checkpoint(filepath, obj):
- print("Saving checkpoint to {}".format(filepath))
- torch.save(obj, filepath)
- print("Complete.")
-
-
-def del_old_checkpoints(cp_dir, prefix, n_models=2):
- pattern = os.path.join(cp_dir, prefix + '????????')
- cp_list = glob.glob(pattern) # get checkpoint paths
- cp_list = sorted(cp_list) # sort by iteration
- if len(cp_list) > n_models: # if more than n_models checkpoints are found
- for cp in cp_list[:-n_models]: # delete the oldest checkpoints, keeping the latest n_models
- open(cp, 'w').close() # empty the file contents first
- os.unlink(cp) # delete the file (moves to trash when using Colab)
-
-
-def scan_checkpoint(cp_dir, prefix):
- pattern = os.path.join(cp_dir, prefix + '????????')
- cp_list = glob.glob(pattern)
- if len(cp_list) == 0:
- return None
- return sorted(cp_list)[-1]
-
diff --git a/spaces/Yuliang/ECON/lib/pymafx/models/maf_extractor.py b/spaces/Yuliang/ECON/lib/pymafx/models/maf_extractor.py
deleted file mode 100644
index ffe4e73427e30848798df2f57e835a8b10ae2934..0000000000000000000000000000000000000000
--- a/spaces/Yuliang/ECON/lib/pymafx/models/maf_extractor.py
+++ /dev/null
@@ -1,272 +0,0 @@
-# This script is borrowed and extended from https://github.com/shunsukesaito/PIFu/blob/master/lib/model/SurfaceClassifier.py
-
-import logging
-
-import numpy as np
-import scipy
-import torch
-import torch.nn as nn
-import torch.nn.functional as F
-
-from lib.pymafx.core import path_config
-from lib.pymafx.utils.geometry import projection
-
-logger = logging.getLogger(__name__)
-
-from lib.pymafx.utils.imutils import j2d_processing
-
-from .transformers.net_utils import PosEnSine
-from .transformers.transformer_basics import OurMultiheadAttention
-
-
-class TransformerDecoderUnit(nn.Module):
- def __init__(
- self, feat_dim, attri_dim=0, n_head=8, pos_en_flag=True, attn_type='softmax', P=None
- ):
- super(TransformerDecoderUnit, self).__init__()
- self.feat_dim = feat_dim
- self.attn_type = attn_type
- self.pos_en_flag = pos_en_flag
- self.P = P
-
- assert attri_dim == 0
- if self.pos_en_flag:
- pe_dim = 10
- self.pos_en = PosEnSine(pe_dim)
- else:
- pe_dim = 0
- self.attn = OurMultiheadAttention(
- feat_dim + attri_dim + pe_dim * 3, feat_dim + pe_dim * 3, feat_dim, n_head
- ) # cross-attention
-
- self.linear1 = nn.Conv2d(self.feat_dim, self.feat_dim, 1)
- self.linear2 = nn.Conv2d(self.feat_dim, self.feat_dim, 1)
- self.activation = nn.ReLU(inplace=True)
-
- self.norm = nn.BatchNorm2d(self.feat_dim)
-
- def forward(self, q, k, v, pos=None):
- if self.pos_en_flag:
- q_pos_embed = self.pos_en(q, pos)
- k_pos_embed = self.pos_en(k)
-
- q = torch.cat([q, q_pos_embed], dim=1)
- k = torch.cat([k, k_pos_embed], dim=1)
- # else:
- # q_pos_embed = 0
- # k_pos_embed = 0
-
- # cross-multi-head attention
- out = self.attn(q=q, k=k, v=v, attn_type=self.attn_type, P=self.P)[0]
-
- # feed forward
- out2 = self.linear2(self.activation(self.linear1(out)))
- out = out + out2
- out = self.norm(out)
-
- return out
-
-
-class Mesh_Sampler(nn.Module):
- ''' Mesh Up/Down-sampling
- '''
- def __init__(self, type='smpl', level=2, device=torch.device('cuda'), option=None):
- super().__init__()
-
- # downsample SMPL mesh and assign part labels
- if type == 'smpl':
- # from https://github.com/nkolot/GraphCMR/blob/master/data/mesh_downsampling.npz
- smpl_mesh_graph = np.load(
- path_config.SMPL_DOWNSAMPLING, allow_pickle=True, encoding='latin1'
- )
-
- A = smpl_mesh_graph['A']
- U = smpl_mesh_graph['U']
- D = smpl_mesh_graph['D'] # shape: (2,)
- elif type == 'mano':
- # from https://github.com/microsoft/MeshGraphormer/blob/main/src/modeling/data/mano_downsampling.npz
- mano_mesh_graph = np.load(
- path_config.MANO_DOWNSAMPLING, allow_pickle=True, encoding='latin1'
- )
-
- A = mano_mesh_graph['A']
- U = mano_mesh_graph['U']
- D = mano_mesh_graph['D'] # shape: (2,)
-
- # downsampling
- ptD = []
- for lv in range(len(D)):
- d = scipy.sparse.coo_matrix(D[lv])
- i = torch.LongTensor(np.array([d.row, d.col]))
- v = torch.FloatTensor(d.data)
- ptD.append(torch.sparse.FloatTensor(i, v, d.shape))
-
- # downsampling mapping from 6890 points to 431 points
- # ptD[0].to_dense() - Size: [1723, 6890] (smpl), [195, 778] (mano)
- # ptD[1].to_dense() - Size: [431, 1723] (smpl), [49, 195] (mano)
- if level == 2:
- Dmap = torch.matmul(ptD[1].to_dense(), ptD[0].to_dense()) # 6890 -> 431
- elif level == 1:
- Dmap = ptD[0].to_dense()
- self.register_buffer('Dmap', Dmap)
-
- # upsampling
- ptU = []
- for lv in range(len(U)):
- d = scipy.sparse.coo_matrix(U[lv])
- i = torch.LongTensor(np.array([d.row, d.col]))
- v = torch.FloatTensor(d.data)
- ptU.append(torch.sparse.FloatTensor(i, v, d.shape))
-
- # upsampling mapping from 431 points to 6890 points
- # ptU[0].to_dense() - Size: [6890, 1723]
- # ptU[1].to_dense() - Size: [1723, 431]
- if level == 2:
- Umap = torch.matmul(ptU[0].to_dense(), ptU[1].to_dense()) # 431 -> 6890
- elif level == 1:
- Umap = ptU[0].to_dense()
- self.register_buffer('Umap', Umap)
-
- def downsample(self, x):
- return torch.matmul(self.Dmap.unsqueeze(0), x) # [B, 431, 3]
-
- def upsample(self, x):
- return torch.matmul(self.Umap.unsqueeze(0), x) # [B, 6890, 3]
-
- def forward(self, x, mode='downsample'):
- if mode == 'downsample':
- return self.downsample(x)
- elif mode == 'upsample':
- return self.upsample(x)
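- # Usage sketch (shapes per the comments above; 'smpl' type assumed): with level=2,
- # downsample maps [B, 6890, 3] vertices to [B, 431, 3] and upsample inverts it, e.g.
- # sampler = Mesh_Sampler(type='smpl', level=2); v431 = sampler(v6890, mode='downsample')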
-
-
-class MAF_Extractor(nn.Module):
- ''' Mesh-aligned Feature Extractor
- As discussed in the paper, we extract mesh-aligned features based on the 2D projection of the mesh vertices.
- The features extracted from spatial feature maps then go through an MLP for dimension reduction.
- '''
- def __init__(
- self, filter_channels, device=torch.device('cuda'), iwp_cam_mode=True, option=None
- ):
- super().__init__()
-
- self.device = device
- self.filters = []
- self.num_views = 1
- self.last_op = nn.ReLU(True)
-
- self.iwp_cam_mode = iwp_cam_mode
-
- for l in range(0, len(filter_channels) - 1):
- if 0 != l:
- self.filters.append(
- nn.Conv1d(filter_channels[l] + filter_channels[0], filter_channels[l + 1], 1)
- )
- else:
- self.filters.append(nn.Conv1d(filter_channels[l], filter_channels[l + 1], 1))
-
- self.add_module("conv%d" % l, self.filters[l])
-
- # downsample SMPL mesh and assign part labels
- # from https://github.com/nkolot/GraphCMR/blob/master/data/mesh_downsampling.npz
- smpl_mesh_graph = np.load(
- path_config.SMPL_DOWNSAMPLING, allow_pickle=True, encoding='latin1'
- )
-
- A = smpl_mesh_graph['A']
- U = smpl_mesh_graph['U']
- D = smpl_mesh_graph['D'] # shape: (2,)
-
- # downsampling
- ptD = []
- for level in range(len(D)):
- d = scipy.sparse.coo_matrix(D[level])
- i = torch.LongTensor(np.array([d.row, d.col]))
- v = torch.FloatTensor(d.data)
- ptD.append(torch.sparse.FloatTensor(i, v, d.shape))
-
- # downsampling mapping from 6890 points to 431 points
- # ptD[0].to_dense() - Size: [1723, 6890]
- # ptD[1].to_dense() - Size: [431, 1723]
- Dmap = torch.matmul(ptD[1].to_dense(), ptD[0].to_dense()) # 6890 -> 431
- self.register_buffer('Dmap', Dmap)
-
- # upsampling
- ptU = []
- for level in range(len(U)):
- d = scipy.sparse.coo_matrix(U[level])
- i = torch.LongTensor(np.array([d.row, d.col]))
- v = torch.FloatTensor(d.data)
- ptU.append(torch.sparse.FloatTensor(i, v, d.shape))
-
- # upsampling mapping from 431 points to 6890 points
- # ptU[0].to_dense() - Size: [6890, 1723]
- # ptU[1].to_dense() - Size: [1723, 431]
- Umap = torch.matmul(ptU[0].to_dense(), ptU[1].to_dense()) # 431 -> 6890
- self.register_buffer('Umap', Umap)
-
- def reduce_dim(self, feature):
- '''
- Dimension reduction by multi-layer perceptrons
- :param feature: list of [B, C_s, N] point-wise features before dimension reduction
- :return: [B, C_p x N] concatenation of point-wise features after dimension reduction
- '''
- y = feature
- tmpy = feature
- for i, f in enumerate(self.filters):
- y = self._modules['conv' + str(i)](y if i == 0 else torch.cat([y, tmpy], 1))
- if i != len(self.filters) - 1:
- y = F.leaky_relu(y)
- if self.num_views > 1 and i == len(self.filters) // 2:
- y = y.view(-1, self.num_views, y.shape[1], y.shape[2]).mean(dim=1)
- tmpy = feature.view(-1, self.num_views, feature.shape[1],
- feature.shape[2]).mean(dim=1)
-
- y = self.last_op(y)
-
- # y = y.view(y.shape[0], -1)
-
- return y
-
- def sampling(self, points, im_feat=None, z_feat=None, add_att=False, reduce_dim=True):
- '''
- Given 2D points, sample the point-wise features for each point,
- the dimension of point-wise features will be reduced from C_s to C_p by MLP.
- Image features should be pre-computed before this call.
- :param points: [B, N, 2] point coordinates, normalized to [-1, 1] as expected by grid_sample
- :im_feat: [B, C_s, H_s, W_s] spatial feature maps
- :return: [B, C_p x N] concatenation of point-wise features after dimension reduction
- '''
- # if im_feat is None:
- # im_feat = self.im_feat
-
- batch_size = im_feat.shape[0]
- point_feat = torch.nn.functional.grid_sample(
- im_feat, points.unsqueeze(2), align_corners=False
- )[..., 0]
-
- if reduce_dim:
- mesh_align_feat = self.reduce_dim(point_feat)
- return mesh_align_feat
- else:
- return point_feat
-
- def forward(self, p, im_feat, cam=None, add_att=False, reduce_dim=True, **kwargs):
- ''' Returns mesh-aligned features for the 3D mesh points.
- Args:
- p (tensor): [B, N_m, 3] mesh vertices
- im_feat (tensor): [B, C_s, H_s, W_s] spatial feature maps
- cam (tensor): [B, 3] camera
- Return:
- mesh_align_feat (tensor): [B, C_p x N_m] mesh-aligned features
- '''
- # if cam is None:
- # cam = self.cam
- p_proj_2d = projection(p, cam, retain_z=False, iwp_mode=self.iwp_cam_mode)
- if self.iwp_cam_mode:
- # Normalize keypoints to [-1,1]
- p_proj_2d = p_proj_2d / (224. / 2.)
- else:
- p_proj_2d = j2d_processing(p_proj_2d, cam['kps_transf'])
- mesh_align_feat = self.sampling(p_proj_2d, im_feat, add_att=add_att, reduce_dim=reduce_dim)
- return mesh_align_feat
diff --git a/spaces/abdvl/datahub_qa_bot/docs/rfc.md b/spaces/abdvl/datahub_qa_bot/docs/rfc.md
deleted file mode 100644
index 92578b76aa643f7488bc33d568f3223f16a6d291..0000000000000000000000000000000000000000
--- a/spaces/abdvl/datahub_qa_bot/docs/rfc.md
+++ /dev/null
@@ -1,123 +0,0 @@
-# DataHub RFC Process
-
-## What is an RFC?
-
-The "RFC" (request for comments) process is intended to provide a consistent and controlled path for new features,
-significant modifications, or any other significant proposal to enter DataHub and its related frameworks.
-
-Many changes, including bug fixes and documentation improvements can be implemented and reviewed via the normal GitHub
-pull request workflow.
-
-Some changes though are "substantial", and we ask that these be put through a bit of a design process and produce a
-consensus among the DataHub core teams.
-
-## The RFC life-cycle
-
-An RFC goes through the following stages:
-
-- *Discussion* (Optional): Create an issue with the "RFC" label to have a more open ended, initial discussion around
-your proposal (useful if you don't have a concrete proposal yet). Consider posting to #rfc in [Slack](./slack.md)
-for more visibility.
-- *Pending*: when the RFC is submitted as a PR. Please add the "RFC" label to the PR.
-- *Active*: when an RFC PR is merged and undergoing implementation.
-- *Landed*: when an RFC's proposed changes are shipped in an actual release.
-- *Rejected*: when an RFC PR is closed without being merged.
-
-[Pending RFC List](https://github.com/datahub-project/rfcs/pulls?q=is%3Apr+is%3Aopen)
-
-## When to follow this process
-
-You need to follow this process if you intend to make "substantial" changes to any components in the DataHub git repo,
-their documentation, or any other projects under the purview of the DataHub core teams. What constitutes a "substantial"
-change is evolving based on community norms, but may include the following:
-
-- A new feature that creates new API surface area, and would require a feature flag if introduced.
-- The removal of features that already shipped as part of the release channel.
-- The introduction of new idiomatic usage or conventions, even if they do not include code changes to DataHub itself.
-
-Some changes do not require an RFC:
-
-- Rephrasing, reorganizing or refactoring
-- Addition or removal of warnings
-- Additions that strictly improve objective, numerical quality criteria (speedup)
-
-If you submit a pull request to implement a new, major feature without going through the RFC process, it may be closed
-with a polite request to submit an RFC first.
-
-## Gathering feedback before submitting
-
-It's often helpful to get feedback on your concept before diving into the level of API design detail required for an
-RFC. You may open an issue on this repo to start a high-level discussion, with the goal of eventually formulating an RFC
-pull request with the specific implementation design. We also highly recommend sharing drafts of RFCs in #rfc on the
-[DataHub Slack](./slack.md) for early feedback.
-
-## The process
-
-In short, to get a major feature added to DataHub, one must first get the RFC merged into the RFC repo as a markdown
-file. At that point the RFC is 'active' and may be implemented with the goal of eventual inclusion into DataHub.
-
-- Fork the [datahub-project/rfcs repository](https://github.com/datahub-project/rfcs).
-- Copy the `000-template.md` template file to `rfc/active/000-my-feature.md`, where `my-feature` is more
-descriptive. Don't assign an RFC number yet.
-- Fill in the RFC. Put care into the details. *RFCs that do not present convincing motivation, demonstrate understanding
-of the impact of the design, or are disingenuous about the drawback or alternatives tend to be poorly-received.*
-- Submit a pull request. As a pull request the RFC will receive design feedback from the larger community, and the
-author should be prepared to revise it in response.
-- Update the pull request to add the number of the PR to the filename and add a link to the PR in the header of the RFC.
-- Build consensus and integrate feedback. RFCs that have broad support are much more likely to make progress than those
-that don't receive any comments.
-- Eventually, the DataHub team will decide whether the RFC is a candidate for inclusion.
-- RFCs that are candidates for inclusion will enter a "final comment period" lasting 7 days. The beginning of this
-period will be signaled with a comment and tag on the pull request. Furthermore, an announcement will be made in the
-#rfc Slack channel for further visibility.
-- An RFC can be modified based upon feedback from the DataHub team and community. Significant modifications may trigger
-a new final comment period.
-- An RFC may be rejected by the DataHub team after public discussion has settled and comments have been made summarizing
-the rationale for rejection. The RFC will enter a "final comment period to close" lasting 7 days. At the end of the "FCP
-to close" period, the PR will be closed.
-- An RFC author may withdraw their own RFC by closing it themselves. Please state the reason for the withdrawal.
-- An RFC may be accepted at the close of its final comment period. A DataHub team member will merge the RFC's associated
-pull request, at which point the RFC will become 'active'.
-
-
-## Details on Active RFCs
-
-Once an RFC becomes active then authors may implement it and submit the feature as a pull request to the DataHub repo.
-Becoming 'active' is not a rubber stamp, and in particular still does not mean the feature will ultimately be merged; it
-does mean that the core team has agreed to it in principle and are amenable to merging it.
-
-Furthermore, the fact that a given RFC has been accepted and is 'active' implies nothing about what priority is assigned
-to its implementation, nor whether anybody is currently working on it.
-
-Modifications to active RFCs can be done in follow-up PRs. We strive to write each RFC in a manner that reflects
-the final design of the feature; but the nature of the process means that we cannot expect every merged RFC to actually
-reflect what the end result will be at the time of the next major release; therefore we try to keep each RFC document
-somewhat in sync with the feature as planned, tracking such changes via follow-up pull requests to the document.
-
-## Implementing an RFC
-
-The author of an RFC is not obligated to implement it. Of course, the RFC author (like any other developer) is welcome
-to post an implementation for review after the RFC has been accepted.
-
-An active RFC should have the link to the implementation PR(s) listed, if there are any. Feedback to the actual
-implementation should be conducted in the implementation PR instead of the original RFC PR.
-
-If you are interested in working on the implementation for an 'active' RFC, but cannot determine if someone else is
-already working on it, feel free to ask (e.g. by leaving a comment on the associated issue).
-
-## Implemented RFCs
-
-Once an RFC has finally been implemented, first off, congratulations! And thank you for your contribution! Second, to
-help track the status of the RFC, please make one final PR to move the RFC from `rfc/active` to
-`rfc/finished`.
-
-## Reviewing RFCs
-
-Most of the DataHub team will attempt to review some set of open RFC pull requests on a regular basis. If a DataHub
-team member believes an RFC PR is ready to be accepted into active status, they can approve the PR using GitHub's
-review feature to signal their approval of the RFC.
-
-
-
-*DataHub's RFC process is inspired by many others, including [Vue.js](https://github.com/vuejs/rfcs) and
-[Ember](https://github.com/emberjs/rfcs).*
diff --git a/spaces/abhishek/sketch-to-image/annotator/uniformer/mmdet/datasets/samplers/distributed_sampler.py b/spaces/abhishek/sketch-to-image/annotator/uniformer/mmdet/datasets/samplers/distributed_sampler.py
deleted file mode 100644
index cc61019484655ee2829f7908dc442caa20cf1d54..0000000000000000000000000000000000000000
--- a/spaces/abhishek/sketch-to-image/annotator/uniformer/mmdet/datasets/samplers/distributed_sampler.py
+++ /dev/null
@@ -1,39 +0,0 @@
-import math
-
-import torch
-from torch.utils.data import DistributedSampler as _DistributedSampler
-
-
-class DistributedSampler(_DistributedSampler):
-
- def __init__(self,
- dataset,
- num_replicas=None,
- rank=None,
- shuffle=True,
- seed=0):
- super().__init__(
- dataset, num_replicas=num_replicas, rank=rank, shuffle=shuffle)
- # for compatibility with PyTorch 1.3+
- self.seed = seed if seed is not None else 0
-
- def __iter__(self):
- # deterministically shuffle based on epoch
- if self.shuffle:
- g = torch.Generator()
- g.manual_seed(self.epoch + self.seed)
- indices = torch.randperm(len(self.dataset), generator=g).tolist()
- else:
- indices = torch.arange(len(self.dataset)).tolist()
-
- # add extra samples to make it evenly divisible
- # in case indices is shorter than half of total_size
- indices = (indices *
- math.ceil(self.total_size / len(indices)))[:self.total_size]
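- # e.g. len(indices) == 6, total_size == 10: repeat ceil(10/6) = 2 times -> 12 entries, then slice to 10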
- assert len(indices) == self.total_size
-
- # subsample
- indices = indices[self.rank:self.total_size:self.num_replicas]
- assert len(indices) == self.num_samples
-
- return iter(indices)
diff --git a/spaces/ai-guru/composer/app.py b/spaces/ai-guru/composer/app.py
deleted file mode 100644
index b9f2fdab0bf04b9e15860afcd531fdbef94494c0..0000000000000000000000000000000000000000
--- a/spaces/ai-guru/composer/app.py
+++ /dev/null
@@ -1,276 +0,0 @@
-# Copyright 2022 Tristan Behrens.
-#
-# Licensed under the Apache License, Version 2.0 (the "License");
-# you may not use this file except in compliance with the License.
-# You may obtain a copy of the License at
-#
-# http://www.apache.org/licenses/LICENSE-2.0
-#
-# Unless required by applicable law or agreed to in writing, software
-# distributed under the License is distributed on an "AS IS" BASIS,
-# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
-# See the License for the specific language governing permissions and
-# limitations under the License.
-
-# Lint as: python3
-
-from fastapi import BackgroundTasks, FastAPI
-from fastapi.staticfiles import StaticFiles
-from fastapi.responses import FileResponse
-from pydantic import BaseModel
-from PIL import Image
-import os
-import io
-import random
-import base64
-from time import time
-from statistics import mean
-from collections import OrderedDict
-import torch
-import wave
-from source.logging import create_logger
-from source.tokensequence import token_sequence_to_audio, token_sequence_to_image
-from source import constants
-from transformers import AutoTokenizer, AutoModelForCausalLM
-
-logger = create_logger(__name__)
-
-# Load the auth token from the "authtoken" environment variable.
-auth_token = os.getenv("authtoken")
-
-# Loading the model and its tokenizer.
-logger.info("Loading tokenizer and model...")
-tokenizer = AutoTokenizer.from_pretrained(
- "ai-guru/lakhclean_mmmtrack_4bars_d-2048"
-)
-model = AutoModelForCausalLM.from_pretrained(
- "ai-guru/lakhclean_mmmtrack_4bars_d-2048"
-)
-logger.info("Done.")
-
-
-# Create the app
-logger.info("Creating app...")
-app = FastAPI(docs_url=None, redoc_url=None)
-app.mount("/static", StaticFiles(directory="static"), name="static")
-logger.info("Done.")
-
-
-class Options(BaseModel):
- music_style: str
- density: str
- temperature: str
-
-
-class NewTask(BaseModel):
- music_style = "synth"
- density = "medium"
- temperature = "medium"
-
-
-def get_place_in_queue(task_id):
- queued_tasks = list(
- task
- for task in tasks.values()
- if task["status"] == "queued" or task["status"] == "processing"
- )
-
- queued_tasks.sort(key=lambda task: task["created_at"])
-
- queued_task_ids = list(task["task_id"] for task in queued_tasks)
-
- try:
- return queued_task_ids.index(task_id) + 1
- except ValueError: # task is not queued (already processing or finished)
- return 0
-
-
-def calculate_eta(task_id):
- total_durations = list(
- task["completed_at"] - task["started_at"]
- for task in tasks.values()
- if "completed_at" in task and task["status"] == "completed"
- )
-
- initial_place_in_queue = tasks[task_id]["initial_place_in_queue"]
-
- if len(total_durations):
- eta = initial_place_in_queue * mean(total_durations)
- else:
- eta = initial_place_in_queue * 35
-
- return round(eta, 1)
-
-
-def next_task(task_id):
- tasks[task_id]["completed_at"] = time()
-
- queued_tasks = list(task for task in tasks.values() if task["status"] == "queued")
-
- if queued_tasks:
- print(
- f"{task_id} {tasks[task_id]['status']}. Task/s remaining: {len(queued_tasks)}"
- )
- process_task(queued_tasks[0]["task_id"])
-
-
-def process_task(task_id):
- if "processing" in list(task["status"] for task in tasks.values()):
- return
-
- if tasks[task_id]["last_poll"] and time() - tasks[task_id]["last_poll"] > 30:
- tasks[task_id]["status"] = "abandoned"
- next_task(task_id)
-
- tasks[task_id]["status"] = "processing"
- tasks[task_id]["started_at"] = time()
- print(f"Processing {task_id}")
-
- try:
- tasks[task_id]["output"] = compose(
- tasks[task_id]["music_style"],
- tasks[task_id]["density"],
- tasks[task_id]["temperature"],
- )
- except Exception as ex:
- tasks[task_id]["status"] = "failed"
- tasks[task_id]["error"] = repr(ex)
- else:
- tasks[task_id]["status"] = "completed"
- finally:
- next_task(task_id)
-
-
-def compose(music_style, density, temperature):
- instruments = constants.get_instruments(music_style)
- density = constants.get_density(density)
- temperature = constants.get_temperature(temperature)
- print(f"instruments: {instruments} density: {density} temperature: {temperature}")
-
- # Generate with the given parameters.
- logger.info(f"Generating token sequence...")
- generated_sequence = generate_sequence(instruments, density, temperature)
- logger.info(f"Generated token sequence: {generated_sequence}")
-
- # Get the audio data as a array of int16.
- logger.info("Generating audio...")
- sample_rate, audio_data = token_sequence_to_audio(generated_sequence)
- logger.info(f"Done. Audio data: {len(audio_data)}")
-
- # Encode the audio-data as wave file in memory. Use the wave module.
- audio_data_bytes = io.BytesIO()
- wave_file = wave.open(audio_data_bytes, "wb")
- wave_file.setframerate(sample_rate)
- wave_file.setnchannels(1)
- wave_file.setsampwidth(2)
- wave_file.writeframes(audio_data)
- wave_file.close()
-
- # Return the audio-data as a base64-encoded string.
- audio_data_bytes.seek(0)
- audio_data_base64 = base64.b64encode(audio_data_bytes.read()).decode("utf-8")
- audio_data_bytes.close()
-
- # Convert the audio data to an PIL image.
- image = token_sequence_to_image(generated_sequence)
-
- # Save PIL image to harddrive as PNG.
- logger.debug(f"Saving image to harddrive... {type(image)}")
- image_file_name = "compose.png"
- image.save(image_file_name, "PNG")
-
- # Save image to virtual file.
- img_io = io.BytesIO()
- image.save(img_io, "PNG", quality=70)
- img_io.seek(0)
-
- # Return the image as a base64-encoded string.
- image_data_base64 = base64.b64encode(img_io.read()).decode("utf-8")
- img_io.close()
-
- # Return.
- return {
- "tokens": generated_sequence,
- "audio": "data:audio/wav;base64," + audio_data_base64,
- "image": "data:image/png;base64," + image_data_base64,
- "status": "OK",
- }
-
-
-def generate_sequence(instruments, density, temperature):
- instruments = instruments[::]
- random.shuffle(instruments)
-
- generated_ids = tokenizer.encode("PIECE_START", return_tensors="pt")[0]
-
- for instrument in instruments:
- more_ids = tokenizer.encode(
- f"TRACK_START INST={instrument} DENSITY={density}", return_tensors="pt"
- )[0]
- generated_ids = torch.cat((generated_ids, more_ids))
- generated_ids = generated_ids.unsqueeze(0)
-
- generated_ids = model.generate(
- generated_ids,
- max_length=2048,
- do_sample=True,
- temperature=temperature,
- eos_token_id=tokenizer.encode("TRACK_END")[0],
- )[0]
-
- generated_sequence = tokenizer.decode(generated_ids)
- print("GENERATING COMPLETE")
- print(generated_sequence)
- return generated_sequence
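-
-# Prompt grammar used by generate_sequence (mirroring the encode calls above): a piece
-# opens with PIECE_START, each track is seeded with "TRACK_START INST=<instrument> DENSITY=<density>",
-# and generation for a track stops at the TRACK_END token.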
-
-
-tasks = OrderedDict()
-
-# Route for the loading page.
-@app.head("/")
-@app.route("/")
-def index(request):
- return FileResponse(path="static/index.html", media_type="text/html")
-
-
-@app.post("/task/create")
-def create_task(background_tasks: BackgroundTasks, new_task: NewTask):
- created_at = time()
-
- task_id = f"{str(created_at)}_{new_task.music_style}"
-
- tasks[task_id] = OrderedDict(
- {
- "task_id": task_id,
- "status": "queued",
- "eta": None,
- "created_at": created_at,
- "started_at": None,
- "completed_at": None,
- "last_poll": None,
- "poll_count": 0,
- "initial_place_in_queue": None,
- "place_in_queue": None,
- "music_style": new_task.music_style,
- "density": new_task.density,
- "temperature": new_task.temperature,
- "output": None,
- }
- )
-
- tasks[task_id]["initial_place_in_queue"] = get_place_in_queue(task_id)
- tasks[task_id]["eta"] = calculate_eta(task_id)
-
- background_tasks.add_task(process_task, task_id)
-
- return tasks[task_id]
-
-
-@app.get("/task/poll")
-def poll_task(task_id: str):
- tasks[task_id]["place_in_queue"] = get_place_in_queue(task_id)
- tasks[task_id]["eta"] = calculate_eta(task_id)
- tasks[task_id]["last_poll"] = time()
- tasks[task_id]["poll_count"] += 1
-
- return tasks[task_id]
diff --git a/spaces/akhaliq/MT3/README.md b/spaces/akhaliq/MT3/README.md
deleted file mode 100644
index 19c9c8b4542f945e1cc5e9d4c7768e60616c78c7..0000000000000000000000000000000000000000
--- a/spaces/akhaliq/MT3/README.md
+++ /dev/null
@@ -1,37 +0,0 @@
----
-title: MT3
-emoji: 🦀
-colorFrom: red
-colorTo: green
-sdk: gradio
-app_file: app.py
-pinned: false
----
-
-# Configuration
-
-`title`: _string_
-Display title for the Space
-
-`emoji`: _string_
-Space emoji (emoji-only character allowed)
-
-`colorFrom`: _string_
-Color for Thumbnail gradient (red, yellow, green, blue, indigo, purple, pink, gray)
-
-`colorTo`: _string_
-Color for Thumbnail gradient (red, yellow, green, blue, indigo, purple, pink, gray)
-
-`sdk`: _string_
-Can be either `gradio` or `streamlit`
-
-`sdk_version` : _string_
-Only applicable for `streamlit` SDK.
-See [doc](https://hf.co/docs/hub/spaces) for more info on supported versions.
-
-`app_file`: _string_
-Path to your main application file (which contains either `gradio` or `streamlit` Python code).
-Path is relative to the root of the repository.
-
-`pinned`: _boolean_
-Whether the Space stays on top of your list.
diff --git a/spaces/akshatsanghvi/Rice-Disease-Classifier/app.py b/spaces/akshatsanghvi/Rice-Disease-Classifier/app.py
deleted file mode 100644
index 40210dfd782ee5a697501a409445380485e398db..0000000000000000000000000000000000000000
--- a/spaces/akshatsanghvi/Rice-Disease-Classifier/app.py
+++ /dev/null
@@ -1,67 +0,0 @@
-import streamlit as st
-from tensorflow import image
-from keras import models
-import numpy as np
-from PIL import Image
-import pandas as pd
-
-st.title("Rice Disease Classifier 🌾")
-
-desc = pd.read_csv("files/description.csv")
-model = models.load_model("models/0.3/model.h5")
-
-dis = list(desc.disease.values)
-
-def image_classifier(inp):
- try:
- inp = image.resize(inp, (256, 256))
- inp = np.expand_dims(inp, 0)
- pred = model.predict(inp)
- return dis[np.argmax(pred)], f"Confidence - {round(max(pred[0]) * 100, 2)}%"
- except:
- return "Healthy", "Confidence - 0%" # fall back when no valid image is provided
-
-def detail(pro):
- x = desc[desc["disease"]==pro]
- return list(x["hindi"])[0], list(x["desc"])[0], list(x["hndesc"])[0], list(x["pre"])[0], list(x["hnpre"])[0]
-
-
-cho = st.file_uploader("Upload Image From Gallery", type=['png','jpg','jpeg','webp'])
-img = ""
-
-if cho is not None:
- img = Image.open(cho)
-
-st.write("or")
-if st.button("Open Camera"):
- cam = st.camera_input("Take image")
- if cam is not None:
- img = Image.open(cam)
-
-
-if st.button("Detect"):
- col1,col2,col3 = st.columns(3)
- pro, conf = image_classifier(img)
- hin, des, hnd, pre, hnp = detail(pro)
- try:
- with col2:
- st.image(img)
- st.write("\n\n")
- st.header(pro)
- st.subheader(f"({hin})")
- st.subheader(conf)
- st.write("\n\n\n\n")
-
- st.subheader(f"Description :")
- st.write(des)
- st.write("\n\n")
- st.write(hnd)
- st.write("\n\n\n")
-
- st.subheader(f"Precautions :")
- st.write(pre)
- st.write("\n\n")
- st.write(hnp)
- except:
- with col2:
- st.subheader(":red[Enter Valid Input]")
diff --git a/spaces/alamin655/websurfx/public/templates/bar.html b/spaces/alamin655/websurfx/public/templates/bar.html
deleted file mode 100644
index 489b0756609e5d5bfc2ef0a8904eb19e740de996..0000000000000000000000000000000000000000
--- a/spaces/alamin655/websurfx/public/templates/bar.html
+++ /dev/null
@@ -1,3 +0,0 @@
-